January 18, 2017

Has Anyone In The EEC Read Asimov?

John recalls the following common knowledge:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The ‘laws’ were postulated by one Isaac Asimov back in 1942. Over the years they have served us well. And then god damn it - we went and started actually developing robots in real life.

Turns out that those three laws are not sufficient. Well, at least according to The Guardian. We now have chaps - not these two chaps, I hasten to add … we are talking other chaps … over in the EU … who are taking it all a step further.

Turns out that as they push for ‘UBI’ (that is, universal basic income for humans), there is a parallel exercise going on to provide ‘human rights’ for robots.

Wondering if that includes UBI for all robots - and whether the robots will be paying tax - like all other humans do.

But putting flippancy aside for the moment … consider this conundrum …

If I create a robot, and that robot creates something that could be patented, should I own that patent or should the robot? If I sell the robot, should the intellectual property it has developed go with it?

And trust me, this is fast becoming non-hypothetical. For example, did you know that Google’s AI translation tool realised that we poor humans were doing such a bad job at programming that it went off and wrote its own language to improve its own translation abilities? No? Neither did I! But it did.

Graham looks to the (speculative) future …

… also a good way of avoiding engaging directly with the real question, yes? But no. A little explanation: before he moved to the USA unexpectedly, some (mumble mumble) years ago, Graham’s knowledge of the old colony came mainly from speculative fiction and satire … largely, the frequently scabrous but surprisingly predictive National Lampoon. Which proved remarkably useful. After all, in such forms you can only ride the Zeitgeist, surf the waves of top-of-mind concerns. Skip from those to the wonky detail stuff, and satire dies … as does thinly disguised social criticism (aka speculative fiction).

Which brings us, finally, to AIs. Perhaps some idea of possible solutions to the AI-human problems we have not yet fully defined can be gleaned from fiction (especially comic fiction) that treats as matter-of-fact a world where AIs and humans co-exist, though not necessarily comfortably. What might the relative roles and functions look like? What level of interaction would there be, and how would they “use” each other?

To that end, Graham draws the reader’s attention to one of his favorite daily comics, Questionablecontent.net, which does indeed consider those issues, with particular emphasis on relationships.

So, how do you comfort your android friend, a former (and now retired) feared warrior, now dealing with buried traumas far more powerful and dangerous than PTSD?

Also a crash course in AI technology and ethics, if you’re interested.

John is always thankful …

… for finding even more feeds to add to his morning news. No, really. He is.

