Self-Referential AI
It all started with a group email that I received …
“I found this list of 2015 Project Grants in AI interesting, not least because of the VRM angle some of the projects might have.”
… and then the sender provided a link to a long list of people who all hold grants in the world of AI.
Then another person followed that link and read some of the bios and synopses (I assume only some, since there were a lot and none of us have time to read everything). One thing she did was extract this, commenting on how worrisome it is:
Humans take great pride in being the only creatures who make moral judgments, even though their moral judgments often suffer from serious flaws. Some AI systems do generate decisions based on their consequences, but consequences are not all there is to morality. Moral judgments are also affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems. Our goal is to do just that. Our team plans to combine methods from computer science, philosophy, and psychology in order to construct an AI system that is capable of making plausible moral judgments and decisions in realistic scenarios. ….
And almost by return mail came …
Not to worry. Their AI said this is all perfectly OK.