July 8, 2015

Self-Referential AI

It all started with a group email that I received …

“I found this list of 2015 Project Grants in AI interesting, not least because of the VRM angle some of the projects might have.”

… and then provided a link to a pile of people, all of whom have grants in the world of AI.

Then another person followed that link and read some of the bios and synopses (I assume only some, since there were a lot, and none of us has time to read everything). One thing she did was extract this, commenting on how worrisome it is:

Humans take great pride in being the only creatures who make moral judgments, even though their moral judgments often suffer from serious flaws. Some AI systems do generate decisions based on their consequences, but consequences are not all there is to morality. Moral judgments are also affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems. Our goal is to do just that. Our team plans to combine methods from computer science, philosophy, and psychology in order to construct an AI system that is capable of making plausible moral judgments and decisions in realistic scenarios. …

And almost by return came …

Not to worry. Their AI said this is all perfectly OK.



