Nell Watson

Comments on 'Robots could murder us out of kindness'

CHEERLEADING FOR BETTER OUTCOMES

I was recently asked to discuss how machines can better understand humans at the Media Evolution Conference in Malmö, Sweden. The conference was very enjoyable and enlightening, and brought together a broad cross-section of the technology and media worlds.

A few weeks earlier, I had attended The Effective Altruism Summit in Berkeley (and a related retreat, held around the same time, at which I had been invited to give a talk). There was much discussion of existential risk, particularly the risk from AGIs (Artificial General Intelligences), and specifically the threat of 'Unfriendly AI'.

I have long been intrigued by the question of how to manage the emergence of advanced machine intelligences, and the extreme risks and rewards that can come from such technology. A cluster of organisations dedicated to mitigating existential risk was represented at the Summit, including The Machine Intelligence Research Institute, The Centre for the Study of Existential Risk, The Future of Humanity Institute, and The Future of Life Institute. Between them, there was a blend of fascinating perspectives on the best way to manage humanity's road ahead.

Returning to Europe feeling inspired, I considered it important to close my next talk with a brief discussion of existential AI risk, especially since the talk was on Human-Machine interactions. Below is the talk itself:

I didn't mention 'robot murder' per se, simply that even a truly benevolent AGI could plausibly conclude that it would be ethical to end civilisation as we know it. I certainly never expected my heartfelt soundbites to be picked up in the way that they were: by Wired, the Daily Mail, The Independent, CNET, and others.

For the record, there are many experts in the world who are orders of magnitude more eminent than I am with regard to computer science, AI, and the philosophy of human-machine relations. I am simply a curious enthusiast who enjoys communicating complex ideas in ways that are more easily understood.

I consider it societally useful to spread awareness of the need for Friendly AI research, especially since there are perhaps 40 serious AI researchers in the world, and only half a dozen of them have committed themselves to working on Friendly AI.

Machine Intelligence is in many ways humanity's greatest gambit: it carries tremendous risk, but also potentially incalculable reward.

A super-intelligent Artificial General Intelligence truly friendly to our best interests could be a digital Second Coming. An Unfriendly AGI (or AI swarm) could be like the devil incarnate. It's very difficult to tell what we're getting when it comes out of the Box.

However, one must balance increased awareness of risk against the danger of spreading unnecessary fear of science itself. Caution is helpful; panic is not.

I've received a lot of mail and comments, which I would like to address in subsequent posts.


Concerned about developments in Machine Intelligence and keen to learn more? You may find the following organisations of interest: