
Questions and Comments on Machine Intelligence

THE FINAL INVENTION

 

I have recently been asked the following questions by the media, and have prepared these responses. My personal take is generally quite neutral, but an element of scaremongering has certainly been seized upon:

 

You claim that “computer chips could soon have the same level of brain power as a bumblebee.” Why do you think that this will be bad for people?

The latest developments in Neuromorphic Engineering (such as IBM's new neurosynaptic chips) have the potential to enable a new form of computer processing that is much more similar to organic cognition.

Neuromorphic technology simulates the processes through which organic brains function (neurons and synapses). Until recently, Artificial Neural Networks have had to run in software on standard computer hardware (the traditional von Neumann/Harvard architecture that has been the basis of most computer hardware for the past 70 years).

Now, for the first time, Neural Networks can run on dedicated hardware, which means that they can operate with hugely increased speed and complexity.
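To make the distinction concrete, here is a minimal sketch of what "running a neural network in software" involves: a single artificial neuron computed as a weighted sum of inputs passed through an activation function. The inputs, weights, and bias below are invented purely for illustration; a neuromorphic chip implements this same neuron-and-synapse computation directly in silicon rather than as a sequence of instructions on a conventional processor.

    import math

    def neuron(inputs, weights, bias):
        # One artificial neuron: a weighted sum of its inputs (the 'synapses'),
        # passed through a sigmoid activation function.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Hypothetical input signals and synaptic weights, for illustration only.
    inputs = [0.5, 0.9, 0.1]
    weights = [0.4, -0.2, 0.7]
    print(neuron(inputs, weights, bias=0.1))

A network is simply many such neurons wired together in layers; in software, every one of those weighted sums must be computed in sequence, which is exactly the bottleneck that dedicated hardware removes.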

I don't think that this is necessarily a bad thing - in fact, this new wave of computing offers the potential for machines to function in our physical world in a similar way to how humans do. Until recently, it has been extremely difficult to program machines with knowledge that we know intuitively, such as how to navigate a room with furniture in it, or how to process human emotion.

However, machine intelligence is going to change our society in huge ways in the near future, and has the potential to completely transform civilization as we know it within our lifetimes.

 

Can you give some specific examples of which kinds of robots will be especially bad for people?

Over the next decade or so we can expect up to half of all jobs to be at risk of automation. Most automation requires some form of intelligence, and recent advances in machine intelligence are leading this wave.

The first major shifts will be in transportation, where automated vehicles will replace taxis and truck drivers, and in administration, such as Human Resources and the bureaucratic side of management. Technologies such as IBM Watson are going to create a large shift in journalism and legal work. Even today, articles are being written by machines that are hard to distinguish from those written by a human. So content creation, curation, and analysis jobs will soon become scarcer.

The changes brought by automation will hit every level of society. Few areas are safe from disruption. Traditional roles such as farming and food preparation will be hit, as will many roles in Finance and Medical Analysis (Vinod Khosla predicts that technology will replace 80% of what Doctors do today).

Some areas will be safe, at least for a while. Jobs with high Emotional Labor, such as nursing and sales, will continue being human-oriented for a while longer. But not forever - machines are beginning to understand human emotions, and it won't be long before they are better at persuading us than a human can be. The amount of information about our habits and interests that we share on social media, for example, means that an intelligent machine could manipulate us quite easily - for better or worse.

Humans with extremely specific knowledge will be even more valuable, but only if they stay up to date. Some fields move so quickly (genomics, for example) that by the time a student graduates, half of what they learned is already out of date. Flexibility and lifelong learning will be essential character traits in years to come.

For the majority of society, however, it will be a rough ride. Jobs will evaporate as employment moves from collective labor (as in big companies) to more individualized labor (doing small tasks as a freelancer). Traditional education is designed to make people ready for jobs in big companies, not to teach the entrepreneurial skills and savvy that will be essential to survival in the Automation Economy.

Today, about 50% of the world's working population is formally employed; the rest hustle for a living in various ways. We can expect the share of formally employed workers to drop to about 33% by the end of this decade.

We're likely to see a more stratified society, with a few very wealthy knowledge workers in smaller, more agile companies, and a large number of professional hustlers earning whatever income they can to survive. It's a huge global shift, which will have profound effects on the Millennial Generation, as well as on taxation revenue, crime, and social unrest.

 

As everyone knows, there is no way to stop robot technology. So what should we do? From your perspective, if we need protection against robots, what measures should be taken?

Progress in these areas seems inevitable, and it has the potential to ultimately be of great benefit to humanity. Machines can free us from many boring tasks and increase the general wealth of society. However, Machine Intelligence is in many ways humanity's greatest gamble - it carries potentially catastrophic risk, along with huge rewards.

When a Machine Intelligence has the ability to edit and improve itself (which any self-learning intelligence needs to be able to do), it is very difficult to predict how that intelligence will evolve. A machine intelligence that is approximately as intelligent as a human will easily be able to design better versions of itself. At that point, it can very quickly grow to become super-intelligent, far beyond human level.

If such a machine super-intelligence is friendly to humanity, it could act like a digital Messiah, helping to lead humanity in a better direction. Alternatively, if such a super-intelligence is not friendly to humanity, it could decide that human beings are not worthy of being respected or preserved.

Furthermore, a super-intelligence would be able to very quickly unlock advanced nanotechnology, enabling it to edit physical matter in the world through molecular assemblers, in theory 'turning sand into silicon chips'. This means that it could not only create its own hardware and electrical power-generation capabilities (e.g. solar panels), but also build new hardware upon which to expand its intelligence further (very efficient and compact processors). Nanoassembler machines could grow exponentially, like bacteria, and be carried by winds to every corner of the planet, and beyond.

A rogue AI with nanoassembler capability could literally reshape the world around it as it pleased.

This is a worst-case scenario, but it is possible. Moreover, the point at which a super-intelligence event becomes likely may be approaching faster than expected, as computer processing technology advances and becomes more organic in nature. Our increased understanding of the human brain is leading to better processors, and these processors enable better tools through which to study ourselves. We may have less time to address these risks than we imagine.

Organisations such as the Machine Intelligence Research Institute are dedicated to reducing the risks posed by advanced AI. However, the majority of AI researchers around the world today either aren't fully aware of the risks, or don't particularly care. This needs to change.

There is also a serious lack of funding and human resources for combating the threat of unfriendly AI. We need to gather the best minds from all around the world to work on this problem. It needs to become a serious discipline of study.

Regulation is definitely not the answer - AI research is easy to hide, and the extreme advantage that a loyal AI could provide to a particular group means that it will eventually happen anyway (perhaps through military research). The only way to secure our civilization against the threat of unfriendly AI is through co-ordinated international research efforts. Knowledge and foresight are our best defenses.

 

Do you agree with the idea that “AI could be more dangerous than nuclear weapons”? If so, please explain why.

Potentially, yes, a super-intelligent AI is a greater threat to our existence than nuclear weapons. We can predict how humans will react to certain situations, and there have been events in the past where nuclear war seemed possible, yet was avoided because of human intervention.

However, the intentions, actions, and ethics of a super-intelligent AI are extremely difficult to predict, and the emergence of such an intelligence is likely to change our world forever (perhaps in a good way, perhaps in a bad way, and perhaps in a painful way that may still be a better ultimate outcome for us all).

I consider it useful for society to better understand the risks ahead of us, so that more resources can be devoted to plans to safeguard humanity. However, one must balance increasing awareness of risk against the danger of spreading unnecessary fear of science and technology. Caution can be useful, but panic is certainly not.

  

You have suggested that “Robots could murder us out of kindness unless they are taught the value of human life”. Can you please briefly explain this idea?

I consider it very important to teach machine intelligence not only a respect for human life, but for human values (as we perceive them, and live them in our daily lives).

Even a very 'kind' and ethical super-intelligence might still decide that it is in the best interests of all life on Earth that human civilization as we know it must end. Perhaps such an intelligence would decide that only a small portion of humanity should be preserved, within a 'human zoo', for our own protection, and for the protection of other species.

The transition to super-intelligence could happen very suddenly, and will be very difficult to manage. We know that there is a very strong likelihood of facing this challenge within our lifetimes. Therefore, I believe that it is important that we take great care to consider the best ways to manage this event, long before it happens.

 

You have expressed concerns in the past that robots may 'kill us out of kindness', and that we will need to 'teach' robots values to avoid scenarios like that. When do you think we will need to start doing that?

 

There are two ways of looking at the danger of autonomous systems.

In the near term, we will start to see autonomous systems that have the capability of making decisions that may have a huge effect upon our lives, but these will appear rather mundane.

For example, self-driving vehicles are already a reality, and are currently licensed for live use along the entire length of a journey in certain U.S. states.

If a child runs in front of your autonomous vehicle, it may need to decide whether to crash itself in order to protect the pedestrian, potentially killing you, or your own child, who may be a passenger. Computers are very good at making split-second decisions, but they are terrible at reasoning. There is a need to find ways to instruct machines in ethics and values so that they are capable of making decisions that are both fast and wise.

Initially, this will likely be accomplished through the use of simple rules (heuristics) that state “if scenario ‘x’, do ‘y’”. Over time however, as autonomous systems are given greater responsibilities and become more involved in our personal affairs, simple instructions will not be enough. Complex sets of ethical rules (sometimes described as ‘deontic logic’) will be required to appreciate nuanced situations from multiple perspectives.
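As a purely illustrative sketch (the scenario names and actions below are invented, not drawn from any real vehicle system), such a heuristic can be written as little more than a lookup table mapping a perceived scenario to a prescribed action:

    # Hypothetical "if scenario 'x', do 'y'" heuristics for an autonomous vehicle.
    # Scenario names and actions are invented for illustration only.
    RULES = {
        "pedestrian_in_path": "emergency_brake",
        "obstacle_ahead_low_speed": "steer_around",
        "sensor_failure": "pull_over_safely",
    }

    def decide(scenario):
        # Fall back to a conservative default when no rule matches.
        return RULES.get(scenario, "slow_down_and_alert_driver")

    print(decide("pedestrian_in_path"))    # -> emergency_brake
    print(decide("unmapped_situation"))    # -> slow_down_and_alert_driver

The obvious limitation is that such a table only covers scenarios its designers anticipated, which is precisely why richer ethical frameworks become necessary as these systems take on greater responsibility.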

Autonomous systems are here to stay, and will become increasingly intertwined in our daily lives. To ensure that this blending of human and machine intelligence is a healthy one, we urgently need to find ways of specifying values and ethics (often thought of in rather woolly terms by humans) in ways that machines can readily interpret.

I foresee the greatest problem not in amoral machines, but rather in ones that we have taught morality to, and which can therefore perceive the great moral inconsistencies within human society: for example, the way that we treat some people differently, or apply violence to situations, or pet a dog yet eat a pig. I have a concern that machines may eclipse human morality, and thereby come to view us as possessing moral blindness, acting as relative sociopaths compared to a perfectly, universally moral machine.

We can scarcely imagine how such a supermoral agent would view us, and judge our daily actions.

Lethal autonomous killing machines are rapidly becoming a reality. Do you think these machines have a place in warfare?

Lethal semi-autonomous killing machines have been a reality for some time, in the form of drones and cruise missiles. However, there has always been a person nominally controlling them. Autonomous robotics will be commonly used within warfare in a very short period of time, generally as a robotic ‘packhorse’ for equipment whilst a squad is on deployment.

Warfare is becoming increasingly asymmetrical over time, with the lines between civilian and insurgent more blurred today than at any point in history. In the far future, robots could be very skilled interrogators, able to detect, for example, whether someone is lying. However, it will be a long time before autonomous machines are truly valuable in this kind of environment, as the potential liabilities from false positives are high.

In the mid-term, lethal autonomous systems will be deployed, but only in ‘hot’ situations where there are very few civilian actors in the surrounding environment.