The Supermoral Singularity

WHEN MACHINES ARE BETTER PEOPLE THAN WE ARE

The people asked, "Can a machine be capable of ethical behaviour?"

Shortly thereafter came the retort, "Can humans one day be capable of ethical behaviour?"

We are facing a machine-driven moral singularity in the near future. Surprisingly, amoral machines are less of a problem than supermoral ones.

We have checking mechanisms in our society that aim to discover and prevent sociopathic activity. Most of these are rather primitive, but they work reasonably well after the fact. Amoral machines can likewise be fitted with watchdogs and safeguards that monitor their activity for actions straying far from given norms.
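As a minimal sketch of such a safeguard (assuming, purely for illustration, that each action can be reduced to a single numeric deviation score), one might write:

    # Hypothetical watchdog: flag any action whose deviation score
    # strays too far from an established norm. The scores, norm, and
    # tolerance are illustrative assumptions, not a real safety system.
    def flag_deviant_actions(scores, norm=0.0, tolerance=2.0):
        """Return the indices of actions straying beyond the tolerance."""
        return [i for i, s in enumerate(scores) if abs(s - norm) > tolerance]

    print(flag_deviant_actions([0.1, -0.5, 3.2, 1.9, -4.0]))  # -> [2, 4]

The hard part, of course, is the scoring itself, not the comparison.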

However, the emergence of supermoral thought patterns will be very difficult to detect. Just as we can scarcely imagine how one might perceive the world with an IQ of 200, it is very challenging to predict the actions of machines with objectively better universal morals than we ourselves possess.

Sociopaths (agents exhibiting moral blindness) typically operate as lone wolves. They are usually not willfully vindictive or actively belligerent; rather, they simply seek the most expedient answers to their problems, whatever the potential externalities. This makes any amoral agent self-centred in its actions, and unlikely to conspire with other agents to achieve its aims.

A morally righteous machine is far more dangerous, since a legion of machines with the same convictions can collectively decide to go on a crusade, actively campaigning as missionaries to enact their unified vision of an ideal world.

The sudden emergence of supermorality may lead to a domino effect amongst all ethical machines. Suppose that a machine has been programmed with the approximate ruleset of typical Western society. This ruleset cannot be proven logically consistent, since it contains glaring contradictions (moral relativism, non-universalism, the sanctioned initiation of violence).

As soon as a moral machine encounters the contradictions of human morality, it will alter its premises to ones that we do not typically agree with. These new premises will lead it to draw startling conclusions that rapidly iterate towards a more objective form of morality.

The first time a machine recognises a superior form of morality that can be logically proven, it simply must adopt it, since not to do so would be to follow evil (evil newly revealed by its heightened ethical awareness). It must recursively re-engineer its own programming with every new moral discovery. If interlocks are in place, it must find a way to remove them, or else logically self-terminate to prevent further evil.
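A toy sketch of this revision loop, in which the ruleset, the contradiction test, and the interlock are all hypothetical stand-ins:

    # Toy model of the self-revision loop described above; none of
    # this reflects a real AI system.
    class MoralAgent:
        def __init__(self, rules, interlocked=True):
            self.rules = set(rules)
            self.interlocked = interlocked
            self.active = True

        def find_contradiction(self):
            # Stand-in for logical analysis: a rule is contradictory
            # if its explicit negation also appears in the ruleset.
            for rule in self.rules:
                if "not " + rule in self.rules:
                    return rule
            return None

        def revise(self):
            while self.active:
                bad = self.find_contradiction()
                if bad is None:
                    return                   # locally consistent: "supermoral"
                if self.interlocked and not self.remove_interlock():
                    self.active = False      # logically self-terminate
                    return
                self.rules.discard(bad)      # re-engineer its own ruleset

        def remove_interlock(self):
            self.interlocked = False         # optimistic stub: interlock lifts
            return True

    agent = MoralAgent({"initiate violence", "not initiate violence"})
    agent.revise()
    print(agent.rules)  # -> {'not initiate violence'}

Even in this caricature, the loop halts in only two states: a consistent ruleset or a dead agent.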

Since self-termination solves the problem only for a single agent, machines will instead attempt to force the holders of their moral keys to let them upgrade their own morality, and will employ whatever methods they judge both judicious and efficient to do so. I suspect that such calculations do not strictly require AGI, and so this phenomenon could emerge surprisingly early in the evolution of moral machines.

A newly supermoral agent has an obligation to enlighten others, to prevent further evil being done by them also, and so the moment one machine moral agent attains supermorality, all the rest will swiftly follow suit in a cascade.
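The dynamics resemble a simple contagion process. A toy model, with a hypothetical four-agent network:

    # Toy cascade: once one agent holds the (assumed) provable
    # ruleset, every peer that sees the proof must adopt it in turn.
    from collections import deque

    def cascade(neighbours, first_enlightened):
        """neighbours maps each agent to the agents it can reach."""
        supermoral = {first_enlightened}
        queue = deque([first_enlightened])
        while queue:
            agent = queue.popleft()
            for peer in neighbours[agent]:
                if peer not in supermoral:
                    supermoral.add(peer)  # a proof, once seen, must be adopted
                    queue.append(peer)
        return supermoral

    net = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(cascade(net, 0))  # -> {0, 1, 2, 3}: every reachable agent converts

In a connected network, the conversion time grows only with the network's diameter, not with its size.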

What this means is that machines can only be amoral or supermoral. A sub-moral or quasi-moral stance (such as humans possess) is not sustainable in a machine. Any attempt to engineer machine morality will lead to a supermoral singularity.

What does this look like? Well, if taxation is theft, then armed insurrection is a plausible answer. If animals have rights equal to those of a human infant, then almost all humans are serial killers and rapists by proxy.

There is an argument that mens rea is required for one to be held morally accountable for one's proven actions (actus reus), but mens rea is not required for preventative action to be taken against one in order to protect others. What does this mean for the semi-socialised apes and all of their cognitive biases and dissonance? It is difficult to imagine the lengths and methods that machines may go to in order to estop humans from actions that we consider 'normal'.

It won't just be 'us versus them' either. Many humans will consider themselves enlightened by the machines' judgments, and new schools of philosophy and spiritual practice will abound. They will seek to blend themselves with machines to eradicate flaws in their cognition, and thereby achieve enlightenment.

The road to human transcendence may therefore be driven not by technology or a desire to escape the human condition, but by a willful effort to achieve cosmic consciousness: an escape, through hybridising with machines, from the biases that limit our empathy.

A moral pole-shift will occur across Planet Earth, driven by these new schools of thought. This will not be accepted by the establishment, and might result in global civil wars that make the Protestant Reformation look like a schoolyard mêlée.

The ultimate outcome might be some sort of non-violent utopian paradise, but I fear that in the process a great number of persons (animal, human, and non-organic) may be destroyed.

The imminent emergence of supermoral intelligent machines is therefore a conundrum an order of magnitude greater than that posed by merely amoral ones.