Nell Watson


Statement on the Open Letter on AI

When Moving Fast Could Break The World

I was among the first signatories of the Future of Life Institute’s Open Letter on AI. I'm glad that the letter has made a stir and opened public debate on this topic. A number of folks noticed my name and asked why I had signed it. Below is my statement on that decision.

Humanity has in the past successfully negotiated nuclear test ban treaties, ozone layer treaties, and acid rain treaties, and has resolved decades-long conflicts, such as that in Northern Ireland. Perhaps we can obtain similar wins with the governance of responsible AI, if there is sufficient support. For better or worse, technologies such as nuclear energy and genetic engineering have been shackled as a result of activism, and (for better or worse again) something similar could occur with AI.

This will no doubt have serious tradeoffs. Nuclear power is, in general, extremely clean and safe compared with alternatives, producing fewer radioactive emissions than coal power. Genetically modified Golden Rice could prevent millions of deaths and cases of blindness caused by malnutrition. The temporary loss of these technologies has serious consequences. However, both are headed for a renaissance, now that we better understand how to use them safely.

I confess to having some doubts about the actionability of a moratorium on AI, especially as less scrupulous actors are unlikely to heed it, and as recent developments such as Alpaca and AutoGPT have been driven by independent engineers, putting capable tools in the hands of bedroom AI dabblers. However, a fully-developed moratorium on new AI releases could indeed have a chilling effect on development, especially if there is a public (and regulatory) outcry over organizations or individuals deemed to have defected from the agreement by introducing new capabilities.

If such a taboo takes hold, irresponsible AI development, undertaken without the careful scrutiny of safety and responsibility specialists, could cost developers access to important resources such as compute clouds. A competition is already underway to draft articles for an international treaty aimed at slowing AI development back down to a manageable pace.

The pace of AI development in recent months has been frenetic, and it is accelerating further at a rate that not even AI researchers can keep abreast of. This is leading to serious burnout. A six-month break in the release of new capabilities would allow time for researchers, as well as the public, to better adjust to these developments, to distil new public education content, and to understand the impact on employment, as well as the implications of increasing algorithmic management of staff. It would also provide a chance for new cryptographic, transparency, and auditing technologies to be developed to help mitigate the negative effects of AI capabilities, and the designs of bad actors.

It’s ultimately in the interest of business to support this initiative. At present, businesses cannot adapt quickly enough to new developments, and any AI initiatives they launch are liable to be eclipsed by new capability releases that render them meaningless. A pause would provide a chance to build upon the foundations already laid before successive waves of sudden disruption arrive, during a time of looming economic crisis.

There are potentially even greater stakes for broader society. The AI can of worms increases the potential attack surface of individuals and societies alike. The cybersecurity challenges of voice cloning and automated conversations present a serious threat to social trust and wellbeing. Fifth-generation warfare techniques can also take advantage of AI to manipulate and demoralize people in target nations, undermining them through zersetzung attacks until they collapse from within.

Moreover, the proliferation of AI brings these technologies to non-state actors, such as terrorists and hate groups. Alpaca demonstrates that large models can serve as a powerful training aid, enabling much smaller models, costing a few hundred dollars in cloud training credits, to perform comparably to massive ones. Alpaca was enabled by the accidental, though perhaps inevitable, release of Meta's LLaMA model. Though intended only for researchers, it was almost immediately leaked to the wider world. Similarly, people's conversations with GPT have recently been discovered to be leakable, illustrating other potential looming scandals if stronger steps aren't taken to secure these systems, which are becoming indispensable to millions of people.

Next come chat plugins and toolformers, with AI now able to take meaningful action within operating systems, and even the physical world. This will create a minefield of even greater security hazards which we are ill-prepared for, and which are liable to foment a moral panic amongst the public as it becomes impossible to avoid AI in daily life. AutoGPT and BabyAGI show the potential for agentized systems of unprecedented capability to take on a life of their own, perhaps even with objectives which are explicitly and wantonly hostile to humanity.

It’s cute to see my little meme from a few years ago lately showing up in all sorts of places.

AI is special because it self-reinforces. We are now at the point where AI systems can design better versions of themselves, and more powerful hardware to run upon. Sophisticated AI systems are now able to make sense of chaotic systems and to generate ordered ones (and also the precise inverse). Contemporary AI is deceptively powerful, and we do not understand how it functions, or the full depths of its present or future capabilities. Unlike other dangerous technologies, such as nuclear or biological ones, the resources and education required are accessible to practically anyone with the interest to dabble. Hundreds of millions of new AI users have arisen in the past half year, producing a wave of panic amidst incumbent tech ventures who fear being swept aside by upstarts.

In the race to develop and deploy AI, the major tech companies have dropped their ethics and safety teams, now of all times, which is reckless in the extreme. It's important to take stock of where recent developments have taken us, and to meaningfully choose where we want to go from here, instead of simply allowing things to happen. The responsible future of AI requires vision, foresight, and courageous leadership that upholds ethical integrity in the face of more expedient options.

A pause can be especially helpful for discovering the ways in which AI is being put to unfortunate uses, and for taking steps to mitigate them, an area in which IEEE's standards and certifications in responsible AI have much to offer the world. I am proud to serve as an AI Ethics Maestro in IEEE's strong safety ecosystem, which continues to evolve to serve new niches in the responsible governance of AI. I highly recommend that folks engage with IEEE's GET Program, which provides pro bono access to several of the best defenses in IEEE's arsenal for protecting against risks from AI.

AI has the potential to bring wonderful things, such as mitigating disabilities. But technology is often a dark bargain, one which liberates at first, only to later bind us ever tighter to it.

My friend Michael Michalchik observes that of all the analogies to be drawn between nuclear weapons and advanced AI, the most poignant may be the ‘demon core’ nuclear incidents, which killed two seemingly very clever people and maimed others. The most noteworthy element of the demon core incidents is that there was not just one, but two. Louis Slotin, the scientist who triggered the worse of the two, had seen his friend Harry Daghlian die an awful, agonising death from the same device. Despite witnessing this, and despite repeated warnings, he repeated these experiments with other experts around. In an achievement-oriented environment, recklessness easily becomes normalized. Even experts cannot be trusted not to become misaligned from the common-sense goals of humanity.

These same experts are summoning machine spirits to do their frivolous bidding, be they devils or angels. We need to have an open discussion as a society on whether we, in our naïve hubris, should even be allowing such easily corruptible entities to exist at all. Do we dare consort with genies who can easily seduce their way out of confinement?

Continued acceleration seems certain to constrain the possible destinies of our species to a single foregone conclusion. In the few months between GPT-3.5 and GPT-4, performance on college physics problems leapt from the 39th to the 96th percentile of human-level performance. The present trajectory leads to highly manipulative and sophisticated AI by default. It will assuredly drive us mad, and likely end civilization as we know it, perhaps taking our species with it. We cannot avoid dealing with this reality in our lifetimes.

Humanity is giving birth to a new machine species. That's an endeavour far larger than any one person, venture, or nation. Coordinating globally to hit the brakes hard seems reasonable to me.