Open Letter from AI Leaders: Pause AI Development

In another absolutely insane week for AI, over a thousand tech experts signed an open letter calling on companies and labs to pause AI development. As of publication, the number of signatories continues to grow.

The Open Letter

The letter, issued today and signed by many leaders in the tech industry, is direct, arguing that if a voluntary pause does not take effect, governments should mandate one:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

The open letter continues,

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

You can read the full text of the open letter here and find more information at the Future of Life Institute.

Elon Musk and Steve Wozniak are two of the many tech leaders who signed the open letter on AI.

EU Debate and UK White Paper on AI

If an open letter from tech and AI leaders was not enough, today also saw the UK government release a white paper to be put before Parliament. In it, the Department for Science, Innovation and Technology (DSIT) outlines five principles that companies should follow:

Safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

The EU, which has been working on AI regulation for the better part of a decade, is much further ahead than the U.S. However, AI is developing so rapidly that policy recommendations proposed and implemented less than five years ago are already obsolete.

The European Union is already debating a new law on artificial intelligence, which may come to a vote in the European Parliament in the next few weeks. However, lawmakers are still struggling to agree on the definition of AI, the scope of the law, and which practices it should prohibit.

The issue is more than just the rapid development of GPT-4 and its successors. We’re facing a tsunami of plugins that will integrate AI into everything we do. In just the past seven days, over 360 new AI tools were released; they’re coming so fast that there is no way to keep up with them all.

Could We Self-Destruct Before Reaching Full AGI?

The open letter argues that we shouldn’t be focusing on the long-term concern about AGI – artificial general intelligence that could understand or learn any intellectual task a human being can. Instead, the AI we are developing right now, in the early capabilities of GPT-4 and other programs, is already becoming powerful enough to pose a threat to society.

As the Future of Life Institute explains in the Artificial Intelligence section of its website,

That risk comes not from AI’s potential malevolence or consciousness, but from its competence – in other words, not from how it feels, but what it does. Humans could, for instance, lose control of a high-performing system programmed to do something destructive, with devastating impact. And even if an AI is programmed to do something beneficial, it could still develop a destructive method to achieve that goal.

AI doesn’t need consciousness to pursue its goals, any more than heat-seeking missiles do. Equally, the danger is not from robots, per se, but from intelligence itself, which requires nothing more than an internet connection to do us incalculable harm.

We suggested something similar the other day in our article on the fake viral images of the Pope in a stylish Balenciaga outfit and former President Trump being arrested by New York City police. If nothing more than a text prompt can create images indistinguishable from authentic ones, trust in the public square will quickly erode and cynicism will deepen. AI detection systems aren’t accurate enough to be relied on now, and it’s unclear whether they ever will be. The regulatory alternative, prohibiting fabricated images on the web outright, is simply not an option.

That leaves us with . . . nothing.

We simply don’t have a solution at the moment. This is what the open letter is about, putting the brakes on development until we have a better sense of the consequences and how to manage them.

Of course, a six-month pause won’t accomplish anything unless we use it to begin a discussion of the issues at stake. We doubt OpenAI will go along with the idea, despite the humanistic principles in its founding charter. CEO Sam Altman has expressed his own concerns about the rapid development of AI but did not sign the letter.

This is a developing story; we will update you as further news is released.