Thierry Spanjaard

Would an AI Moratorium Help?

The risks and ethics of Artificial Intelligence development have been under scrutiny since the technology's inception. There have been several attempts at regulation and self-regulation, so far to no avail.

Considering that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and that "contemporary AI systems are now becoming human-competitive at general tasks," over 1,000 experts have published an open letter "calling on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." They add: "this pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium." They ask for "AI labs and independent experts [to] use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

The list of signatories of the open letter, released by the Future of Life Institute, a nonprofit that works to reduce catastrophic and existential risks, has now grown to over 50,000 people. The first signatories include some of the best-known names in the field: Elon Musk, CEO of SpaceX, Tesla & Twitter; Steve Wozniak, Co-founder, Apple; Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem; Chris Larsen, Co-Founder, Ripple; Craig Peters, CEO, Getty Images; Hersh Mansukhani, Head of Embedded Payments, Fiserv; Lê Nguyên Hoang, President of the Tournesol Association, Dr. in Mathematics, Researcher in ML security, ethics, and governance, Best thesis award, Member of Orange ethics committee; …


The open letter does not call for a global moratorium on AI, but "merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."


One may wonder whether such a moratorium would have any effect. What do six months mean? Would the best researchers in the field reach agreement on guidelines for an ethical future of AI development in six months? Anyone who has taken part in a consensus-building process, whether in technical standardization or in ethics, knows that six months would probably not be enough.

Moreover, a moratorium only makes sense if it is strict and observed by everyone. Too narrow a definition would look like a trick to halt development in a visible application field while work continues on other, more hidden, ones. A moratorium can also only rest on trust. Just think of the global moratorium on commercial whaling adopted in 1982, of the calls for a moratorium on the release of gene drive organisms, or, even worse, of the UN moratorium on the death penalty, voted by the General Assembly in 2007, reaffirmed repeatedly since, and never globally enforced. Global moratoriums have little chance of working because there is no global authority to enforce them. If a moratorium on AI is decided, what will prevent secret labs working for the military-industrial complex, in democracies or dictatorships alike, from continuing to work on these topics? The AI ecosystem worldwide is populated with companies of all sizes, and there is no conceivable means of controlling all of them.


So, what we need is not a moratorium on AI but a global set of guidelines on the limits of AI, backed by global enforcement rules. Even then, there is no certainty that monsters will not emerge from the development of AI, as the limited effectiveness of the Treaty on the Non-Proliferation of Nuclear Weapons demonstrates.
