Prince Harry, Steve Wozniak, and Geoffrey Hinton Join 800 Others Calling for Global Ban on AI ‘Superintelligence’

Photo: Prince Harry, Duke of Sussex, at Buckingham Palace on January 16, 2020 in London, England. (Jeremy Selwyn - WPA Pool/Getty Images)

Summary:

  • More than 800 public figures, including Prince Harry and Steve Wozniak, urge a ban on superintelligent AI development.

  • The declaration calls for a global prohibition until there’s scientific consensus and public buy-in.

  • Concerns about AI advancement and risks have led to calls for regulation from diverse leaders.

More than 800 public figures across science, politics, tech, and entertainment—including Prince Harry, Apple co-founder Steve Wozniak, AI researcher Geoffrey Hinton, and former Trump strategist Steve Bannon—have signed a statement urging a ban on the development of artificial superintelligence.

The declaration, reported by The Financial Times and published by the Future of Life Institute, calls for a global prohibition on any AI advancement that could lead to superintelligent systems until there’s “broad scientific consensus” that it can be achieved safely, and with “strong public buy-in.”

“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?’” said Anthony Aguirre, executive director of the Future of Life Institute, in an interview with NBC News.

The signatories include a surprising mix of leaders: Nobel laureates, military officials, CEOs, artists, and activists. Alongside Wozniak and Harry are figures like retired Admiral Mike Mullen, musician Will.i.am, and entrepreneur Richard Branson. Each has voiced growing unease about the speed at which AI research is advancing, and the potential societal risks if it surpasses human intelligence.

Artificial general intelligence, or AGI, refers to a system capable of reasoning and performing any intellectual task at human level. Superintelligence, the next step, would exceed human expertise entirely—a concept that’s long inspired both Silicon Valley ambition and sci-fi dread.


Despite skepticism that such technology is imminent, major companies like OpenAI, Meta, and xAI are investing billions in new models and infrastructure. Meta CEO Mark Zuckerberg recently claimed that “superintelligence is in sight,” while Elon Musk said it’s “happening in real time.” OpenAI’s Sam Altman has predicted superintelligence could arrive by 2030.

Notably, none of these leaders—or anyone from their companies—signed the statement.

This isn’t the first call for an AI slowdown. In September, over 200 researchers and public figures—including 10 Nobel Prize winners—released a separate appeal warning of existing AI risks like job loss, disinformation, and human rights abuse. That letter didn’t address superintelligence directly but underscored growing fears that the technology is outpacing governance.

The Future of Life Institute, founded in 2014 and previously backed by Elon Musk, has become a leading voice in AI ethics debates.
