Summary:
- More than 800 public figures, including Prince Harry and Steve Wozniak, urge a ban on superintelligent AI development.
- The declaration calls for a global prohibition until there's scientific consensus and public buy-in.
- Concerns about AI advancement and risks have led to calls for regulation from diverse leaders.
More than 800 public figures across science, politics, tech, and entertainment—including Prince Harry, Apple co-founder Steve Wozniak, AI researcher Geoffrey Hinton, and former Trump strategist Steve Bannon—have signed a statement urging a ban on the development of artificial superintelligence.
The declaration, reported by The Financial Times and published by the Future of Life Institute, calls for a global prohibition on any AI advancement that could lead to superintelligent systems until there’s “broad scientific consensus” that it can be achieved safely, and with “strong public buy-in.”
“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?’” said Anthony Aguirre, executive director of the Future of Life Institute, in an interview with NBC News.
The signatories include a surprising mix of leaders: Nobel laureates, military officials, CEOs, artists, and activists. Alongside Wozniak and Harry are figures like retired Admiral Mike Mullen, musician will.i.am, and entrepreneur Richard Branson. Each has voiced growing unease about the speed of AI research and the societal risks should the technology surpass human intelligence.
Artificial general intelligence, or AGI, refers to a system capable of reasoning and performing any intellectual task at human level. Superintelligence, the next step, would exceed human expertise entirely—a concept that’s long inspired both Silicon Valley ambition and sci-fi dread.
Despite skepticism that such technology is imminent, major companies like OpenAI, Meta, and xAI are investing billions in new models and infrastructure. Meta CEO Mark Zuckerberg recently claimed that “superintelligence is in sight,” while Elon Musk said it’s “happening in real time.” OpenAI’s Sam Altman has predicted superintelligence could arrive by 2030.
Notably, none of these leaders—or anyone from their companies—signed the statement.
This isn’t the first call for an AI slowdown. In September, over 200 researchers and public figures—including 10 Nobel Prize winners—released a separate appeal warning of existing AI risks like job loss, disinformation, and human rights abuse. That letter didn’t address superintelligence directly but underscored growing fears that the technology is outpacing governance.
The Future of Life Institute, founded in 2014 and previously backed by Elon Musk, has become a leading voice in AI ethics debates.
