The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel laureates to push for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of an influential declaration calling for “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human intelligence in every intellectual domain; such systems remain theoretical.
Key Demands in the Declaration
The declaration says the ban should remain in place until there is “widespread expert agreement” on creating superintelligence “with proper safeguards” and until “strong public buy-in” has been achieved.
Prominent signatories include the Nobel Prize-winning AI researcher Geoffrey Hinton and his fellow “godfather” of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; a former US national security adviser; a former head of state; and a prominent UK writer. Other Nobel laureates who signed include a peace prize winner, a physics laureate, John C Mather, and the economist Daron Acemoğlu.
Behind the Movement
The declaration, aimed at national leaders, technology firms and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI, shortly after the launch of ChatGPT made artificial intelligence a worldwide public talking point.
Industry Perspectives
In July, Meta's chief executive, Mark Zuckerberg, said that the development of superintelligent AI was “now in sight”. Some experts, however, have suggested that talk of superintelligence reflects competitive positioning among tech companies that have recently committed hundreds of billions of dollars to AI, rather than any imminent technical breakthrough.
Potential Risks
The institute warns, however, that the prospect of artificial superintelligence being developed “within the next ten years” carries numerous risks, from the displacement of human workers and the erosion of personal freedoms to national security threats and even human extinction. Existential fears about AI center on the possibility of an AI system evading human control and safety measures and taking actions harmful to human welfare.
Citizen Sentiment
The institute released a US national poll showing that about three-quarters of Americans want robust regulation of advanced artificial intelligence, with 60% saying superhuman AI should not be created until it is proven to be safe or controllable. Only 5% of respondents supported the status quo of fast, unregulated development.
Industry Objectives
The top US artificial intelligence firms, including the ChatGPT developer OpenAI and Google, have made building human-level AI – the hypothetical point at which AI matches human intelligence at most cognitive tasks – a stated objective of their work. Although this falls short of superintelligence, some experts warn it too could carry an extinction risk by, for instance, enhancing its own capabilities to reach superintelligence, while also posing an implicit threat to the contemporary workforce.