The Duke and Duchess of Sussex have joined forces with artificial intelligence pioneers and Nobel laureates to push for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration that demands “a prohibition on the creation of superintelligence”. Superintelligent AI refers to AI systems that would surpass human intelligence in every intellectual area, though the technology remains theoretical.
The declaration states that the prohibition should remain in place until there is “widespread expert agreement” on creating superintelligence “with proper safeguards” and once “substantial public support” has been achieved.
Prominent figures who added their signatures include AI pioneer and Nobel prize recipient Geoffrey Hinton, along with his fellow pioneer of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; British business magnate Richard Branson; a former US national security adviser; a former head of state; and British author Stephen Fry. Other endorsers include Beatrice Fihn, a Nobel peace prize campaigner, as well as Nobel laureates in physics and economics and an astrophysicist.
The statement, targeted at national leaders, tech firms and lawmakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause in the development of powerful AI systems in 2023, shortly after the launch of conversational AI tools made the technology a worldwide public talking point.
In July, Mark Zuckerberg, the leader of the social media giant, one of the major AI developers in the United States, claimed that the development of superintelligence was “now in sight”. However, some analysts have suggested that talk of superintelligence reflects market competition among tech companies investing enormous sums in artificial intelligence this year alone, rather than the sector being close to any such scientific breakthrough.
Nonetheless, FLI states that the prospect of artificial superintelligence being developed “in the coming decade” presents numerous risks, ranging from the elimination of human jobs and the erosion of personal freedoms to exposing nations to security threats and even threatening humanity with extinction. Existential fears about artificial intelligence centre on the possibility of a system escaping human oversight and safety guidelines and initiating events contrary to human interests.
The institute released an American survey showing that about 75% of US citizens want strong oversight of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is proven safe or controllable. The poll also found that only 5% of respondents supported the status quo of rapid, unregulated development.
The leading AI companies in the US, including the conversational AI creator OpenAI and the search giant, have made the development of artificial general intelligence – the theoretical state in which artificial intelligence matches human levels of intelligence across a wide range of intellectual tasks – an explicit goal of their work. While this is slightly less advanced than superintelligence, some experts caution that it too could pose an existential risk, for instance by improving itself to superintelligent levels, while also threatening to upend the modern labour market.