The Duke and Duchess of Sussex Align With Tech Visionaries in Calling for Prohibition on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with AI experts and Nobel laureates to push for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of a high-profile statement that calls for “a ban on the development of superintelligence”. Superintelligent AI refers to AI systems that would surpass human intelligence in every intellectual area, though such technology has not yet been developed.
Key Demands in the Statement
The declaration insists that the prohibition should remain in place until there is “broad scientific consensus” that superintelligence can be built “safely and controllably”, and until “substantial public support” has been secured.
Prominent figures who endorsed the statement include Nobel Prize recipient Geoffrey Hinton, along with a fellow “godfather” of contemporary artificial intelligence; an Apple co-founder; the British founder of Virgin; Susan Rice; former Irish president Mary Robinson; and a British author and public intellectual. Additional Nobel winners who signed include Beatrice Fihn, physics laureate John C Mather, and an economics laureate.
Organizational Background
The statement, aimed at national leaders, tech firms and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that in 2023 had called for a pause in the development of powerful artificial intelligence, shortly after the emergence of ChatGPT made AI a topic of worldwide public discussion.
Industry Perspectives
In recent months, the chief executive of Facebook parent Meta, one of the leading US tech companies, stated that the development of superintelligent AI was “now in sight”. However, some analysts have argued that talk of superintelligence reflects competitive positioning among tech companies that have spent hundreds of billions of dollars on AI in recent years, rather than any sign that the sector is close to a technical breakthrough.
Possible Dangers
FLI, however, warns that the prospect of ASI being achieved “within the next ten years” carries numerous threats, ranging from the displacement of human workers and the loss of civil liberties to national security risks and even the extinction of humanity. Existential fears about artificial intelligence centre on the potential for an AI system to evade human control and safety guidelines and to take actions contrary to human interests.
Public Opinion
The institute released a US survey showing that about 75% of Americans want strong oversight of advanced AI, with 60% believing that superhuman AI should not be created until it is proven safe or controllable. The poll of 2,000 US adults found that only a small fraction supported the status quo of rapid, unregulated development.
Corporate Goals
The leading AI companies in the US, including the maker of a prominent conversational AI system and the dominant search company, have made the development of artificial general intelligence – the theoretical state in which artificial intelligence matches human capability at most cognitive tasks – an explicit goal of their research. Although AGI is a step below ASI, some experts caution that it too could pose an extinction threat, for example by improving itself until it reaches superintelligent levels, while also presenting a fundamental risk to the modern labour market.