World powers tackle AI risk at UK summit

Delegates from 28 nations, including the United States and China, agreed on Wednesday to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.

The first international AI Safety Summit, held at a former codebreaking spy base near London, focused on cutting-edge “frontier” AI that some scientists warn could pose a risk to humanity’s very existence.

British Prime Minister Rishi Sunak said the declaration was “a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren”.

But US Vice-President Kamala Harris urged Britain and other countries to go further and faster, stressing the transformations AI is already bringing and the need to hold tech companies accountable — including through legislation.

In a speech at the US Embassy, Harris said the world needs to start acting now to address “the full spectrum” of AI risks, not just existential threats such as massive cyberattacks or AI-formulated bioweapons.

“There are additional threats that also demand our action — threats that are currently causing harm and, to many people, also feel existential,” she said, citing examples such as a senior citizen kicked off his healthcare plan because of a faulty AI algorithm, or a woman threatened by an abusive partner with deepfake photos.

The AI Safety Summit is a labour of love for Sunak, a tech-loving former banker who wants the United Kingdom to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI.

Harris is due to attend the summit today, Thursday, joining government officials from more than two dozen countries, including Canada, France, Germany, India, Japan, Saudi Arabia – and China, invited over the protests of some members of Sunak’s governing Conservative Party.

Getting the nations to sign the agreement, dubbed the Bletchley Declaration, was an achievement, even if it is light on details and does not propose a way to regulate the development of AI. The countries pledged to work towards “shared agreement and responsibility” about AI risks, and hold a series of further meetings. South Korea will hold a mini virtual AI summit in six months, followed by an in-person one in France a year from now.

China’s Vice-Minister of Science and Technology Wu Zhaohui said AI technology is “uncertain, unexplainable and lacks transparency”.

“It brings risks and challenges in ethics, safety, privacy and fairness. Its complexity is emerging,” he said, noting that Chinese President Xi Jinping last month launched the country’s Global Initiative for AI Governance.

“We call for global collaboration to share knowledge and make AI technologies available to the public under open-source terms,” he said.

Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity.

European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres, executives from US artificial intelligence companies such as Anthropic, Google’s DeepMind and OpenAI, and influential computer scientists like Yoshua Bengio, one of the ‘godfathers’ of AI, are also attending the meeting at Bletchley Park, a former top-secret base for World War II codebreakers that is seen as a birthplace of modern computing.

Attendees said the closed-door meeting’s format has been fostering healthy debate. Informal networking sessions are helping to build trust, said Mustafa Suleyman, CEO of Inflection AI.

Meanwhile, at formal discussions, “people have been able to make very clear statements, and that’s where you see significant disagreements, both between countries of the North and South, (and) countries that are more in favour of open source and less in favour of open source,” Suleyman told reporters.

Open-source AI systems allow researchers and experts to quickly discover problems and address them. But the downside is that once an open-source system has been released, “anybody can use it and tune it for malicious purposes”, Bengio said on the sidelines of the meeting.

“There’s this incompatibility between open source and security. So how do we deal with that?”

Only governments, not companies, can keep people safe from AI’s dangers, Sunak said last week. However, he also cautioned against rushing to regulate the technology, saying it needs to be fully understood first.

In contrast, Harris stressed the need to address the here and now. She also encouraged other countries to sign up to a US-backed pledge to stick to “responsible and ethical” use of AI for military aims.
