The United Nations Security Council has been told to “take AI seriously” as it met for the first time to discuss the risks and opportunities of artificial intelligence.
In the first-ever session of its kind at the world’s top diplomatic body, representatives from the 15 member countries heard how AI poses “catastrophic risks for humans” but also a “historic opportunity”.
In a session chaired by Foreign Secretary James Cleverly, the chamber heard from Jack Clark, co-founder of leading AI company Anthropic, and Professor Zeng Yi, co-director of the China-UK Research Center for AI Ethics and Governance.
“No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” Mr Cleverly said.
“Our shared goal will be to consider the risks of AI and decide how they can be reduced through coordinated action.”
The Security Council members heard from two experts who outlined what they believe to be the immense opportunity of AI, but also the serious risks it poses and the urgent need for global unity on the issue.
“We cannot leave the development of artificial intelligence solely to private sector actors,” Mr Clark said. “Governments of the world must come together, develop state capacity, and make the further development of powerful AI systems a shared endeavour.”
He explained: “An AI system that can help us in understanding the science of biology may also be an AI system that can be used to construct biological weapons.”
He warned of the risks of not understanding the technology: “It is as though we are building engines without understanding the science of combustion.
“This means that once AI systems are developed and deployed, people identify new uses for them, unanticipated by their developers. Many of these will be positive, but some could be misuses.”
Professor Zeng Yi, director of the Brain-inspired Cognitive Intelligence Lab, called on the UN as a body to take the subject seriously.
“The United Nations must play a central role to set up a framework on AI for development and governance to ensure global peace and security,” the professor said.
He warned: “AI risks human extinction simply because we haven’t found a way to protect ourselves from AI’s utilisation of human weaknesses.”
Mr Clark added: “Even more challenging is the problem of chaotic or unpredictable behaviour. An AI system may, once deployed, exhibit subtle problems, which were not identified during its development.
“I would challenge those listening to this speech to not think of AI as a specific technology, but instead as a type of human labour that can be bought and sold at the speed of a computer and which is getting cheaper and more capable over time.”
‘Huge questions’ over AI
The two witnesses raised specific open questions which they said needed urgent answers. Who should have access to the power of AI? How should governments regulate this power? Which actors should be able to create and sell these so-called AI experts? And what kinds of experts can we allow to be created?
“These are huge questions,” Mr Clark said. “Humans go through rigorous evaluation and on-the-job testing for many critical roles.”
Professor Zeng Yi added: “It is very funny, misleading and irresponsible that dialogue systems powered by generative AI always argue ‘I think, I suggest’.”
“Well,” he said, “there are no ‘I’ or ‘me’ in the AI models. AI should never ever pretend to be human, take human positions or mislead humans to have the wrong perception. We should use generative AI to assist but never trust them to replace human decision-making.”
UK to host global AI summit
The UK is seeking a leading global role in driving forward international consensus on how to balance the risks of AI with its opportunities. Last month, Prime Minister Rishi Sunak announced that the UK will host the first major global summit on AI safety in the autumn.
As a permanent member of the Security Council, China welcomed the gathering.
In brief comments to Sky News before the meeting, Ambassador Zhang Jun said he “welcomed the meeting” which would “help to add understanding to the issue”.
Closing the session, Mr Cleverly said: “Let us work together to ensure peace and security as we pass across the threshold of an unfamiliar world.”