China, U.S. and EU sign milestone declaration to cooperate on AI safety

China's vice minister of science and technology Wu Zhaohui (R) speaks at the AI Safety Summit at Bletchley Park as U.S. Secretary of Commerce Gina Raimondo (L) and British Secretary of State for Science, Innovation and Technology Michelle Donelan (C) listen, Bletchley, Britain, November 1, 2023. /CFP

China agreed to work with the U.S., the European Union (EU) and other countries to collectively manage the risk from artificial intelligence (AI) at the world’s first AI Safety Summit held in Britain on Wednesday.

The “Bletchley Declaration” was published by Britain on the opening day of the summit hosted at Bletchley Park, central England. It was agreed upon by 28 countries and the EU, with the aim of boosting global efforts to cooperate on AI safety.

Britain said in a separate statement accompanying the declaration that “the declaration fulfills key summit objectives in establishing a shared agreement and responsibility on the risks, opportunities, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific collaboration.”

It encourages transparency and accountability from actors developing frontier AI technology regarding their plans to measure, monitor, and mitigate potentially harmful capabilities.

British Prime Minister Rishi Sunak said, “This is a landmark achievement that sees the world’s greatest AI powers agreeing on the urgency behind understanding the risks of AI, helping ensure the long-term future of our children and grandchildren.”

The declaration sets out a two-pronged agenda focused on identifying risks of shared concern and building a scientific understanding of them, as well as building cross-country policies to mitigate them.

“This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research,” according to the declaration.

A view of Bletchley Park in Bletchley, Britain, November 1, 2023. /CFP

Why now?

Fears about the impact AI could have on economies and society escalated last November when Microsoft-backed OpenAI made ChatGPT available to the public.

Using natural language processing tools to create human-like dialogue, it fueled concerns, including among some AI pioneers, that machines could eventually attain greater intelligence than humans, leading to unlimited, unintended consequences.

Governments and officials are now striving to chart a way forward in collaboration with AI companies, which fear being burdened by regulation before the technology reaches its full potential.

Some tech executives and political leaders have warned that the rapid development of AI poses an existential threat to the world if not controlled, igniting a race among governments and international institutions to design safeguards and regulations.

British Secretary of State for Science, Innovation, and Technology, Michelle Donelan, stated that the summit was an achievement in gathering so many key players in one room.

“For the first time, we now have countries agreeing that we need to look not just independently but collectively at the risk around frontier AI,” said Donelan.

Just as tech companies compete for dominance in AI, governments are vying to lead the way in regulation.

South Korea will host the next global AI Safety Summit in six months’ time, according to Donelan, and she added that the third gathering will be hosted by France in one year’s time.

China's vice minister of science and technology Wu Zhaohui speaks at the AI Safety Summit at Bletchley Park in Bletchley, Britain, November 1, 2023. /CFP

China, a key participant

China is a key participant at this year’s summit, given the country’s pivotal role in the development of AI.

In a first for Western efforts to manage the technology's safe development, a Chinese vice minister joined U.S. and EU leaders and tech bosses such as Elon Musk and OpenAI chief Sam Altman at Bletchley Park, home of Britain's World War Two code-breakers.

Wu Zhaohui, China’s vice minister of science and technology, told the opening session of the two-day summit that China was ready to increase collaboration on AI safety to help build an international “governance framework”. 

The Chinese delegation promoted China’s Global Artificial Intelligence Governance Initiative launched at the third Belt and Road Forum for International Cooperation held in Beijing on October 18, and will carry out bilateral talks with relevant countries.

The delegation underlined that the summit provides an important platform for dialogue, and opportunities for exchange and cooperation among countries on AI safety and international governance issues.

“China is willing to enhance our dialogue and communication in AI safety with all sides, contributing to an international mechanism with global participation in governance framework,” said Wu Zhaohui.

All nations have the right to develop and use AI technology, Wu said. “We uphold the principles of mutual respect, equality and mutual benefits. Countries regardless of their size and scale have equal rights to develop and use AI.” 

“We call for global cooperation to share AI knowledge and make AI technologies available to the public on open source terms,” he added. 

U.S. Vice President Kamala Harris delivers a speech at the U.S. Embassy in London, Britain, November 1, 2023. /CFP

U.S. urges ‘full spectrum’ action on AI risks

U.S. Vice President Kamala Harris on Wednesday also called for urgent action to protect the public and democracy from the dangers posed by AI, announcing a series of initiatives to address safety concerns about the technology.

In a speech at the U.S. Embassy in London, Harris spoke of the dangers AI could pose for individuals and the Western political system.

The technology has the potential to create “cyberattacks at a scale beyond anything we have seen before” or “AI-formulated bioweapons that could endanger the lives of millions,” she said.

“These threats are often referred to as the ‘existential threats of AI’ because they could endanger the very existence of humanity,” Harris added.

On the same day, the U.S. Secretary of Commerce Gina Raimondo said the country will launch an AI safety institute to evaluate known and emerging risks of so-called “frontier” AI models.

The new institute will share information and collaborate on research with peer institutions internationally, including Britain’s planned AI Safety Institute.

Moreover, the billionaire entrepreneur Elon Musk suggested establishing a “third-party referee” that could oversee companies developing AI and sound the alarm if they have concerns.

“What we’re really aiming for here is to establish a framework for insight so that there’s at least a third-party referee, an independent referee, that can observe what leading AI companies are doing and at least sound the alarm if they have concerns,” said Musk.

Britain's King Charles III addresses delegates in a pre-recorded video message during the first plenary session of the AI Safety Summit at Bletchley Park in Bletchley, Britain, November 1, 2023. /CFP

Britain to invest in AI supercomputing

Britain’s King Charles said on Wednesday the international community must address the risks of AI with a sense of urgency and unity, as it did with climate change.

“That is how the international community has sought to tackle climate change, to light a path to net zero, and safeguard the future of our planet,” King Charles said in a video message played at the summit.

“We must similarly address the risks presented by AI with a sense of urgency, unity and collective strength,” he said.

In addition, Britain said on Wednesday it would boost funding for two supercomputers which will support research into making advanced artificial intelligence models safe.

Funding for the “AI Research Resource” will be increased to 300 million pounds (around $363.57 million) from a previously announced 100 million pounds, the government said at the summit, which aims to chart a safe way forward for the rapidly evolving technology.

“Frontier AI models are becoming exponentially more powerful,” British Prime Minister Rishi Sunak said on social media platform X.

“This investment will make sure Britain’s scientific talent have the tools they need to make the most advanced models of AI safe.”

Britain said two new supercomputers, one based in Cambridge and one in Bristol, would give researchers access to resources with more than thirty times the capacity of Britain’s current largest public AI computing tools.

The machines, which will be running from the summer of next year, will be used to analyze advanced AI models to test safety features, as well as to drive breakthroughs in drug discovery and clean energy, the government said.

(With input from agencies)