WASHINGTON, Nov 27 (Reuters) - The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design."

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
"This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, told Reuters, saying the guidelines represent "an agreement that the most important thing that needs to be done at the design phase is security."
In addition to the United States and Britain, the 18 countries that signed on to the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.

France, Germany and Italy also recently reached an agreement on how artificial intelligence should be regulated that supports "mandatory self-regulation through codes of conduct" for so-called foundation models of AI, which are designed to produce a broad range of outputs.