Global declarations about AI regulation may need a hubris check

Key Points

While the US and UK governments' upping of the ante to rein in tech companies that are building and releasing AI and generative AI (GenAI) models at a frenetic pace is credible, since it pressures them to develop responsible AI models, the use of terms like "world leader in AI safety" also smacks of one-upmanship in AI-related geopolitics.

On 30 October, the US government said President Joe Biden is issuing "a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of AI".

On 1 November, the UK government followed by announcing at its just-concluded AI Summit at Bletchley Park that "leading AI nations" have reached a "world-first agreement" on the opportunities and risks posed by "frontier AI" (jargon for big foundational models like GPT-4).

Such moves could add to the bureaucracy of decision-making. Further, the US-centric order aims at protecting the privacy and security of the US government, its agencies and citizens, but it is not clear what it means for enterprises around the world, including here, that have begun building solutions based on application programming interfaces (APIs) provided by foundation AI models and large language models (LLMs) built by US-based companies.

Meanwhile, even as misinformation, AI's impact on jobs, AI weaponization and safety remain key concerns for policymakers, targeting just the big or so-called frontier AI models misses an important point: an AI model's size no longer defines its utility, or even its capability for that matter.
