Expert warns UN’s role in AI regulation could lead to safety overreach

The United Nations (U.N.) advisory body on artificial intelligence (AI) last week issued seven recommendations to address AI-related risks, but an expert told Fox News Digital the points do not cover critical areas of concern. 

‘They didn’t really say much about the unique role of AI in different parts of the world, and I think they needed to be a little more aware that different economic structures and different regulatory structures that already exist are going to cause different outcomes,’ Phil Siegel, co-founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), said. 

‘I think that they could have done a better job of — instead of just trying to go to the lowest common denominator — being a little more specific around what does a state like the United States, what is unique there?’ Siegel said. ‘How does what we do in the United States impact others, and what should we be looking at specifically for us?

‘Same thing with Europe. They have much more strict privacy needs or rules in Europe,’ he noted. ‘What does that mean? I think it would have gained them a little bit of credibility to be a little more specific around the differences that our environments around the world cause for AI.’ 

The U.N. Secretary-General’s High-level Advisory Body on AI published its recommendations Sept. 19, aiming to address ‘global AI governance gaps’ among the U.N.’s 193 member states. 

The body suggested establishing an International Scientific Panel on AI, creating a policy dialogue on AI governance, launching an AI standards exchange, creating a global AI capacity development network, establishing a global AI fund, fostering an AI data framework and forming an AI office in the U.N. Secretariat. 

These measures, Siegel said, seem to be an effort by the U.N. to establish ‘a little bit more than a seat at the table, maybe a better seat at the table in some other areas.’ 

‘If you want to take it at face value, I think what they’re doing is saying some of these recommendations that different member states have come up with have been good, especially in the European Union, since they match a lot of those,’ Siegel noted. 

‘I think … it sets the bar in the right direction or the pointer in the right direction that people need to start paying attention to these things and not letting it get off the rails, but I think some of it is just it’s not really doable.’ 

Multiple entities have pursued global-level coordination on AI policy as nations seek to maintain an advantage while preventing rivals from developing into pacing challenges. While trying to develop AI for every possible use, they also hold safety summits to try to ‘align’ policy, such as the upcoming U.S.-led summit in California in November. 

Siegel acknowledged the U.N. is likely to be one of the better options to help coordinate such efforts as an already-existing global forum — even as countries try to set up their own safety institutes to coordinate safety guidelines between nations. But he remained concerned about U.N. overreach. 

‘They probably should be coordinated through the U.N., but not with rules and kind of hard and fast things that the member states have to do, but a way of implementing best practices,’ Siegel suggested. 

‘I think there’s a little bit of a trust issue with the United Nations given they have tried to, as I said, gain a little bit more than a seat at the table in some other areas and gotten slapped back. On the other hand, you know, it already exists.

‘It is something that the vast majority of countries around the world are members of, so it would seem to me to be the logical coordinating agency, but not necessarily for convening or measurements and benchmarks.’ 

Siegel said the U.S. and Europe have already made ‘some pretty good strides’ on creating long-term safety regulations, and Asian nations have ‘done a good job on their own and need to be brought into these discussions.’ 

‘I just don’t know if the U.N. is the right place to convene to make that happen, or is it better for them to wait for these things to happen and say, “We’re going to help track and be there to help” rather than trying to make them happen,’ Siegel said.  

Reuters contributed to this report. 

This post appeared first on FOX NEWS