Europe's world-leading artificial intelligence rules are facing a do-or-die moment
Hailed as a world first, European Union artificial intelligence rules are facing a make-or-break moment as negotiators try to hammer out the final details this week — talks complicated by the sudden rise of generative AI that produces human-like work.
First suggested in 2019, the EU’s AI Act was expected to be the world's first comprehensive AI regulation, further cementing the 27-nation bloc's position as a global trendsetter when it comes to reining in the tech industry.
But the process has been bogged down by a last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies are lobbying against what they see as overregulation that stifles innovation, while European lawmakers want added safeguards for the cutting-edge AI systems those companies are developing.
Meanwhile, the U.S., U.K., China and global coalitions like the Group of 7 major democracies have joined the race to draw up guardrails for the rapidly developing technology, underscored by warnings from researchers and rights groups of the existential dangers that generative AI poses to humanity as well as the risks to everyday life.
“Rather than the AI Act becoming the global gold standard for AI regulation, there’s a small but growing chance that it won’t be agreed before the European Parliament elections” next year, said Nick Reiners, a tech policy analyst at Eurasia Group, a political risk advisory firm.
He said “there’s simply so much to nail down” at what officials are hoping is a final round of talks Wednesday. Even if they work late into the night as expected, they might have to scramble to finish in the new year, Reiners said.
When the European Commission, the EU's executive arm, unveiled the draft in 2021, it barely mentioned general purpose AI systems like chatbots. The proposal to classify AI systems by four levels of risk — from minimal to unacceptable — was essentially intended as product safety legislation.
Brussels wanted to test and certify the information used by algorithms powering AI, much like consumer safety checks on cosmetics, cars and toys.
That changed with the boom in generative AI, which sparked wonder by composing music, creating images and writing essays resembling human work. It also stoked fears that the technology could be used to launch massive cyberattacks or create new bioweapons.
The risks led EU lawmakers to beef up the AI Act by extending it to foundation models. Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet.
Foundation models give generative AI systems such as ChatGPT the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.
Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, reinforced for some European leaders the dangers of allowing a few dominant AI companies to police themselves.
While CEO Sam Altman was fired and swiftly rehired, some board members with deep reservations about the safety risks posed by AI left, signaling that AI corporate governance could fall prey to boardroom dynamics.
“At least things are now clear” that companies like OpenAI defend their businesses and not the public interest, European Commissioner Thierry Breton told an AI conference in France days after the tumult.
Resistance to government rules for these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back with a position paper advocating for self-regulation.
The change of heart was seen as a move to help homegrown generative AI players such as French startup Mistral AI and Germany’s Aleph Alpha.
Behind it "is a determination not to let U.S. companies dominate the AI ecosystem like they have in previous waves of technologies such as cloud (computing), e-commerce and social media,” Reiners said.
A group of influential computer scientists published an open letter warning that weakening the AI Act this way would be “a historic failure.” Executives at Mistral, meanwhile, squabbled online with a researcher from an Elon Musk-backed nonprofit that aims to prevent “existential risk” from AI.
AI is “too important not to regulate, and too important not to regulate well,” Google’s top legal officer, Kent Walker, said in a Brussels speech last week. “The race should be for the best AI regulations, not the first AI regulations.”

Foundation models, used for a wide range of tasks, are proving the thorniest issue for EU negotiators because regulating them “goes against the logic of the entire law,” which is based on risks posed by specific uses, said Iverna McGowan, director of the Europe office at the digital rights nonprofit Center for Democracy and Technology.

The nature of general purpose AI systems means “you don’t know how they’re applied,” she said. At the same time, regulations are needed “because otherwise down the food chain there’s no accountability” when other companies build services with them, McGowan said.
Altman has proposed a U.S. or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it couldn't comply with EU rules but quickly walked back those comments.
Aleph Alpha said a “balanced approach is needed” and supported the EU's risk-based approach. But it's “not applicable” to foundation models, which need “more flexible and dynamic” regulations, the German AI company said.
EU negotiators still have yet to resolve a few other controversial points, including a proposal to completely ban real-time public facial recognition. Countries want an exemption so law enforcement can use it to find missing children or terrorists, but rights groups worry that will effectively create a legal basis for surveillance.
The EU's three branches of government are facing one of their last chances to reach a deal Wednesday.
Even if they do, the bloc's 705 lawmakers still must sign off on the final version. That vote needs to happen by April, before they start campaigning for EU-wide elections in June. The law wouldn't take effect until after a transition period, typically two years.
If they can't make it in time, the legislation would be put on hold until later next year — after new EU leaders, who might have different views on AI, take office.
“There is a good chance that it is indeed the last one, but there is equally a chance that we would still need more time to negotiate,” Dragos Tudorache, a Romanian lawmaker co-leading the European Parliament's AI Act negotiations, said in a panel discussion last week.
His office said he wasn't available for an interview.
“It’s a very fluid conversation still,” he told the event in Brussels. “We’re going to keep you guessing until the very last moment.”
Source: japantoday.com