Why does the EU AI Act hang in the balance? The OpenAI drama offers clues

The EU AI Act, poised to become landmark comprehensive AI legislation, currently hangs in the balance amid squabbling over how to regulate ‘foundation’ models, AI models trained at massive scale such as GPT-4, Claude and Llama.

The French, German, and Italian governments recently advocated for limited regulation of foundation models, a position many attribute to intense lobbying by Big Tech as well as open source companies like Mistral, which is advised by Cédric O, a former French digital minister. Some have called this a “power grab” that would gut the EU AI Act.

Meanwhile, those in favor of including foundation models in the EU AI Act regulations have pushed back: Just yesterday, a “group of German and international experts in the field of AI and leaders in business, civil society, and academia” published a new open letter calling on the German government not to exempt foundation models from the EU AI Act, which “would hurt public safety and European businesses.” Signatories include prominent AI researchers Geoffrey Hinton and Yoshua Bengio, who have both expressed concern over existential AI risks in recent months, as well as AI critic Gary Marcus.

In addition, French experts including Bengio also published a joint op-ed in today’s Le Monde, speaking out “against ongoing attempts by Big Tech to gut this landmark piece of legislation in its final phase,” according to a Future of Life Institute spokesperson.

Why is the legislation hitting this roadblock on the final stretch? After all, two and a half years after draft rules were proposed and many months into negotiations, the EU AI Act, which focuses on high-risk AI systems, transparency for AI that interacts with humans, and AI systems in regulated products, has entered its final stage of negotiations, known as the trilogue, in which EU lawmakers and member states hammer out the final details of the bill. The European Commission has hoped to pass the AI Act by the end of 2023, ahead of any political fallout from the 2024 European Parliament elections.

The recent OpenAI drama offers some clues for the EU AI Act

Believe it or not, the recent drama around OpenAI — in which CEO Sam Altman was fired by the company’s nonprofit board, only to be reinstated five days later after two members of the board were removed and replaced — offers some clues to what’s going on in the EU.

Just like at OpenAI, the EU AI Act debates pit those focused on the commercial profit potential of AI, or on the consequences of curtailing open innovation, against those with strong belief systems around x-risk, the possible existential risks of AI.

From what we have gleaned from the OpenAI saga, CEO Sam Altman and president Greg Brockman, who were both on the company’s nonprofit board, pushed for commercial profit opportunities in order to fund the company’s mission of developing artificial general intelligence, or AGI. But three other members of the board, the non-employees Adam D’Angelo, Tasha McCauley, and Helen Toner, were more concerned about AI ‘safety’ and were willing to shut the company down rather than allow the release of what they considered high-risk, AGI-like technology. After getting chief scientist Ilya Sutskever on board (pun intended), they ousted Altman.

The three non-employee members had ties to the Effective Altruism movement — and that winds back to the lobbying around the EU AI Act. Max Tegmark, president of the Future of Life Institute, has his own ties to Effective Altruism. The Wall Street Journal reported last week that the Effective Altruism community has “spent vast sums promoting the idea that AI poses an existential risk.”

Big Tech and EU AI Act regulation

But Big Tech, including OpenAI, has done plenty of lobbying too: Let’s not forget that OpenAI’s returning CEO Sam Altman has offered mixed messages on AI regulation for months, particularly in the EU. Back in June, a TIME investigation found that while Altman had spent weeks touring world cities and speaking out on the need for global AI regulation, behind the scenes OpenAI had lobbied for “significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.”

Gary Marcus recently pointed to the OpenAI drama as a reason for EU regulators to make sure that Big Tech doesn’t have the opportunity to self-regulate.

As a signatory to yet another open letter from last week, which backs the European Parliament’s tiered approach to managing risks associated with foundation models in the EU AI Act, he posted on X: “The chaos at OpenAI only serves to highlight the obvious: we can’t trust big tech to self-regulate. Reducing critical parts of the EU AI Act to an exercise in self-regulation would have devastating consequences for the world.”

And Brando Benifei, one of two European Parliament lawmakers leading negotiations on the laws, told Reuters last week: “The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders.”

Is the EU AI Act in real danger?

It remains to be seen whether the EU AI Act is in real danger. German consultant Benedikt Kohn wrote in an analysis yesterday that while further negotiations are ongoing, “an agreement should be reached as soon as possible, as time is pressing.” That is because the next – and, according to the original plan, final – trilogue will take place on December 6, after which the Spanish Council Presidency has only a month left before Belgium takes over the presidency in January 2024. “Under Belgian leadership, there would then be particular pressure to reach an agreement,” Kohn wrote, because the European elections in June 2024 will seat a new Parliament.

A failure of the EU AI Act, he continued, “would probably be a bitter blow for everyone involved, as the EU has long seen itself as a global pioneer with its plans to regulate artificial intelligence.”
