
Europe’s rushed attempt to set the rules for AI


Andreas Cleve has lots on his mind as chief executive of Danish healthcare start-up Corti — wooing new investors, convincing clinicians to use his company’s “AI co-pilot” and keeping up with the latest breakthroughs in generative artificial intelligence.

But he fears that efforts like these will be made harder by a new concern: the EU’s new Artificial Intelligence Act, a first-of-its-kind law aimed at ensuring ethical use of the technology. Many tech start-ups are concerned that the well-intentioned legislation might end up smothering the emerging industry in red tape.

The costs of compliance — which European officials admit could run into six-figure sums for a company with 50 employees — amount to an extra “tax” on the bloc’s small enterprises, Cleve says.

“I worry about legislation that becomes hard to bear for a small company that can’t afford it,” he says. “It’s a daunting task to raise cash and now you’ve had this tax imposed. You also need to spend time to understand it.”

Cleve still welcomes regulation of AI, because he thinks safeguards around products that may cause harm are essential. “The AI Act is a good idea but I worry that it will make it very hard for deep tech entrepreneurs to find success in Europe.”

The act, which formally comes into force in August and will be implemented in stages over the next two years, is the first piece of legislation of its kind, emerging from the EU’s desire to become the “global hub for trustworthy AI”.

Timeline of enforcement

Aug 2024

The AI Act formally enters into force, kicking off the timeline for various prohibitions and obligations enshrined in the law

Feb 2025

Prohibitions on “unacceptable risk” AI kick in, covering systems that aim to manipulate or deceive people in order to change their behaviour, or that evaluate or classify people by “social scoring”

Aug 2025

A range of obligations takes effect for providers of so-called “general purpose AI” models, which underpin generative AI tools such as ChatGPT and Google Gemini

Aug 2026

Rules take effect for “high risk” AI systems, including those used in biometrics, critical infrastructure, education and employment

The act sorts different AI systems into categories of risk. Those with “minimal risk” — including applications like spam filters — will be unregulated. “Limited risk” systems, such as chatbots, will have to submit to certain transparency obligations. The most onerous regulations will fall on providers of systems classified as “high risk”, which might, for example, profile individuals or process personal data.

For high-risk systems, the rules require greater transparency over how data is used, higher quality standards for the data sets used to train models, clear information for users and robust human oversight. Medical devices and critical infrastructure fall within this category.

The AI legislation is intended to help new technology flourish under clear rules of the game, EU officials say. The rules stem from the dangers the EU executive sees in the interaction between humans and AI systems, including risks to the safety and security of EU citizens, and potential job losses.

The push to regulate also arose out of concerns that public mistrust in AI products would ultimately lead to a slowdown in the development of the technology in Europe, leaving the bloc behind superpowers like the US and China.

But the rules are also an early attempt to steer the global process of regulating the technology of the future, as the US, China and the UK craft their own regulatory frameworks for AI. Unveiling the act in April 2021, the bloc’s digital chief, Margrethe Vestager, said: “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The commission’s work was upended in late 2022 when OpenAI released ChatGPT, a chatbot powered by large language models with the ability to generate coherent text from a simple prompt. The emergence of so-called generative AI reshaped the tech landscape, and had EU parliamentarians rushing to rewrite the rules to take into account the new development.

EU digital chief Margrethe Vestager said the rules were aimed at “spearheading the development of new global norms to make sure AI can be trusted” © Geert Vanden Wijngaert/AP

But critics warned that hasty attempts to regulate foundation models — the pre-trained AI systems that underpin apps like ChatGPT, with a broad range of uses — would end up curbing the technology itself, rather than addressing the risks posed by specific uses of AI.

Legislators held marathon talks in December 2023 to get the rules over the line, but critics now say they are undercooked. Regulators, they say, left out essential details urgently needed to give clarity to businesses seeking to comply with the regulations, from clear rules on intellectual property rights to a code of practice for businesses. Some estimate that the EU needs between 60 and 70 pieces of secondary legislation to support the implementation of the AI Act.

“The law is rather vague,” concedes Kai Zenner, a parliamentary aide involved in drafting the rules. “Time pressure led to an outcome where many things remain open. [Regulators] couldn’t agree on them and it was easier to compromise. It was a shot in the dark.”

This scattergun approach has resulted in poorly conceived regulations that will hinder Europe’s attempts to compete with the US in producing the AI companies of the future, warns Cecilia Bonefeld-Dahl, director-general of DigitalEurope, which represents the continent’s technology sector.

“Extra cost of compliance on EU companies is bringing us further down,” she says. “We will be hiring lawyers while the rest of the world is hiring coders.”


Officials are now frantically trying to plug the holes in the regulation before it comes into force.

One issue on which the current text lacks clarity is whether systems like ChatGPT are acting illegally when they “learn” from sources protected by copyright law.

“What is fair remuneration [for content creators]? What information is protected if it was partly generated by AI? We don’t have answers to these questions,” says a veteran EU official.

Diplomats in Brussels are now attempting to find answers through consultations with member states. A confidential document, issued by Belgium during its recent presidency of the EU, asked member states for “relevant surveys, studies or research” on the relationship between AI and copyright, along with evidence of local laws dealing with the issue.

Belgium sought views on who bears responsibility for content generated by AI and whether a “remuneration scheme” should be set up for those who create the content that AI draws upon.

The veteran official suggests the bloc’s long-standing copyright rules could be amended to tackle these pending issues. But others are reluctant to reopen old laws.

Higher risk, tougher rules

The AI Act classifies different types of artificial intelligence by the risks they pose

Minimal risk: This category, including applications like AI-enabled video games or spam filters, is unregulated.

Limited risk: Chatbots and other systems that generate text and images fall into this category, which will be subject to “light regulation” — for example, obligations to make human users aware they are interacting with a machine, or labelling content as artificially generated in certain circumstances.

High risk: These include systems used by law enforcement, systems that perform biometric identification or emotion recognition, systems that control access to public services and benefits, and systems used in critical infrastructure.

Unacceptable risk: These prohibited AI systems might deceive or manipulate to distort human behaviour; evaluate people based on social behaviour or personal traits; or profile people as potential criminals.

Additional legislation is also required to set up codes of practice, which will give tech companies guidance on how to implement rules in the AI Act that so far lack workable detail.

An application like facial recognition, for example, must be tested for vulnerabilities under the requirements of the act, such as by changing a few pixels in an image to see whether the system still recognises a face. But the AI Act contains no clear guidelines on how such a test should be performed.
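By way of illustration only, the sketch below shows what a crude version of such a pixel-perturbation test could look like in Python. The recognise function and the nested-list image format are hypothetical stand-ins for the system under test, not anything specified by the act, which prescribes no such procedure.

    import random

    def perturb_pixels(image, n_pixels=5, seed=0):
        # Return a copy of the image with a few pixels overwritten by noise.
        # `image` is a toy nested list of (r, g, b) tuples, not a real format.
        rng = random.Random(seed)
        perturbed = [row[:] for row in image]
        height, width = len(perturbed), len(perturbed[0])
        for _ in range(n_pixels):
            y, x = rng.randrange(height), rng.randrange(width)
            perturbed[y][x] = (rng.randrange(256),) * 3  # grey noise pixel
        return perturbed

    def robustness_check(recognise, image, expected_identity, trials=100):
        # `recognise` is a hypothetical callable standing in for the facial
        # recognition system under test; it maps an image to an identity.
        hits = sum(
            recognise(perturb_pixels(image, seed=i)) == expected_identity
            for i in range(trials)
        )
        return hits / trials  # fraction of perturbed images still recognised

Whether a spot check of this kind would satisfy regulators is precisely the sort of detail the forthcoming codes of practice and secondary legislation are meant to settle.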

The AI Office, a new division within the European Commission, will play a key role in drafting secondary laws setting out how the principles in the primary legislation should be applied in practice.

But time is running out, as the codes of practice need to be in place nine months after the AI Act enters into force. In February next year, some of the act’s key prohibitions are due to kick in: bans on “unacceptable risk” systems such as social scoring, which rates people based on their behaviour; predictive policing, which uses data to anticipate crime; and monitoring workers’ moods, which could invade their privacy.

“The devil will be in the details,” says a diplomat who took a leading role in drafting the AI Act. “But people are tired and the timeline is tight.”

Another risk is that the process is hijacked by lobbying from powerful business groups seeking not to clarify the rules, but to water them down. A senior EU official says lobbyists are already going around “scaremongering” among those with influence in the rulemaking process.

European legislators talk to the press after marathon talks in December 2023 to finalise AI regulations, after the launch of ChatGPT in late 2022 upended their plans © Union Europeenne/Hans Lucas/Reuters

“It’s a bit similar — though not exactly the same — to the tactics large online platforms like YouTube used when the privacy rules were approved,” the senior official says. “They cried about the end of the internet, the end of everything. Nothing happened.”

Brando Benifei, an Italian centre-left lawmaker who co-led the discussions in the European parliament, says a range of stakeholders must be involved to avoid this outcome.

“We want civil society to be involved in the drafting of the so-called Codes of Practice that the commission will have to prepare for the rules applying to the most powerful large language models,” he says.

Writing sufficiently clear rules is one challenge, but another is enforcing them in individual member states. The AI Act does not specify clearly which agency at a national level should police the rules.

An official with a lead role in the implementation of the AI Act anticipates a fight between national telecoms, competition and data protection watchdogs over who gets to wield the stick. “It could get messy,” the person says. “There is a disparity of views over who should be the enforcer. But we need coherence on the implementation.”

Without more clarity, officials warn, implementation of the regulation will be “patchy”, triggering confusion among businesses as they roll out products in different countries.

The creation of the AI Office will help fill in the details, but it is not yet fully staffed. Brussels needs to fill 140 full-time positions, including technical staff as well as policy experts who are hard to come by. The office, for example, will need a lead scientist.

Some say the EU will struggle to hire these kinds of technical experts because big tech companies are also hunting for talented people and often offer higher salaries. “Brussels will find it easy to recruit bureaucrats,” says one EU official. “But when it comes to getting coders, we will struggle.”

Even if the commission succeeds in attracting the right talent in the required numbers, recruitment will take time because the hiring process is notoriously long and bureaucratic, says the European parliament’s Zenner.

But others play down any imminent shortage of talent. “We are getting excellent CVs,” says a person in charge of recruiting people to the AI Office. “We’ve filled about 40 to 50 positions and I don’t anticipate a shortage. We attract people who want to do good work and have the right skills.”


Complicating the EU’s efforts is the fact that other blocs and countries, from the OECD and the G7 to the US, are pushing their own agendas when it comes to introducing safeguards on AI technology.

In the past, the European Commission’s regulators have moved early in order to influence the way regulations are enacted across the world — the so-called “Brussels effect”. Its privacy rules, for example, have now been emulated by many different jurisdictions.

But on AI, it isn’t even the only rulemaker in Europe. The Council of Europe, a pan-European body dedicated to protecting human rights, adopted in May the first legally binding international AI treaty focused on protecting human rights, rule of law and democracy.

In contrast to the AI Act, which concerns the safety of consumers using AI, the Council of Europe treaty is concerned with making AI compatible with the values of democratic societies that respect human rights.

Hanne Juncher, director of security, integrity and rule of law at the Council of Europe, says that its treaty and the AI Act can coexist. “They are not competing and there is a need for both.”

DigitalEurope’s Cecilia Bonefeld-Dahl says Brussels needs to look at investment in AI systems and people if it wants to make an impact on the AI race © Union Europeenne/Hans Lucas/Reuters

Other blocs are also seeking to influence the use of AI. The G7 countries endorsed principles that focus on transparency in AI systems, while the US has produced its own initiative with research and development at its heart. In the UK, the Labour government is expected to set out an AI bill in the King’s Speech this week.

Officials in the EU reject the notion that its rules are competing with other efforts to set standards on AI. “To certify a medical device, for example, you need standards. There is no one single regulatory agency in control. Safety procedures will involve the co-operation of others. AI is everywhere and there cannot be a super regulator,” says another senior EU official.

New competing rules or not, many think the EU legislation on AI conflicts with the wider ambition for homegrown tech companies to compete with the US on AI — turning the Brussels effect into a hindrance.

DigitalEurope’s Bonefeld-Dahl warns that Brussels needs to look at investment in AI systems and people if it wants to make an impact on the AI race. “All our members are investing in the US and in marketing their products there. It’s not just about the AI Act. Tech leadership requires skills and we have a huge investment gap in Europe.”

Officials are actively pushing back against the notion that Brussels is lagging behind. “The EU can still find its place in the AI race,” says one official.

Others have gone on a “myth busting exercise” over what the regulation actually entails. A third senior EU official with intimate knowledge of the rules says it’s “fake news” that they will damage innovation.

“The AI Act is not about everything under the sun,” this official says. “It’s not about innovation. It excludes research and development, internal company development of new technologies and any system that is not high risk.”

Roberto Viola, who heads the EU’s digital unit tasked with setting policies related to digital skills and innovation, says the AI Act will actually help start-ups by enabling innovation. “It allows for real world testing. There’s ample space for experimentation,” he says.

Still, entrepreneurs are doubtful about the EU’s ability to become a superpower in AI while implementing the new rules. “European companies are under-resourced [and limited] because [Brussels] has decided that Europe will be the hardest place to navigate as an AI company,” says Cleve, the entrepreneur.

“We need less barbed wire, or the transition could cost us dearly.”


