
European Countries Race To Set The AI Regulatory Pace


As artificial intelligence continues its meteoric rise, European nations are moving swiftly to establish regulatory frameworks and ramp up investments in the technology. With generative AI like ChatGPT capturing the public’s imagination, regulators worry about potential risks even as they aim to promote innovation. Europe seeks to chart a middle course between an AI “wild west” and top-down state control.

Spain recently announced the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which will be the first AI regulatory body in the European Union.

Headed by a multidisciplinary team of technology experts, lawyers, and humanities scholars, AESIA has a broad mandate to monitor and assess the impacts of AI on Spanish society. The agency will create risk assessment protocols, audit algorithms and data practices, and establish binding rules companies must follow for the development and deployment of AI systems.

Meanwhile, Germany unveiled an extensive AI Action Plan that will boost investments in AI research and skills training. The Federal Ministry of Education and Research (BMBF) will invest over 1.6 billion euros in AI during this legislative period, with funding ramping up from 17.4 million euros in 2017 to a planned 483 million euros in 2024.

Research Minister Bettina Stark-Watzinger framed the initiative as strengthening Europe’s “technological sovereignty.” The plan promises 150 new AI professorships, six skills centers, and expanded supercomputing infrastructure over the next two years.

However, German companies warned that previous strategies to transfer AI from academia to industry have fallen short.

For example, the German digital industry association Bitkom, which represents more than 2,200 companies, noted that implementation of the 2018 national AI strategy has lagged. It contends that only 15% of German companies currently use AI, which it sees as evidence of weak technology transfer. While applauding the new investments, the association worries the EU’s AI Act could inflate costs and create legal uncertainty around AI systems.

The UK is also being urged by a parliamentary committee to quicken the pace of its AI governance efforts, given the progress in the EU and US on establishing guardrails. In March, the UK published an AI governance white paper centered on principles like transparency and fairness. However, the committee found this approach risks falling behind other countries’ regulatory pushes.

In a new interim report, “The governance of artificial intelligence”, the Science and Technology Committee of the House of Commons argues the country still lacks robust policies to harness AI’s benefits while guarding against potential harms.

The report warns that AI risks exacerbating problems such as bias, privacy violations, and unemployment if deployed without oversight. It proposes guardrails to prevent foreseeable harms without stifling innovation.

With the UK hosting an international AI summit in November, the committee said the nation must take decisive action rather than play catch-up, and it needs to do so quickly.

“Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer,” MPs wrote.

With the EU finalizing its pioneering AI Act, member states like Spain and Germany launching regulatory bodies and funding research, and the UK debating comprehensive policies, Europe aims to steer AI’s trajectory. Striking the right balance between safety and innovation remains complex. But one thing is clear: Europe is determined to drive the global conversation.
