EU Paves The Way For U.S. In The Regulation Of A.I.


As concerns about Artificial Intelligence (AI) continue to swell worldwide, the European Union (EU) is providing a regulatory roadmap for the international community. On May 11, 2023, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection voted to approve the Artificial Intelligence Act. As written, the Act represents a global first in the legal risk management of AI, one that the U.S. and other nations will surely consider as AI undergoes rapid evolution. The Act sets a tone for the inevitable alignment between the U.S. and the EU, as the desire for transatlantic collaboration, regulatory oversight, appropriate industry standards, and economic partnership remains a priority. While the AI Act has been approved by lawmakers in the European Parliament, the legislation must still go through additional steps before it becomes law.

Fundamental Principles of the EU AI Act

With constantly emerging capabilities for AI in the composition of music, the creation of literature, and the provision of health services, the proposed Act provides critical principles for ensuring human oversight, safety, transparency, traceability, non-discrimination, and environmental friendliness in AI systems. It seeks to set a universal definition for AI that remains technology-neutral, accommodating both existing and future AI systems. Notably, the Act proposes a risk-based approach to AI regulation, wherein the obligations for an AI system correlate with the level of risk it may pose. The Act includes provisions that exempt research activities and AI components offered under open-source licenses. The legislation also advocates for regulatory sandboxes, i.e., controlled environments established by public authorities, for testing AI before deployment. This approach aims to balance the protection of fundamental rights with the need for legal certainty for businesses and the stimulation of innovation in Europe.

Current and Trending U.S. Approaches

By contrast, federal legislators in the U.S. continue to keep a watchful eye on AI, focusing more attention on funding research to decipher its capabilities and outcomes. These efforts are fueled in part by the hope that understanding the breadth of AI will help mitigate concerns in the regulatory space. After all, advances in AI technology may themselves serve as tools for mitigating some of the risks identified through the Act’s key principles. Federalism in the U.S. compounds an already burdensome regulatory-enforcement dilemma by producing a patchwork of inconsistent state laws, with each state hoping to be at the forefront of the next major technological revolution. Indeed, various states have already proposed laws regulating the development and use of AI. For example, California has proposed a law (AB 331) that would regulate the use of automated decision tools (including AI) and require both the developers and the users of these tools to submit annual impact assessments.

Key Principles of the EU AI Act

Four Risk Levels

AI applications are categorized into four levels of risk:
unacceptable risk, high risk, limited risk, and minimal or no risk.
Any application that presents an unacceptable risk is
prohibited by default and cannot be deployed in the EU.

This includes AI systems that employ subliminal techniques or manipulative tactics to alter behavior, exploit individual or group vulnerabilities, categorize biometrics based on sensitive attributes, conduct social scoring or trustworthiness evaluations, predict criminal or administrative offenses, create or expand facial recognition databases through untargeted scraping, or infer emotions in law enforcement, border management, workplaces, and education. Minimal-risk uses, by contrast, would include systems deployed for product or inventory management or AI-enabled platforms such as video games. Similarly, limited-risk systems would include chatbots or other AI-based systems that meet disclosure standards necessary to give users the option to speak with a human instead.

High-Risk Uses

The AI Act identifies the following uses as high-risk:

  1. Biometric identification and categorization of natural persons: AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.

  2. Management and operation of critical infrastructure: AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating, and electricity.

  3. Education and vocational training: AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions; AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

  4. Employment, workers management, and access to self-employment: AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, or evaluating candidates in the course of interviews or tests; AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation, and for monitoring and evaluating the performance and behavior of persons in such relationships.

  5. Access to and enjoyment of essential private services and public services and benefits: AI systems intended to be used by public authorities, or on behalf of public authorities, to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services; AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use; AI systems intended to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by firefighters and medical aid.

  6. Law enforcement: AI systems intended to be used by law enforcement authorities for various purposes, including making individual risk assessments, detecting deep fakes, evaluating the reliability of evidence, predicting the occurrence or reoccurrence of an actual or potential criminal offence, profiling of natural persons, and conducting crime analytics.

  7. Migration, asylum, and border control management: AI systems intended to be used by competent public authorities for various purposes, such as detecting the emotional state of a natural person, assessing risks, verifying the authenticity of travel documents, and assisting in the examination of applications for asylum, visa, and residence permits.

  8. Administration of justice and democratic processes: AI systems intended to assist a judicial authority in researching and interpreting facts and the law, and in applying the law to a concrete set of facts.

Prohibitions on “Social Scoring”

In the context of the AI Act, “social scoring” refers
to the practice of evaluating individuals based on their social
behavior or personality characteristics, often leveraging a wide
range of information sources. This approach is used to assess,
categorize, and score individuals, potentially affecting various
aspects of their lives, such as access to loans, mortgages, and
other services. The current draft includes a ban on social scoring
by public authorities in Europe. However, the European Economic and
Social Committee (EESC) has expressed concerns that this ban does
not extend to private and semi-private organizations, potentially
allowing such entities to use social scoring practices. The EESC
has called for a complete ban on social scoring in the EU, and for
the establishment of a complaint and redress mechanism for
individuals who have suffered harm from an AI system.

Blurred Lines – Unlawful Social Scoring vs. Appropriate
Data Analysis

The EESC has also urged that the AI Act strive to distinguish between what is considered social scoring and what can be seen as an acceptable form of evaluation for a specific purpose. The line, it suggests, can be drawn where the information used for the assessment is not reasonably relevant or proportionate. Furthermore, the EESC highlights the need for AI to enhance human decision making and human intelligence rather than replace them, and criticizes the AI Act for not explicitly expressing this view.

Foundation Large Language Models

A significant aspect of the Act pertains to the regulation of
“foundation models”, like OpenAI’s GPT or
Google’s Bard. These models have attracted regulatory attention
due to their advanced capabilities and potential displacement of
skilled workers. Providers of such foundation models are required
to apply safety checks, data governance measures, and risk
mitigations before making their models public. Additionally, they
must ensure that the training data used to inform their systems
does not violate copyright law. The providers of such models would also be obliged to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy, and the rule of law.

Impact on U.S. Business

The United States can expect some of the principles in the Act
to show up in both federal and state legislative proposals as the
nation seeks to bring the chaos brought on by AI into submission.
As a result of longstanding transatlantic partnerships built on
establishing commerce and trade, many U.S. companies are well
acquainted with the higher standards of the EU in areas such as
product safety rules and certain data rights. Consequently, we
should expect this trend to continue as commerce between nations
grows. The EU will likely continue requiring compliance from U.S.
companies in order to conduct business across the Atlantic and we
can expect the scope of such compliance to now involve AI. While
there are a myriad of ways for these concepts to manifest
themselves such as President Biden’s AI Bill of Rights, by mirroring certain
provisions of the Act states may find themselves encouraged to
develop their own regulatory schemes around the use of AI.
Businesses will need to remain diligent to shifting regulatory
structures and emerging enforcement mechanisms in the U.S. as the
nation grapples with change. The ultimate goal of the EU proposal
is to provide a regulatory framework for AI companies and
organizations that use AI, thereby facilitating a balance between
innovation and protection of citizens’ rights.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.


