If what’s past is prologue, then the U.S. mortgage industry should be looking to the European Union to see what very well might be coming next in terms of artificial intelligence regulation.
Certainly, that was the case in 2018, when the EU’s General Data Protection Regulation (GDPR) took effect; it has since become the model that several U.S. states have used to develop their own privacy regulations. The European AI Act will apply broadly to any AI system developed or used in the EU. As is the case with GDPR, U.S. companies will come under this regulation if they do business in the EU.
The new regulation generally categorizes AI risk into four broad levels:
- Unacceptable Risk – Completely prohibited
  - Examples: Social scoring by governments, systems that manipulate behavior
- High Risk – Permitted subject to strict oversight
  - Examples: HR recruiting, credit scoring, underwriting
- Limited Risk – Permitted with specific transparency requirements
  - Examples: Chatbots, AI-generated content
- Minimal Risk – No restrictions
The most detailed framework is around high-risk AI systems, which covers requirements for risk management, data quality and governance, technical documentation, record keeping, human oversight, and system accuracy, robustness and cybersecurity.
AI users would also be required to demonstrate that they are providing clear and adequate information to consumers and that they are taking prudent steps to ensure privacy and security.
Following the EU’s lead?
In the five years following the passage of GDPR, a number of U.S. states, including California, Vermont, Massachusetts and Colorado, have incorporated elements of those privacy standards into their own regulations. Similarly, large U.S. companies with global footprints, including many in the financial services space, have developed their privacy practices to broadly comply with GDPR. Many observers expect this will be the case with AI as well.
Currently, U.S. regulators at both the federal and state levels have been relying on existing laws, such as the Consumer Financial Protection Act with its UDAAP provisions, to regulate the use of AI in financial services. Often these regulators are focusing on the same concerns addressed in the AI Act, but in a more piecemeal fashion.
For example, at the local level, New York City’s Automated Employment Decision Tool Law requires an annual audit of AI tools to root out bias. It also requires disclosure to applicants that AI or machine learning will be used to evaluate them. Privacy and security protections are mandated by the law as well.
Similarly, Colorado adopted the Algorithm and Predictive Model Governance Regulation in 2023 to ensure life insurers’ use of AI models does not result in unfairly discriminatory insurance practices with respect to race.
At the federal level, regulators have also been active. This past June, several federal regulators, including the CFPB, jointly proposed a rule establishing quality control standards for automated valuation models (AVMs) used in mortgage lending.
Adding to its 2022 guidance on the use of complex algorithms in credit decisions, the CFPB published a circular in September 2023 on adverse action notices and credit denials when AI is used.
Finally, this past fall, the Biden Administration issued a wide-ranging Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
In all, the order requires more than 100 specific actions from over 50 federal entities. Although broader in scope than the EU’s AI Act, the executive order targeted the same core issues addressed in the proposed EU regulation: AI bias, consumer protection, privacy and security, and permissible government use of AI. It also explicitly called for U.S. leadership on these issues internationally. So, if the financial services industry is serious about preparing for new AI regulation, a careful reading of the new AI Act might just be the place to start.