October 21, 2024 – The EU and US AI law landscape feels like a repeat of 2020 data privacy law. At the time, the General Data Protection Regulation (GDPR) was in full effect, while California and other states were still developing privacy laws at breakneck speed. Many companies were still unfamiliar with the GDPR when they were confronted with an onslaught of new US privacy laws, state by state.
Now companies face the same problem. The EU has just passed a comprehensive AI law, the EU AI Act, which imposes significant compliance obligations and hefty, antitrust-style fines.
In the United States, state legislatures are passing AI bills at a rapid pace, with varying thresholds, coverage, and subject matter. Should global companies take the plunge and comply with the EU AI Act worldwide, or is a more nuanced, jurisdiction-by-jurisdiction approach warranted?
Extensive and imposing
In addition to outlining prohibited practices, the EU AI Act contains a list of high-risk AI uses. These include, among others, the use of AI in employment decisions, credit scoring, insurance, and access to services. For these high-risk uses, AI providers must implement a comprehensive risk management program that addresses the following areas:
• Data governance
• Technical documentation
• Registration
• Human oversight
• Accuracy, robustness and cybersecurity management
• Quality management
The US, on the other hand, is taking a patchwork approach. Instead of comprehensive federal legislation, regulation is emerging state by state and agency by agency. To date, these laws generally fall into four main categories: (i) consumer protection; (ii) labor rights; (iii) image and likeness rights; and (iv) transparency/risk assessment requirements for high-risk AI processing.
Consumer protection
In terms of AI consumer protection law, Utah was among the first movers. In May 2024, it added AI requirements to its consumer protection statutes. The Utah Artificial Intelligence Policy Act requires companies doing business in Utah to disclose their use of generative AI tools, and also holds companies liable for consumer protection violations committed through those tools.
At the federal level, the FTC has used its consumer protection authority under Section 5 of the FTC Act to police unfair and deceptive trade practices related to AI. In 2022, Weight Watchers agreed to pay a $1.5 million civil penalty in a settlement with the FTC, in part over allegations that it improperly collected data from children to train its models and algorithms. The settlement included “algorithmic disgorgement” – that is, Weight Watchers was required to destroy any models and algorithms trained on that data.
AI in employment decision making
On the employment front, Illinois recently passed a law that prohibits employers from using AI systems to discriminate against employees or applicants on the basis of protected classes.
In New York City, Local Law 144 requires publicly available third-party audits of automated employment decision tools where employers use them “to substantially support or replace discretionary decision-making.”
Image and likeness rights
Generative AI is also regulated through state laws and lawsuits over image and likeness rights. Following the Hollywood actors’ and writers’ strikes, and high-profile lawsuits by Sarah Silverman and others, California has taken action. This past week, Governor Gavin Newsom signed two AI bills aimed at protecting entertainers.
AB 2602 requires contracts with actors and other performers to specify whether generative AI will be used to create a replica of the performer’s voice or likeness. AB 1836 prohibits the use of digital replicas of deceased performers without permission from the performer’s estate.
Transparency and risk assessment requirements
Most comprehensive US state data privacy laws require transparency around the use of AI to process personal data and make decisions that affect important rights such as employment, housing and access to services. These laws also generally give consumers the right to opt out of such processing.
Colorado’s AI law, set to take effect in 2026, goes further still. It imposes risk assessment and bias assessment requirements on any “high-risk artificial intelligence system” that makes, or is a substantial factor in making, a consequential decision.
For the purposes of the Act, “consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of:
• Education
• Employment
• Financial or lending services
• Essential government services
• Health care services
• Housing
• Insurance
• Legal services
The Colorado AI Act also imposes substantial transparency and notification requirements. To give just one example, developers and deployers of “high-risk” AI systems must publicly post on their websites a description of those systems and how they address the risk of algorithmic discrimination. Developers must further report to the Attorney General “any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended use of the system.” § 6-1-1702(5).
Where to go from here?
The trend lines are clear: AI legislation is here to stay. While the US has not passed federal AI legislation on the scale of the EU AI Act, significant risk assessment and transparency requirements are already emerging. As a result, companies must take their AI risk management strategies global or risk being left behind.
The opinions expressed are those of the author. They do not reflect the views of Reuters News, which is committed to integrity, independence and freedom from bias under the Trust Principles. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.