
Comparison of EU and US AI legislation: déjà vu of 2020


October 21, 2024 – The EU and US AI law landscape feels like a repeat of the 2020 data privacy landscape. At the time, the General Data Protection Regulation (GDPR) was in full effect, while California and other states were developing privacy laws at breakneck speed. Many companies were still coming to grips with the GDPR when they were faced with a new onslaught of US privacy laws on a state-by-state basis.

Now companies are facing the same problem. The EU has just passed a comprehensive AI law, the EU AI Act, which imposes significant compliance obligations and antitrust-style mega-fines.

In the United States, legislatures are passing AI bills at a breakneck pace, with varying thresholds, coverage, and subject matter. Should global companies take the plunge and comply with the EU AI Act worldwide, or take a more nuanced, jurisdiction-by-jurisdiction approach?

Extensive and imposing

The EU AI Act is a comprehensive law that EU regulators spent years developing. One of its unique features, not found in US law, is a complete ban on certain ‘prohibited AI practices’ (Article 5). These include assessing whether an individual is likely to commit a crime, real-time biometric identification by law enforcement (think Minority Report), and social scoring of individuals.

In addition to outlining prohibited practices, the EU AI Act contains a list of high-risk AI practices. These include, but are not limited to, the use of AI in employment decisions, credit scoring, insurance, and access to services. For these high-risk AI practices, AI providers must implement a complete risk management program that addresses the following areas:

• Data governance

• Technical documentation

• Registration

• Human oversight

• Accuracy, robustness, and cybersecurity

• Quality management

Like the GDPR, the EU AI Act imposes significant fines: up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices, and up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher, for other violations (Article 99). The law requires each EU member state to appoint at least one independent and impartial authority to monitor and enforce the Act’s requirements.

The US, on the other hand, is taking a patchwork approach. Instead of comprehensive federal legislation, we see a state-by-state and agency-by-agency approach. To date, these laws generally fall into four main categories: (i) consumer protection; (ii) labor rights; (iii) image and likeness rights; and (iv) transparency/risk assessment requirements for high-risk AI processing.

Consumer protection

In terms of AI consumer protection law, Utah is among the first movers. In May 2024, it added AI requirements to its consumer protection statutes. The Utah AI Policy Act requires Utah companies to disclose their use of generative AI tools and holds them liable for any consumer protection violations those tools cause.

At the federal level, the FTC has used its consumer protection authority under Section 5 of the FTC Act to police unfair and deceptive trade practices involving AI. In 2022, Weight Watchers agreed to pay a $1.5 million civil penalty in a settlement with the FTC, partly over allegations that the company improperly collected data from children to train its models and algorithms. The settlement included “algorithmic disgorgement” – that is, Weight Watchers was required to delete all models trained on such data.

More recently, on September 25, 2024, the Federal Trade Commission (FTC) cracked down on companies making misleading or fraudulent claims about their use of AI tools. This included taking action against DoNotPay, a company that claimed to offer an AI service that was “the world’s first robot lawyer.”
DoNotPay agreed to a $193,000 settlement with the FTC pursuant to a consent order. The consent order also requires that DoNotPay refrain from “representing that its Service or any other Internet-enabled product or service it offers acts as a human lawyer or other type of professional, unless the representation is not misleading and DoNotPay has competent and reliable evidence to substantiate the representation.” In addition, DoNotPay must notify consumers of the order and file compliance reports with the FTC.

AI in employment decision-making

On the employment front, Illinois recently passed a law that prohibits employers from using AI systems that discriminate against employees or applicants based on protected classes.

Additionally, the amendment explicitly prohibits the use of race, or of zip codes as a proxy for race, in AI systems that make employment decisions. Illinois’ requirements align with New York City Local Law 144 in regulating automated decision-making tools in the employment context. Although Local Law 144 does not explicitly prohibit the use of race or zip codes in AI systems, it imposes strict notification and audit requirements.

Where employers use AI systems “to substantially assist or replace discretionary decision-making,” Local Law 144 requires publicly available third-party bias audits of automated employment decision tools.

Image and likeness rights

Generative AI is also regulated by state laws and lawsuits concerning image and likeness rights. Following the Hollywood actors’ and writers’ strikes, and high-profile lawsuits by Sarah Silverman and others, California has taken action. In September 2024, Governor Gavin Newsom signed two AI bills aimed at protecting performers.

AB 2602 requires contracts with actors and other performers to specify whether generative AI will be used to create a digital replica of the performer’s voice or likeness. AB 1836 prohibits the use of digital replicas of deceased performers without permission from the performer’s estate.

Transparency and risk assessments

The majority of US states’ comprehensive data privacy laws require transparency around the use of AI to process personal data and to make decisions affecting important rights such as employment, housing, and access to services. These laws also generally give consumers the right to opt out of such processing.

Colorado’s AI law, set to take effect in 2026, goes even further. It imposes risk assessment and bias assessment requirements for any “high-risk artificial intelligence system” that makes, or is a substantial factor in making, a consequential decision.

For the purposes of the Act, a “consequential decision” is a decision that has a material legal or similarly significant effect on the provision or denial to a consumer of, or the cost or terms of:

• Education

• Employment

• Financial or lending services

• Essential government services

• Health care services

• Housing

• Insurance

• Legal services

The Colorado AI Act has even more substantial transparency and notification requirements. To give just one example, developers and deployers of “high-risk” AI systems must publicly post on their websites a description of their high-risk systems and how those systems address the risks of algorithmic discrimination. Developers must further report to the Attorney General “any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses” of the system. Colo. Rev. Stat. § 6-1-1702(5).

Where to go from here?

The trend lines are clear: AI legislation is here to stay. While the US has not passed federal AI legislation with the same scope as the EU AI Act, we are already seeing significant risk assessment and transparency requirements. As a result, AI companies must take a global approach to their AI risk management strategies or risk being left behind.


The opinions expressed are those of the author. They do not reflect the views of Reuters News, which is committed to integrity, independence and freedom from bias under the Trust Principles. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.


Lily Li is the founder and president of Metaverse Law. She advises global clients on their AI risk assessments and data protection impact assessments, and supports her clients’ overall governance, risk and compliance (GRC) programs. She also holds the GIAC Certified Forensic Analyst (GCFA) certification for advanced incident response and digital forensics and information privacy certifications such as the FIP, CIPP/US/E/M. She is based in Newport Beach, California, and can be reached at info@metaverselaw.com.


