In the rapidly evolving world of AI, federal regulators are once again signaling that companies and HR managers cannot rely on a “data made me do it” defense for employment decisions made with the help of AI systems. Building on guidance it issued in May, the U.S. Department of Labor (DOL) issued new guidance on October 16 reminding employers that they cannot hide behind an algorithm when an AI-generated employment decision violates federal law.
While the guidance states that no additional rules are being created, employers are reminded that existing laws still apply to their use of AI. In other words: if you couldn’t do it without AI, you still can’t do it with AI. While this sounds simple enough, ensuring compliance with existing laws can be difficult for employers, most of whom have little to no control over the AI platforms they use. Employers therefore need to know what responsibilities (per the DOL) they have when using AI in the workplace.
The DOL’s guidance is based on eight high-level principles first announced in President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The guidance is heavy on aspirational language, but here is a summary of the practical advice:
- Centering employee empowerment – The DOL encourages employers to involve employees (particularly those in underserved communities) in the design, development, testing, training, procurement, deployment, use, and supervision of AI. The guidance borrows language from the traditional employment context and directs employers to negotiate “in good faith” with unions regarding the use of AI (particularly monitoring) in the workplace.
- Ethical development of AI – This part of the guidance focuses on the civil rights, safety and job quality of employees. Some pitfalls the DOL warns against include AI systems with high error rates or systems that evaluate employees based on discriminatory performance standards. To combat these problems, the DOL is encouraging AI developers to create jobs dedicated to training and refining AI systems.
- Establishing AI governance and human oversight – As the title suggests, this segment encourages employers to regularly evaluate and refine their AI systems. The DOL recommends that employers: (a) adopt policies that govern the implementation and use of AI; (b) provide training on AI systems, including how to interpret AI recommendations; (c) limit the role of AI in making “key employment decisions”; (d) have an appeals process to challenge AI recommendations; and (e) ensure that these systems are regularly monitored.
- Ensuring transparency in the use of AI – The guidance recommends that employers inform employees in advance about the AI systems they use, what data those systems collect, and how that data is used. It also encourages employers to allow employees to challenge AI decisions and submit proposed corrections without fear of retaliation for making these reports.
- Protecting labor and employment rights – Employers are reminded that their AI-generated decisions are just as subject to employment law as those made by HR professionals and hiring managers. Employers using AI in their recruitment processes – and especially those relying on third-party platforms for AI decisions – must ensure that their systems do not embed discriminatory assumptions or have a disparate impact on protected groups of candidates.
- Using AI to empower employees – Referring to its Good Jobs Principles, the DOL generally encourages employers to use AI in ways that improve job quality for their employees. Practical suggestions include testing AI systems before deploying them, minimizing employee monitoring, and using AI to improve the predictability of workflows and scheduling.
- Supporting employees affected by AI – The DOL recognizes that the introduction of AI into the workplace means some jobs may become obsolete. In response, the DOL is encouraging employers to retrain workers likely to be replaced by AI and, if possible, find other positions for them within the organization.
- Ensuring responsible use of employee data – Finally, the guidance emphasizes the need to protect employee privacy, noting that employers should not collect more data about employees than is necessary to make legitimate employment decisions and should protect employee data from unauthorized access.
Although the DOL’s guidelines do not have the force of law, they provide insight into how the DOL may enforce current laws regarding the use of AI in the workplace. And depending on the outcome of the election, they could also foreshadow regulations the agency will propose in the future. We will continue to monitor all developments, but in the meantime, remember that the DOL will not be buying a “the data made me do it” defense.