Update: DOL Addresses AI in the Workplace
by Raabia Cheema, Misti Mukherjee, and David Sotolongo
Artificial intelligence (“AI”) is a powerful force in the modern workplace, and AI-powered tools are now in common use. In October 2024, the Department of Labor (“DOL”) published guidance for employers on best practices for implementing AI technologies, to ensure that “workers benefit from the new opportunities and are shielded from potential harms.” This DOL guidance builds upon the agency’s previously released guidance on the impact of AI on the Fair Labor Standards Act and on equal employment opportunity for federal contractors.
AI Principles. In 2023, President Biden issued an Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (“AI Executive Order”), directing the DOL to develop best practices for employers, agencies, and federal contractors to mitigate the possible harms and maximize the benefits of AI for workers. In response, the DOL released a document outlining several artificial intelligence principles (“AI Principles”), to provide both AI developers and employers with a shared set of guidelines for implementing these new technologies. The AI Principles include centering worker empowerment, ethically developing AI systems, establishing AI governance systems (including data privacy), protecting and supporting worker rights, and ensuring transparency in AI use.
New Guidance. To help employers and AI developers implement the above AI Principles, the DOL issued additional guidance in October 2024. While the DOL warns that these practices are not “intended as a substitute for existing or future federal or state laws and regulations”, they can apply to a multitude of sectors and can be customized to fit the needs of individual workplaces.
The DOL recommendations are as follows:
Centering Worker Empowerment. Employers should begin by “centering workers’ experiences and empowerment” throughout the AI lifecycle, integrating “early and regular input” from employees about the use of AI technologies and bargaining in good faith over the use of AI in union workplaces.
Ethically Developing AI. Developers should establish standards on the use of AI to “protect workers’ civil rights, mitigate risks to workers’ safety, and meet performance requirements.” The DOL also recommends that developers conduct impact assessments and independent audits of worker-impacting AI so that employers can understand the efficacy and strategic consequences of using AI products.
Establishing AI Governance and Human Oversight. Employers are encouraged to establish governance structures to provide coordination and consistency in worker-impacting AI systems. These structures should incorporate employee input and be bolstered by training employees about AI systems.
The DOL reiterates that employers should not rely solely on AI, or information collected through electronic monitoring, to make significant employment decisions. Employers should “ensure meaningful human oversight of any such decisions supported by AI systems.” Employers should additionally identify and document the kinds of significant employment decisions impacted by the use of AI and inform employees and job applicants as to how AI is used in those decisions.
Ensuring Transparency in AI Use. Employers should provide appropriate disclosures and advance notice to employees if they intend to use worker-impacting AI. The guidance further states that employers should ensure that workers are informed about what data is collected and stored about them, and for what purpose.
Protecting Labor and Employment Rights. Employers should not use AI systems that “undermine, interfere with, or have a chilling effect on labor organizing and other protected activities,” and must not use AI systems to detect or limit labor organizing or other protected worker activities.
Employers should also mitigate any risks that AI systems pose to workers’ health and safety and ensure that such systems do not undermine workers, such as by reducing wages or break times. Employers should ensure that all systems comply with anti-discrimination laws and encourage workers to raise concerns about the use and impact of AI in the workplace.
Using AI to Enable Workers. Before procuring AI technologies, employers should consider how those technologies could impact workers, and before deploying them broadly, employers should consider piloting the use of AI systems.
Supporting Workers Impacted by AI. To help prevent worker displacement, employers should provide workers with appropriate training on how to use AI systems to complement their work, and should support further education and training for upskilling.
Ensuring Responsible Use of Worker Data. Finally, employers should avoid the collection, retention, and other handling of worker data that is not necessary for a legitimate and defined business purpose. Employers should also secure and protect workers’ data.
How Should Employers Respond? Employers should audit and understand the current use of artificial intelligence in their ecosystems, evaluate the areas of noncompliance or ineffective governance, and address (proactively and transparently) the expectations of employees who use AI as a tool at work.
Conduct an audit of AI in your workplace. According to Microsoft’s 2024 Annual Work Trend Index, 75% of knowledge workers use AI at work, and the vast majority of these employees are bringing their own AI to work. This means that employers may be unaware of how exactly AI is being used in the workplace and may be missing out on benefits that come from strategic and centralized use. Additionally, unregulated AI use can put company data, processes, and decisions at risk of legal challenge. Employers should identify the AI systems in use within their organizations, create processes for consistency, integrity, and nondiscrimination, and consider the elements of the DOL guidance to examine strategic efficacy and impact.
Design and Implement AI Workplace Policies. Employers should design and establish guidelines, policies, and practices for how AI is used and managed in a manner that reflects the organization’s values and complies with legal requirements. For example, is AI used to closely monitor and manage employees? Does the employer record workers’ conversations or track their movements using wearable devices, cameras, radio-frequency identification badges, or GPS tracking devices? Does the employer monitor employees’ computers with keyloggers or software that takes screenshots, webcam photos, or audio recordings throughout the day? If so, review this Memorandum from the General Counsel of the National Labor Relations Board on unlawful electronic surveillance and automated management practices when designing corporate policies on employee productivity, including disciplining employees who fall short of quotas, penalizing employees for taking leave, and issuing individualized directives throughout the workday.
Do recruiters understand the discriminatory impact of using AI in recruitment and selection? The Equal Employment Opportunity Commission released a technical assistance document entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees”. While technology may be evolving, anti-discrimination laws still apply.
More recently, a policy statement from the Consumer Financial Protection Bureau provides that employers who purchase or use certain reports about current or prospective employees, including reports generated by AI-powered technologies that assess employee productivity, must comply with various requirements of the Fair Credit Reporting Act. These include obtaining employees’ consent before purchasing such reports and providing notices to employees before taking adverse employment actions based on them.
These are just a few examples of the legal implications of using AI at work. An informed approach to AI governance, one that examines all angles of use and legal impact, engages employee stakeholders, and accounts for current practices, is essential to designing and implementing clearly defined HR policies that ensure the ethical, consistent, compliant, and transparent use of AI technology.
Design and Conduct Tailored Employee Training. AI is already integrated into various industries, and a multigenerational workforce will naturally seek out technology and tools that are perceived to optimize performance. New policies are only as effective as they are understood. Once the policies and governance structure have been decided, employers should provide tailored training to employees on how to use AI for their role or function.
Contact us to learn more.