



EU: Proposed Artificial Intelligence Law Could Affect Employers Globally

Companies with employees in the European Union (EU) could be affected by a landmark proposal to regulate the use of artificial intelligence (AI) across the region.

The EU Artificial Intelligence Act, now working its way through the legislative process, is expected to shape technology and standards worldwide.

The act comprises a broad set of rules seeking to regulate the use of AI across industries and social activities, noted Jean-François Gerard, a Brussels-based attorney for Freshfields Bruckhaus Deringer.

The AI regulation proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rule, he said. The proposal would classify AI applications as posing unacceptable, high, limited or minimal risk, according to a client briefing Gerard helped produce.

The proposed law, which the European Commission (EC) introduced in April 2021, is expected to play a major part in shaping AI in the EU, serve as a model for regulatory authorities around the world and affect companies globally that have operations in Europe.

"The AI Act aims to ensure that AI systems placed and used in the European market are safe and respect existing legislation on fundamental rights and values of the European Union, among which is the General Data Protection Regulation [GDPR]," said Johanne Boelhouwer, an attorney with Dentons in Amsterdam.

"In this way, the act facilitates an internal market for safe and reliable AI systems while preventing market fragmentation. AI needs to be legitimate, ethical and technically robust," she added.

A Risk-Based Approach

In its risk-based approach, the AI Act distinguishes between allowing a light legal regime for AI applications with negligible risk and banning applications with unacceptable risk, Boelhouwer said. "Between these extremes, stricter regulations apply as the risk increases. These range from nonbinding self-regulatory soft law impact assessments with codes of conduct to onerous, externally audited compliance requirements," she said.

High-risk AI systems would be allowed in the European market if they meet mandatory requirements and undergo a prior conformity assessment, according to Boelhouwer, who said these systems must meet rules related to data management, transparency, record keeping, human oversight, accuracy and security.

The proposal will become law once both the Council of the EU, representing member states, and the European Parliament agree on a common version of the text, she said. Negotiations are expected to be complex, given the thousands of amendments that political groups have proposed in the European Parliament, Boelhouwer said.

Beyond its application in EU member countries, the act is expected to embed norms and values into AI technology's architecture, extending its influence beyond Europe, according to Marc Elshof, an attorney with Dentons in Amsterdam.

Employment Uses: High Risk

AI systems used in employment contexts such as recruiting and performance evaluation would be considered "high risk" under the draft legislation and subject to heavy compliance requirements, legal experts said.

"This will be new to the many employers who have been using so-called people analytics tools for years with limited compliance requirements" other than data privacy and, in some jurisdictions, the need to inform and consult with employee representatives, Gerard said.

Employers would need to ensure the AI systems they consider using in these contexts meet all of the requirements of the act, including that they have successfully undergone a conformity assessment and compliance certification, Boelhouwer said. The conformity assessment would be an obligation for AI systems providers, but employers shouldn't use systems that haven't passed the assessment, she said.

"For example, employers who consider giving employees a bad performance review on the basis of algorithms that can read people's feelings through text, voice tone, facial expressions and gestures can't simply implement such systems without ensuring compliance with the AI Act," she said.

Discriminatory Impact

Such AI systems may appreciably impact people's future career prospects and livelihoods, Boelhouwer noted.

"The act emphasizes that companies should be very mindful of bias in AI systems used throughout recruitment and in the evaluation, promotion or retention of workers. These AI systems may lead to discrimination, for example, against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation," she said.

AI systems used to monitor employees' performance and behavior may also affect their rights to data protection and privacy, and employers should continue to comply with the GDPR when the use of AI systems involves processing personal data, she added.

"Recruitment algorithms, which have been widely used by large employers, especially in the tech industry, led to some heated discussion about algorithmic bias and discrimination," Gerard said, adding that some have called for banning AI from recruitment.

The act would apply to AI users inside the EU as well as to those in other countries if the system's output, such as content, recommendations or decisions, affects activity in the EU.

Dinah Wisenberg Brin is a reporter and writer based in Philadelphia.
