Proceed with Caution When Remotely Monitoring Employees in the EU


One effect of COVID-19 has been a sharp increase in businesses' use of remote surveillance solutions to protect corporate resources and monitor the productivity and behavior of employees working away from the office for the foreseeable future. Although such tools can provide valuable performance insights and mitigate data loss and other risks, they can also significantly increase a business's legal risk.

This is especially true for businesses with employees working in the European Union (EU), where employee privacy is typically protected to a much greater extent than in the United States. Indeed, the German subsidiary of international retailer H&M recently learned a 35.3 million euro (approximately US$41 million) lesson about these legal risks after being fined by a supervisory authority in connection with a workforce monitoring program that "led to a particularly intensive encroachment on employees' civil rights."

Employers are permitted to monitor EU employees at work, so long as they comply with the laws and regulations of both the EU and individual member states. This includes the EU's General Data Protection Regulation (GDPR), which applies to any U.S. or multinational business that has employees in, or monitors the behaviors of, individuals in the EU.

Remote surveillance solutions increasingly offer sophisticated features that promise—among other things—to identify suspicious activity, detect potential insider threats and provide real-time alerts about employee behaviors. But automated technologies that generate insights or conclusions about employees based on data collected from employer-monitored systems, networks and connected endpoints can generate additional risk because the GDPR (as well as the laws of some individual member states) provides protections for individuals subject to automated decision-making and profiling.

Further, the use of employee surveillance solutions powered by artificial intelligence (AI) and machine learning (ML) technologies may trigger additional compliance requirements under the GDPR. In addition to exploring those issues and others, this article offers risk-mitigation strategies that employers should consider before monitoring employees in the EU.

Businesses That Monitor Employees in the EU Must Comply with the GDPR

Remote surveillance programs can generate large amounts of personal data about a business's employees. Common features include individual keystroke logging, live recording or screenshots of application windows or device screens, and monitoring of activity on websites and applications.

All of these forms of data collection are subject to the GDPR, and businesses must comply with the law when processing EU employees' personal data in connection with remote electronic surveillance. Note that "process" is defined broadly enough to capture essentially any operation performed on personal data.

The GDPR specifically protects "natural persons, whatever their nationality or place of residence, in relation to the processing of their data," when such natural persons are in the EU. Personal data is broadly defined and includes any information relating to an individual who can be identified by reference to an identifier such as a name, an identification number, location data or an online identifier.

Unlike some U.S. laws (such as the California Consumer Privacy Act), the GDPR does not include carve-outs for personnel records or other employee-related information. This means that the GDPR protects personal data relating to an employee working from France or Germany, even if that same information would not be protected for an employee working in New York or California.

Lawfulness, fairness, and transparency are three key principles of the GDPR, which, among other things, requires that a business identify a lawful basis for processing its EU employees' personal data, be transparent about how and why that data is being processed, and refrain from using it in a way that is unduly detrimental, unexpected or misleading to the individuals concerned, or is otherwise unlawful.

In assessing lawful basis, the principles of necessity and proportionality must also be considered: the processing of the employees' personal data must be objectively necessary to achieve the stated purposes of conducting the remote surveillance, and it must not be possible to achieve those aims by the processing of less data or some other less intrusive means. In sum, employers' interests in monitoring employees must always be deliberately, fairly and transparently balanced against employees' rights to data protection and privacy.

The importance of conducting this assessment was highlighted by the case of Bărbulescu, in which the European Court of Human Rights found that a Romanian company's decision to fire a sales engineer for using a personal Internet chat account on his work computer failed to strike a fair balance between the employee's right to respect for his private life and correspondence and the employer's right to take measures in order to ensure the smooth running of the company.

Because consent is rendered invalid by "[a]ny element of inappropriate pressure or influence" affecting the individual's decision, the GDPR notes that consent can be relied upon as the lawful basis for processing in the employer-employee context only "in a few exceptional circumstances." If an employer relies on "legitimate interest" as the lawful basis, it must also inform monitored employees of their right to object to the processing and establish straightforward methods for them to do so.

An employee who objects to the processing must provide specific reasons for doing so that are based on his or her particular circumstances. In order to continue processing, the employer must be able to demonstrate compelling legitimate grounds that override the objection. Essentially, this would require the employer to conduct a fact-specific balancing test of the employer's legitimate interest in the processing against the employee's grounds for objection.

Using AI- and ML-Powered Features May Trigger Additional Requirements

An increasing number of remote surveillance programs have harnessed the power of ML and AI to analyze collected data and derive insights about monitored employees. Examples include features that generate reports about employee productivity or that scan employee communications to detect and provide real-time alerts of potential data security or other company policy violations. To the extent that the information generated or analyzed by the program relates to an identified or identifiable employee, it constitutes personal data that is protected under the GDPR.

More sophisticated AI- or ML-enabled employee monitoring programs may carry out automated decision-making or profiling (the automated processing of personal data to evaluate certain aspects about an individual, including analyzing or predicting work performance) based on the data collected. If the automated decision-making, which can include profiling, is conducted without any human involvement (called "solely automated decision-making" in the GDPR), it may be subject to additional requirements under Article 22 of the GDPR.

Program features that automatically notify management about potentially malicious activity by an employee, or that calculate and assign a security risk score to an employee based on his or her network activity, are examples of solely automated decision-making processes that can trigger additional requirements.

If a program feature relies on solely automated decision-making, the business should conduct a data protection impact assessment (DPIA) to determine whether its use may have a legal or similarly significant effect on the monitored employees. A decision that can jeopardize or adversely affect the terms of a monitored individual's employment is very likely to be considered a significant effect. In that case, a business may not use the feature unless it has the explicit consent of the monitored employees (the most common scenario), the processing is necessary for entering into or performing a contract (much less likely), or the processing is otherwise authorized under EU or applicable member state law.

Where solely automated decision-making is permitted, businesses must also specifically provide the monitored employees with information about those processes, establish straightforward methods for them to request human intervention or challenge a decision, and regularly verify that the decision-making feature is working as intended. If a large number of employees are being monitored, businesses should also consider appointing a data protection officer who is qualified to oversee the program.

Additional Requirements Must Be Satisfied to Monitor Sensitive Personal Data

Additional requirements will apply if a business's employee monitoring program collects or considers sensitive personal data, which cannot be processed unless one of 10 exceptions is met. Since consent is often invalid in the employer-employee relationship, as noted above, businesses should seek to establish one of the other listed exceptions.

The categories afforded special protections under Article 9 of the GDPR include personal data that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic information, biometric identifiers, health information, or information concerning a person's sex life or sexual orientation. Businesses should therefore scrutinize their use of features that may result in the processing of this kind of data.

Any monitoring system that either (a) captures images or recordings of the employee or their home, or (b) tracks employees' computer or network usage beyond their interaction with the employer's own network (such as website visits) is particularly likely to involve the processing—even if inadvertently—of sensitive personal data, and should be deployed only after thorough review.

Risk-Mitigation Strategies to Consider Before Monitoring EU Employees

Employers that engage in remote monitoring of employees in the EU can lessen their legal risks by taking one or more of the following steps:

  • Perform a DPIA before launching any employee surveillance program, paying careful attention to the potential for acquiring and processing sensitive personal data, and the potential impacts of features that engage in automated decision-making or profiling.
  • Make certain that all processing of personal data by the surveillance program can and will be carried out in accordance with all the requirements of the GDPR. This may require certain features to be customized or disabled.
  • Ensure that the program also complies with applicable country-specific privacy and labor requirements, which may be stricter than the GDPR.
  • Verify that the program does not violate any existing union collective bargaining agreements or works council agreements, which are increasingly common for larger U.S.-based multinational companies operating throughout Europe and may impose additional or stricter requirements than applicable laws or regulations.
  • Obtain and document employees' prior, informed consent to being monitored while working remotely.

The legal implications of employee monitoring for a particular business will depend on the features of the surveillance program and how the tool is deployed. Because these factors will necessarily vary, businesses should be sure to understand how a remote employee surveillance solution works and develop a compliance strategy before it is launched in order to avoid increased risk—or potential violations—under the GDPR.

Kara K. Trowell is an attorney with Davis Wright Tremaine in New York City. © 2020 Davis Wright Tremaine. All rights reserved. Reposted with permission of Lexology.
