In mid-May 2024, Meta employees in the U.S. protested a new internal AI tool that tracks mouse movements and keystrokes [1].
The backlash highlights a growing tension between corporate AI development and worker privacy. The software in question monitors how employees interact with their computers to gather data for AI training [2, 3]. Workers called the tool an intrusive form of workplace surveillance that threatens their privacy and erodes trust within the company [3, 4]. A primary point of contention is the lack of a clear opt-out mechanism for those whose data is being harvested [3].
The protests come during a period of acute instability for the workforce: Meta plans to cut roughly 8,000 jobs worldwide, with the layoffs scheduled for 20 May 2024 [1].
The timing of the software rollout, just days before the scheduled job cuts, has intensified anger among the remaining staff [1]. Employees said the tracking software could be used to identify candidates for termination or to automate roles currently held by humans [3].
Meta has not provided a public rebuttal to the specific claims regarding the lack of opt-out options. However, the internal unrest reflects a broader trend of tech workers resisting the use of their own professional output to build the tools that may eventually replace them [3, 4].
This conflict illustrates the "AI paradox" facing tech giants: training next-generation models requires high-quality human data, which often means monitoring the very employees most likely to resist such surveillance. By rolling out tracking tools immediately before a mass layoff, Meta has fused the fear of automation with the fear of surveillance, potentially damaging long-term employee retention and morale.