Meta is rolling out new internal software that will monitor employees' mouse movements, clicks and keystrokes to train its artificial intelligence models, marking a significant shift in how tech firms gather real-world data for next-generation AI systems. The initiative, revealed in internal memos, involves installing tracking tools on the work computers of US-based employees. The system will collect detailed interaction data, including how staff navigate software, use shortcuts and perform everyday digital tasks. The data will also include occasional screen snapshots to provide context, allowing AI models to better understand how humans interact with digital interfaces.

Push to build AI agents

The move is part of Meta's broader effort to develop advanced AI agents capable of performing complex workplace tasks autonomously. According to internal communications, the goal is to improve areas where AI still struggles, such as selecting options from menus or executing multi-step workflows. "For agents to understand how people actually complete everyday tasks using computers, we need to train our models on real examples," an internal note said. Meta has increasingly shifted its strategy toward becoming an AI-first company, investing heavily in automation tools and restructuring teams around AI-driven workflows.

No opt-out, concerns rise

The rollout has sparked unease among employees, particularly after reports that participation is mandatory on company-issued devices. Internal discussions cited in reports show some employees questioning the lack of an opt-out option and expressing discomfort with the level of monitoring. Despite the backlash, Meta has said safeguards are in place to protect sensitive information and that the data will not be used for performance evaluation. The company also emphasised that monitoring work activity on corporate devices is not a new practice, though the scale and purpose of the new system are significantly broader.
Privacy and regulatory questions

The development has raised fresh concerns about workplace surveillance and data privacy, particularly as companies expand their use of AI training datasets. Legal experts note that while US regulations allow such monitoring in many cases, similar practices could face restrictions in Europe under stricter data protection laws. Analysts say the initiative reflects a growing trend among tech companies to rely on real-world behavioural data to improve AI performance, especially as traditional datasets become saturated.

Part of wider industry shift

Meta's move comes amid a broader transformation in the technology sector, where companies are racing to build AI systems capable of automating large portions of digital work. The company has already reorganised parts of its workforce under AI-focused teams and encouraged employees to integrate AI tools into daily operations. Industry experts say the use of employee-generated data highlights the increasing value of human-computer interaction patterns in training AI systems. However, they caution that such approaches could redefine workplace boundaries and expectations, particularly if monitoring becomes more widespread. For now, Meta's initiative underscores how the race for AI dominance is pushing companies to explore new and controversial sources of training data.