Research from Cornell University reveals that organisations using AI to monitor employee behaviour and productivity may face increased complaints, reduced productivity, and higher turnover rates unless the technology is perceived as a developmental tool rather than a merely evaluative one. The study highlights that surveillance technologies are often seen as more intrusive than human oversight, leading employees to perceive a loss of autonomy. This perception can trigger resistance and harm performance, underscoring the unintended consequences of using rapidly evolving technologies for employee assessment.
The researchers, including Emily Zitek, an associate professor of organisational behaviour, and first author Rachel Schlund, argue that there is an opportunity to foster positive change in employee behaviour and productivity if surveillance tools are accepted. They propose that when these tools are framed as supporting employee development rather than as judgmental instruments lacking context and accuracy, they can lead to improved performance and greater acceptance. The findings are detailed in the paper “Algorithmic Versus Human Surveillance Leads to Lower Perceptions of Autonomy and Increased Resistance.”
In their research, Zitek and her team conducted four experiments with nearly 1,200 participants. In one experiment, participants reflected on their experiences of being monitored by either AI or human supervisors, reporting a marked decrease in perceived autonomy and an increase in resistance behaviours when monitored by AI. Another set of experiments had participants brainstorm in groups and individually under the watch of either a human assistant or an AI tool presented via a Zoom videoconference as “AI Technology Feed.” Feedback suggesting insufficient idea generation was given in both scenarios, but when the feedback came from the AI, participants generated notably fewer ideas and criticised the monitoring more than when it came from a human.
The researchers also explored scenarios in a simulated call centre, where participants were monitored by either humans or AI for performance evaluation or developmental feedback. When the surveillance was framed as developmental, participants perceived less infringement on their autonomy and showed no increased intention to quit.
The findings suggest that organisations should carefully consider how they implement surveillance technologies. By emphasising developmental support and allowing employees to contextualise the surveillance data, organisations can mitigate the negative impacts of AI monitoring and improve employee acceptance and performance. The research underscores the delicate balance required when deploying AI surveillance tools in the workplace: such deployments must respect and value employee autonomy.
More information: Rachel Schlund et al, Algorithmic versus human surveillance leads to lower perceptions of autonomy and increased resistance, Communications Psychology. DOI: 10.1038/s44271-024-00102-8
Journal information: Communications Psychology
Provided by Cornell University