Artificial Intelligence (AI) is often cast as the workplace bad guy, threatening Big Brother-style surveillance, unfair discrimination and, ultimately, job losses.
But can this supposedly dehumanising technology help make workplaces more human? That’s the question organisational behaviour researchers at Brunel Business School hope to answer.
Using intelligent algorithms to keep tabs on worker wellbeing can help managers spot and respond to staff distress and treat workers with compassion, reckons Dr Ace Simpson. It can also help companies hold on to staff and improve their bottom line.
“Responding to the suffering of employees in a timely manner enhances employee wellbeing, resilience, engagement, commitment and loyalty,” he said. “With AI pattern recognition, it will likely be possible to begin preparing to provide support before the employee even realises they need it.”
Changes in mood and routines, like a spike in sick days, lateness or missed deadlines, are signs that can tell switched-on managers someone is struggling at work. And computers can learn to do the same thing, Simpson told The Academy of Management Conference in Boston.
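The kind of routine-change detection described above can be illustrated with a simple baseline-deviation check. This is a minimal sketch of the general idea only: the metric names, data and threshold are assumptions for illustration, not the logic of any system mentioned in this article.

```python
# Illustrative sketch: flag staff whose routine metrics (sick days,
# lateness, missed deadlines) deviate sharply from their own baseline.
# All names, data and thresholds here are invented for illustration.
from statistics import mean, pstdev

def routine_alerts(history, current, threshold=2.0):
    """history: {metric: [past weekly counts]}, current: {metric: this week}.
    Returns metrics more than `threshold` standard deviations above baseline."""
    alerts = []
    for metric, past in history.items():
        mu, sigma = mean(past), pstdev(past)
        if sigma == 0:
            continue  # no variation in baseline; cannot compute a z-score
        z = (current[metric] - mu) / sigma
        if z > threshold:
            alerts.append((metric, round(z, 1)))
    return alerts

history = {"sick_days": [0, 1, 0, 0, 1, 0], "late_arrivals": [1, 0, 1, 1, 0, 1]}
current = {"sick_days": 3, "late_arrivals": 1}
print(routine_alerts(history, current))  # → [('sick_days', 5.7)]
```

A sudden jump in sick days stands out against the person's own history, while an unremarkable week of lateness does not; real deployments would need far richer signals and, as discussed below, careful ethical safeguards.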
“Changes in routine could be a sign of workplace harassment, grief, home abuse or any other number of issues. AI pattern recognition has potential to support noticing, appraising and responding to employee suffering more quickly than humans, ensuring fewer people fall through the cracks.”
Mood changes among teams are what Vibe, a tool from Tokyo company AIR, picks up on. Vibe scans workers’ chat messages for emoji and trigger words to monitor morale. A bot pools the data and flags mood changes to managers in real time, homing in on happiness, disappointment, disapproval, irritation and stress.
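Keyword-and-emoji mood flagging of this kind can be sketched in a few lines. This is loosely inspired by the description of Vibe above, but the word lists, scoring and flagging rule are entirely invented assumptions, not the product's actual method.

```python
# Illustrative sketch of keyword/emoji mood flagging; the word lists and
# the flagging rule are assumptions, not any real tool's logic.
NEGATIVE = {"frustrated", "stressed", "exhausted", "overwhelmed", "😞", "😡"}
POSITIVE = {"great", "thanks", "excited", "😀", "🎉"}

def mood_score(messages):
    """Count positive and negative signal tokens across a batch of messages."""
    pos = neg = 0
    for msg in messages:
        tokens = msg.lower().split()
        pos += sum(t in POSITIVE for t in tokens)
        neg += sum(t in NEGATIVE for t in tokens)
    return pos, neg

def flag_for_manager(messages, ratio=1.0):
    """Flag the channel when negative signals outnumber positive ones."""
    pos, neg = mood_score(messages)
    return neg > ratio * max(pos, 1)

chat = ["totally exhausted 😞", "deadline again, so stressed", "thanks 🎉"]
print(flag_for_manager(chat))  # → True
```

Even this toy version makes the privacy trade-off concrete: the bot must read every message to produce its aggregate signal, which is exactly the ethical tension Dr Simpson raises.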
Shanghai technology firm Deayea, meanwhile, monitors high-speed train drivers’ brainwaves for stress and fatigue. If a driver dozes off, an alarm sounds in the cabin. Hangzhou Zhongheng Electric likewise uses neuro-monitoring to highlight spikes in depression or anxiety in factory workers. Managers then adjust production-line pace or add breaks. The company said its ‘emotional surveillance programme’ boosted profits to the tune of hundreds of millions of dollars.
“With these examples in mind, it’s essential to think about the ethical implications of AI-assisted workplace compassion,” Dr Simpson said. “We see major risks from its widespread use, such as invasion of privacy, people coming to rely on AI to tell them when to show compassion, and the oversimplification of human experience into algorithms.”
Dr Simpson’s study proposes an AI model for boosting workplace compassion: Noticing, Empathising, Appraising and Responding (NEAR) to suffering by examining workplace routines, communication, relational networks and leadership.
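The four NEAR stages can be pictured as a simple processing pipeline. The stage logic below is invented purely for illustration of how Noticing, Empathising, Appraising and Responding could chain together; the study itself does not prescribe any implementation.

```python
# Minimal sketch of the NEAR cycle as a pipeline; all thresholds,
# field names and messages are assumptions made for illustration.
def notice(signals):
    """Noticing: keep only signals that cross a simple attention threshold."""
    return [s for s in signals if s["severity"] >= 1]

def empathise(signal):
    """Empathising: attach a human-readable framing to the raw signal."""
    return {**signal, "framing": f"{signal['employee']} may be struggling"}

def appraise(signal):
    """Appraising: judge how urgent a response is."""
    return {**signal, "urgency": "high" if signal["severity"] >= 3 else "normal"}

def respond(signal):
    """Responding: route to a human manager rather than acting automatically."""
    return f"Suggest check-in with {signal['employee']} (urgency: {signal['urgency']})"

def near_pipeline(signals):
    return [respond(appraise(empathise(s))) for s in notice(signals)]

signals = [{"employee": "A", "severity": 3}, {"employee": "B", "severity": 0}]
print(near_pipeline(signals))  # → ['Suggest check-in with A (urgency: high)']
```

Note that the final stage only suggests a human check-in; keeping the response in human hands reflects Dr Simpson's caution against letting AI decide when to show compassion.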
He does not endorse using AI to scan workers’ chats or monitor their brainwaves. But the principle of using AI to promote wellbeing through less invasive means, he says, has great potential: “As the world becomes increasingly AI-mediated, we want business leaders to reflect on how these technologies, often viewed as dehumanising, might be used to promote human wellbeing.”
Reported by:
Hayley Jarvis,
Media Relations
+44 (0)1895 268176
hayley.jarvis@brunel.ac.uk