The growing use of “algorithmic management” — artificial intelligence (“AI”) tools for managing, rating, and monitoring employees — presents some unsettling risks and tricky ethical questions. What can business leaders, who want to improve employee productivity and retention, do to ensure these tools are used wisely?
Machines have bad habits too
On the face of it, algorithmic management tools have the potential to help organisations improve employee morale, productivity, and safety: they can monitor employee sentiment, identify non-compliant behaviour, and measure performance. But cut through the clever marketing copy and the picture is murkier. Tools that claim to detect “a person’s true character” in automated interviews, or that monitor remote employees’ facial expressions and gestures through webcams, raise serious ethical questions about the employee-employer relationship and individual autonomy.
“So-called algorithmic management has spread from gig platforms and logistics warehouses into a range of other sectors and occupations, hastened by the pandemic.”
~ Sarah O’Connor, The Financial Times
The picture becomes murkier still when you consider the social risks posed by AI. Biases entrenched in patterns of language use become deeply embedded within machine learning tools, and have led to reports of “striking gender and racial biases” that perpetuate human prejudices. And if the insights these tools generate are treated as more robust precisely because they are data-driven and (on the surface) free from human interference, the dangers are greater still. One organisation’s absolute trust in its technology has already led to “the most widespread miscarriage of justice ever seen in British legal history.”
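To make the mechanism concrete, here is a minimal sketch of how a stereotype can surface in word embeddings, the building blocks of many language-based screening tools. The vectors below are invented toy values, not real model weights; real embeddings trained on large text corpora have been shown to exhibit the same pattern at much higher dimension.

```python
import numpy as np

# Illustrative sketch only: toy 3-dimensional "embeddings" invented for
# this example. Real models trained on everyday text show the same effect.
vectors = {
    "man":      np.array([ 0.9, 0.1, 0.2]),
    "woman":    np.array([-0.9, 0.1, 0.2]),
    "engineer": np.array([ 0.7, 0.6, 0.1]),
    "nurse":    np.array([-0.7, 0.6, 0.1]),
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A "gender direction" derived from the data itself...
gender_axis = vectors["man"] - vectors["woman"]

# ...projects occupation words onto a stereotype the model was never
# explicitly taught. A hiring tool built on these vectors inherits it.
for word in ("engineer", "nurse"):
    print(word, round(cosine(vectors[word], gender_axis), 2))
```

The point of the sketch is that no one programmed the association: it falls out of the patterns in the training data, which is exactly why it is so easy to miss.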
And then there’s the potential for these technologies to exacerbate the very problems they claim to solve, in particular employee morale and retention. If employees know their employer is keeping tabs on their every eye movement, they might spend less time taking screen breaks and more time working. But does that create a working environment, and a corporate culture, that will help their employer win the battle for skills in the long term?
Getting into the algo-rhythm
Questionable use cases, bad workplace culture, and entrenched biases are, on their own, cause enough for sleepless nights. But many leaders will want to engage with these tools anyway — because, let’s face it, the promise of better employee morale and higher productivity is too alluring to resist.
So, how can leaders use these tools successfully and avoid the negative fallout? By preparing themselves, and their people, to use them wisely.
In practice, this means investing in fundamental human processes and capabilities long before investing in algorithmic management — building critical thinking skills, establishing robust decision-making processes, improving internal reporting, and shaping a culture of accountability.
First, the organisation needs to target the tools at the right problems, in the right way, and make it clear what purpose they serve and how they will be managed. Is the tool generating insights to support decision-making, or is it making decisions itself? And if it is making decisions, is there room for human oversight, and how will that oversight be exercised? These tools need a clear scope, and they need to feed into a well-oiled decision-making process.
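As a concrete, and entirely hypothetical, illustration of that scoping decision, the sketch below routes a tool’s output as decision support rather than automated decision-making. Every name, action, and threshold in it is invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop gate. All identifiers here
# (Recommendation, route, REVIEW_THRESHOLD, the action names) are invented
# for illustration, not drawn from any real product.

@dataclass
class Recommendation:
    employee_id: str
    action: str          # e.g. "schedule_training", "flag_for_review"
    confidence: float    # the model's own confidence score, 0.0 to 1.0

REVIEW_THRESHOLD = 0.85  # below this, a person must decide

def route(rec: Recommendation) -> str:
    """Decision support, not decision-making: the algorithm only ever
    proposes; low-confidence or high-impact actions go to a person."""
    high_impact = rec.action in {"flag_for_review", "deny_promotion"}
    if high_impact or rec.confidence < REVIEW_THRESHOLD:
        return f"queue for human review: {rec.action} ({rec.employee_id})"
    return f"auto-apply with audit log: {rec.action} ({rec.employee_id})"

print(route(Recommendation("E123", "schedule_training", 0.92)))
print(route(Recommendation("E456", "flag_for_review", 0.97)))
```

The design point is that human review is the default path for anything high-impact or low-confidence; automation has to earn its way past the gate, not the other way round.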
Second, those who receive the insights generated by these algorithms, or who oversee their decisions, need to be able to understand them. This starts with a basic grasp of the technology itself. Business leaders don’t have to be experts in AI, but if they are entrusting these tools with aspects of their decision-making, they need to understand how the tools do what they do.
Then, those using the tools need the skills, resources, and confidence to challenge what they are told, work out the implications, and choose the best course of action. They need to be supported to make the right decisions, quickly.
Third, decisions need to be communicated effectively, both up and down the chain — so that others understand the process that has been followed and have an opportunity to challenge it. What do you do if something goes wrong — and where does the buck stop? This is essential for building trust in both the systems and the people who are using them. And it’s even more effective when the organisation’s leaders are role models in accountability themselves — demonstrating that it’s OK to report bad news, so the organisation can learn from what’s not working and refine its approach.
“Building a culture of reporting and accountability throughout an organization means there will be a far greater chance to spot and halt bias in data, algorithms or systems before it is perpetuated and becomes harmful.”
~ Bernard Marr, Forbes
Algorithmic management tools could help solve some of today’s most pressing workforce challenges. But in choosing to deploy such powerful technology, leaders must also commit to bearing the great responsibility that comes with it. If they don’t invest in human capabilities and processes, these sophisticated tools will only ever be blunt instruments.
The views and opinions expressed in this article are those of the authors.