Why Algorithms In Your Inbox Are Bad News For Everybody

Artificial intelligence looks poised to take another bite out of the recruiter’s role. As Politico reported recently, Finnish company Digital Minds has launched a product that scans job applicants’ private emails on behalf of prospective employers, in order to assess organisational fit.

The founders of Digital Minds position themselves as digital psychologists offering “a revolutionary assessment tool” that’s not only more accurate and precise for employers, but also easier and faster for (fully consenting) candidates.

Their product, in Politico’s words, “skims a job applicant’s private conversations to compute an assessment of his or her psychological traits […] based on the so-called Big Five personality traits — openness to experience, conscientiousness, extraversion, agreeableness and neuroticism. [This] gives the potential employer an assessment of whether the person in question would be a good fit at his or her company.”
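
To make the mechanics concrete: Digital Minds has not published how its model works, but systems of this kind generally map features of a person's text onto trait scores. Below is a minimal, purely illustrative sketch of the simplest version of that idea, scoring the Big Five traits from word frequencies against a hand-picked lexicon. Every word list and weighting here is invented for illustration; real products use trained models, not toy dictionaries like this.

```python
from collections import Counter
import re

# Toy lexicon mapping trait names to indicative words.
# Purely illustrative: these lists are invented, not drawn
# from any real product or validated psychometric instrument.
TRAIT_LEXICON = {
    "openness": {"idea", "imagine", "novel", "curious", "art"},
    "conscientiousness": {"plan", "deadline", "organised", "detail", "finish"},
    "extraversion": {"party", "team", "talk", "meet", "everyone"},
    "agreeableness": {"thanks", "help", "appreciate", "together", "kind"},
    "neuroticism": {"worry", "stress", "afraid", "upset", "anxious"},
}

def score_big_five(text: str) -> dict:
    """Return a per-trait score: the share of words matching each trait's list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {trait: 0.0 for trait in TRAIT_LEXICON}
    counts = Counter(words)
    total = len(words)
    return {
        trait: sum(counts[w] for w in vocab) / total
        for trait, vocab in TRAIT_LEXICON.items()
    }

if __name__ == "__main__":
    email = "Thanks for the help. I plan to finish the detail work before the deadline."
    for trait, score in score_big_five(email).items():
        print(f"{trait}: {score:.3f}")
```

Even this trivial scorer shows the opacity problem in miniature: its output looks numerical and objective, yet every judgment is baked into an arbitrary word list the candidate will never see.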

It follows a 2017 study by the University of California, Berkeley, which found that language used in internal company emails could successfully predict an employee’s career trajectory. So is this simply another heel-click step along the road to the promised land of cheap, reliable, unbiased automated hiring?

Your answer to this question ultimately depends on how happy you are to blindly hand over decision-making power to opaque and unaccountable algorithms.

It’s hard to ignore the accelerating trend of taking decisions out of human hands and entrusting them to algorithms. Amazon and Netflix predict what we’d like to buy and watch next. We let Google’s algorithms determine the answers to our questions. In finance, 85% of foreign exchange trading is conducted by algorithms alone. In Italy, healthcare treatment is allocated via automated data analysis. The list goes on.

But the concern for many is that all of these decisions rest on criteria that are unclear at best and completely inscrutable at worst. And as Politico puts it:

“The opacity in exactly how [algorithms] make those decisions is what poses the biggest risk from these new technologies and approaches.”

Indeed, there is a growing bank of evidence that algorithmic decision-making is profoundly fallible.

Reuters reported last October that Amazon had to scrap a prototype AI recruiting tool after it was found to be discriminating against women. Likewise, a 2016 ProPublica investigation uncovered racial bias in an algorithm in use in US courtrooms.

There are enough reported instances of algorithms going rogue to give even the most ardent AI evangelist pause for thought. John Jersin, vice president of LinkedIn Talent Solutions, acknowledged to Reuters that:

“I certainly would not trust any AI system today to make a hiring decision on its own. The technology is just not ready yet.”

Regardless of whether or when the technology is ready, the question remains whether it should ever be trusted.

In the brilliantly titled comment piece “Algorithms Are Already Making Decisions For Humans, And It’s Getting Weird”, Dionysios Demetis of the University of Hull concludes that:

“As new boundaries are carved between humans and technology, we need to think carefully about where our extreme reliance on software is taking us. As human decisions are substituted by algorithmic ones, and we become tools whose lives are shaped by machines and their unintended consequences, we are setting ourselves up for technological domination. We need to decide, while we still can, what this means for us both as individuals and as a society.”

These decisions – about the scrutability, accountability and unintended consequences of automated systems like Digital Minds – are the ones we still have control over. Now is the time to exercise that control.

