The ethical challenges of big data personality profiling
A recent BBC News article (Personality tests: Are you average, self-centred, role-model or reserved?) came up for discussion in an online coaching forum.
These developments are simultaneously exciting and terrifying! Let’s take the Facebook example. The insight won’t come from the fact that you “like” kitten pictures or Liverpool Football Club – it will come from your interactions. What kinds of things do you comment on? What’s the tone of your comments? Humorous? Sarcastic? Aggressive? Supportive? How does that change based on who you’re interacting with? What time of day it is? How busy you are?
Now imagine that all of your work email correspondence is analysed, and a similar profile built up. Every micro-decision you make when you look at your inbox – read and delete, flag and ignore, reply immediately, forward on. The words you choose when you reply. The individuals who trigger procrastination every time you have to reply to them.
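To make that a little more concrete, here is a deliberately tiny, entirely hypothetical sketch of the kind of feature extraction this implies. Every field name, word list and aggregate below is an assumption chosen for illustration – it is nothing like any provider’s actual pipeline.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

# Hypothetical record of one inbox "micro-decision"; the field names are
# illustrative only, not any real provider's schema.
@dataclass
class EmailEvent:
    sender: str               # who the message came from
    action: str               # "reply", "delete", "flag", "ignore", "forward"
    hours_to_respond: float   # how long the message sat before you acted
    reply_text: str = ""      # body of the reply, if any

# Crude stand-in for tone analysis: count hedging words in replies.
HEDGING_WORDS = {"maybe", "perhaps", "sorry", "just", "possibly"}

def build_profile(events: list[EmailEvent]) -> dict:
    """Aggregate micro-decisions into a rough behavioural profile."""
    latency_by_sender = defaultdict(list)
    action_counts = defaultdict(int)
    hedges = words = 0
    for e in events:
        latency_by_sender[e.sender].append(e.hours_to_respond)
        action_counts[e.action] += 1
        tokens = e.reply_text.lower().split()
        words += len(tokens)
        hedges += sum(t.strip(".,!?") in HEDGING_WORDS for t in tokens)
    return {
        # the senders you consistently put off replying to
        "procrastination_triggers": sorted(
            latency_by_sender,
            key=lambda s: mean(latency_by_sender[s]),
            reverse=True,
        )[:3],
        "action_counts": dict(action_counts),
        # fraction of hedging words as a (very crude) tone signal
        "hedging_rate": hedges / words if words else 0.0,
    }
```

A handful of lines like these, run over years of correspondence, is all it takes to start labelling your habits back to you.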
If you use Gmail or LinkedIn, you’ll already have noticed these platforms starting to offer suggested responses to messages on your behalf. They’re pretty rudimentary at the moment, but if the machine learning is taking place across every message you’ve ever sent, calibrated against every message that has ever been sent on Gmail, those models will get very much better, very quickly. Just look at how far speech recognition and machine translation have come over the last few years – the science of inferring meaning and intent from text is a very hot area right now.
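The real systems are far more sophisticated than anything I could reproduce here, but a toy nearest-neighbour sketch makes the principle clear: learn from what you’ve sent before, and suggest it again when a similar message arrives. The corpus, the TF-IDF representation and the similarity trick below are all simplified assumptions, not Google’s or LinkedIn’s method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of (incoming message, the reply you actually sent).
# In a real system this would be millions of pairs; these are made up.
history = [
    ("Can we move the meeting to Friday?", "Friday works for me."),
    ("Here's the draft report for review.", "Thanks, I'll take a look today."),
    ("Are you free for a quick call?", "Sure, give me ten minutes."),
]

incoming = [msg for msg, _ in history]
vectoriser = TfidfVectorizer().fit(incoming)
incoming_vecs = vectoriser.transform(incoming)

def suggest_reply(new_message: str) -> str:
    """Return the reply previously sent to the most similar past message."""
    sims = cosine_similarity(vectoriser.transform([new_message]), incoming_vecs)
    return history[sims.argmax()][1]

print(suggest_reply("Could we push our meeting to Friday afternoon?"))
# -> "Friday works for me."
```

Swap the toy corpus for your entire sent folder and the lookup for a large neural model, and “rudimentary” stops being the right word.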
For those who are familiar with 2001: A Space Odyssey, I don’t think we’re too far away from “Look, Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill and think things over.” – imagine if that were to pop up when you were typing an email?
My biggest concern is the lack of ethical supervision. I work with several clients who are implementing these technologies – for fraud detection, to monitor call centres (“your call may be recorded for training and quality purposes…”). The data scientists building these systems are overwhelmingly white, heterosexual, elite-educated, affluent males, mainly in their twenties and early thirties. The models they produce are incredibly complex – it can be very hard to understand what they’re doing, and the unintended consequences can be serious:
https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
https://www.pcmag.com/article2/0,2817,2357429,00.asp
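Part of what supervision could look like in this world is routine auditing of what a model actually does to different groups of people. Here is a deliberately minimal, hypothetical sketch – the records, the group labels and the 0.8 threshold (borrowed from the “four-fifths rule” used in US employment law) are illustrative assumptions, not anything taken from a client system.

```python
from collections import defaultdict

# Hypothetical log of decisions made by a deployed model, with the
# demographic group each decision affected. Entirely made-up data.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

def approval_rates(records):
    """Approval rate per group, from a list of decision records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approved[r["group"]] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
worst, best = min(rates.values()), max(rates.values())
# Flag the model for human review if one group is approved far less often.
if best and worst / best < 0.8:
    print("Possible disparate impact:", rates)
```

A check like this is trivially easy to write and depressingly easy to never run.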
As coaches, we reflect. We take our decisions, actions and approaches to supervision. We feel anxious or guilty when our interventions don’t work. We have a conscience. We understand the power we have to change the behaviour of others, and we take that power very seriously. When we’re handing the tools for behavioural modification to almost-incomprehensible algorithms coded by extremely monocultural groups of people, we need to be very, very careful.
One final thought – 1 in 5 brides met their spouse online (https://www.prnewswire.com/news-releases/only-1-in-3-us-marriage-proposals-are-a-surprise-engagement-ring-spend-rises-according-to-the-knot-2017-jewelry–engagement-study-300552669.html). With Tinder, this proportion will only increase. So, the algorithms are already starting to breed the people…