AlgoRithm and blues: out of step with reality?

How many times have you logged on to your online banking, or to pay your utility bills or do some other form of adulting, only for a young, friendly, female (always female) avatar to pop up on the screen and introduce herself as a virtual customer assistant? Do you close the window and carry on, or do you, more optimistically, engage with her? As someone who does not warm to unsolicited offers of help from sales assistants in real life, my first reflex is to close the annoying pop-up. But when not given the option, I’ve typed my question into the chat.

“I’m best at answering simple questions. Please write your question using fewer words”

“…”

“You want to change your PIN. Answer YES or NO”

“No”

A few more wrong guesses.

“I will transfer you to my human colleague”

Usually, the human colleague resolves my question in a couple of minutes. I don’t remember a single occasion when an avatar gave me a useful response. However sophisticated algorithms may be, they suck at dealing with the complicated – which is what human lives are. The more important the issue is to me, the less inclined I feel to discuss it with a robot, let alone have the robot make the decision.
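To make that concrete, here is roughly what a bot like that does under the hood – a minimal sketch of a keyword-matching assistant, with every name and intent invented for illustration. It can only map my message onto a fixed list of categories; anything outside them falls through to a human.

# A toy keyword-matching "virtual assistant" (all names hypothetical).
INTENTS = {
    "change_pin": {"pin", "code"},
    "reset_password": {"password", "reset"},
    "card_lost": {"lost", "stolen", "card"},
}

def guess_intent(message: str) -> str:
    words = set(message.lower().split())
    # Pick the intent whose keywords overlap most with the message.
    best, best_overlap = None, 0
    for intent, keywords in INTENTS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    # Nothing matched: give up and escalate.
    return best or "transfer_to_human"

print(guess_intent("I want to change my PIN"))                       # change_pin
print(guess_intent("Why was my account flagged after my divorce?"))  # transfer_to_human

The bot isn’t understanding anything; it is counting word overlaps. Which is why “please write your question using fewer words” is genuinely its best advice.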

Going against standards

So, for this blog, I started thinking about the different life-impacting decisions made by a computer model: the outcome of my pre-university studies? Being shortlisted, or even targeted, for a job? My insurance premium? Whether I will receive medical treatment? Welfare? A visa? How the neighbourhood I live in will be policed? How I will be treated in the justice system? Even scratching the surface of the impact of datafication sent me into an internet vortex gasping for oxygen.

Algorithms are neither good nor evil. They make decisions about the future based on past events and behaviour. They don’t make moral judgements or think about context. They like categories, standardization and pre-programming. Nothing in my life has ever fitted into simple categories, which means that almost every time I meet someone new I need to decide how much seemingly normal data I want to share about myself, or be prepared for a long inquisition (“Wait, what…”, “You are…?”, “Where?”, “How?”, “Why?”, “Are you really?”, “No, but really, really?”): Finnish, a woman of colour, a Spanish name – or maybe Portuguese, as Instagram’s algorithms have concluded – speaks English with a British accent, a single mum, not particularly struggling, but without a mortgage or any assets worth speaking of, a decent job, an excellent credit score, a speeding ticket from two years ago, a couple of university degrees, good health but a couple of unexplained hospital spells.

It’s not hard to guess which of these personal data points count in my favour in which circumstances; which don’t; and which would be irrelevant when an algorithm decides on my fate. Not that I think humans are great at making good decisions, but I do worry about profiling, targeting and exclusion going unchallenged because ‘a computer said so’.
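To put that worry crudely into code: below is a deliberately naive scoring sketch, in which every feature and weight is invented, but where a profile like mine gets summed into one number and compared with one cut-off – no context, no “wait, let me explain”.

# A toy scoring model (features and weights entirely invented).
WEIGHTS = {
    "excellent_credit_score": +2.0,
    "decent_job": +1.0,
    "university_degree": +0.5,
    "no_assets": -1.0,
    "single_parent": -0.5,  # proxy features like this are where bias creeps in
    "speeding_ticket": -0.5,
    "unexplained_hospital_stay": -1.0,
}

def automated_decision(profile, threshold=1.0):
    score = sum(WEIGHTS.get(feature, 0.0) for feature in profile)
    # A whole life, reduced to a weighted sum and a threshold.
    return "approved" if score >= threshold else "declined"

me = {"excellent_credit_score", "decent_job", "university_degree",
      "no_assets", "single_parent", "speeding_ticket"}
print(automated_decision(me))  # prints "approved" – this time

Move the threshold, or tweak a weight nobody outside the company can see, and the same profile is declined. That is the ‘a computer said so’ problem in miniature.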

What’s in an address?

Something as ordinary as a street address is a relatively new phenomenon. Initially, people resisted the government collecting data on where they lived and turning it into a number. But soon enough, opting out stopped being an option for participating in society: decisions are made based on a street address, and today, for most of us, the question is which address we want to live at rather than whether to have one at all. Unless, that is, you have a reason not to trust the government – or another powerful entity or person who might benefit from knowing where you live. (A great podcast episode about this here.)

And that’s the essence of datafication, too. Who do we trust?

I will be addressing these questions over the next few weeks, sharing my reflections from relevant discussion events, news reports, academic articles by experts and, I guess, inevitably, whatever the algorithm pushes to my news feed.

I will be posting here on Tuesdays starting with thoughts on the datafied welfare state, followed by my take on a review of the digital lives of black women in Britain.

Comments

  1. Richaela

    Dulce, thanks for this very insightful post. Many people think that algorithms are fully positive, impartial and non-discriminatory, but they are also subject to bias, because these systems are informed by human thought patterns (i.e., often those of male engineers who are not culturally aware). There have been many investigative news reports in the U.S. on how algorithmic systems can reject applicants with ethnic first names or surnames (“African”, “Black” or “Middle Eastern” sounding names) from job application systems. Also, the discussion of “what’s in an address” is very relevant. I’m from NYC, and one’s zip code/postcode can determine loan approval or whether a child gains admission to a highly ranked public school. The future of datafied societies will be very complex.
