What if it’s not that?
Back in 2013, in a piece for Imperica called I'm not responsible, I suggested that:
"…computers [won’t] become intelligent and sentient. I contest that it’s simpler than that: we will simply forget how to do the most practical of things and rely, instead, on a dumb machine to do things for us."
And I wasn’t joking. Since then, much of our way of life has been outsourced. How far could this thought experiment go? What if we embrace the idea that every decision we need to make is based on what a computer tells us to do? While that might sound daft at first glance, consider the train driver who is told to continue on a green light. That’s a computer making the decision. Yet who’s culpable if the train crashes? What if we removed all trace of human error by taking away the responsibility to make that decision? Completely.
Many years ago, I spent time learning the art of making bread. While I never chose to take it up as a career back then, I got to do work experience at a laboratory that was – and probably still is – responsible for creating new products. One of those was Kingsmill bread. Recipes are carefully created, with ingredients measured to 0.01g. Scale that up, often with unskilled workers, and things go wrong.
In one such plant bakery, there were two jobs that required minimal skill. One was really banal: when this light comes on, hit this button. So mundane was the task that the operator was given a break every 30 minutes; even so, on the night shift people fell asleep and apple pies didn’t get any apple inside them.
We are seeing apps that tell parents when to put sunscreen on their children, the route they should drive, the photos they should take, the people they should meet, or marry, and a huge variety of other decisions that were once determined by humans. Our first instinct in any situation is to search on Google, ask Siri, or rely on a review on a website from someone we don’t know (and who may not actually exist). Bots are, some say, the new apps.
A new framework
How far could this go?
There appears to be no end in sight. User experience design is removing friction to the point that any element of decision-making has been killed; why think about whether you want that pizza delivered by an Uber (or a drone) when you can just press this button now and get on with whatever you were doing before hunger had the gall to interrupt you?
Maybe we’ll see Soylent-as-a-Service?
On a serious note, it’s not inconceivable that we’ll begin to see a defence in court come down to a decision that a jury can’t determine, beyond reasonable doubt, to have been premeditated or, to some extent, to have been made by a machine. How do you sentence a server to life? Can a datacentre be held to account for corporate manslaughter? To that end, jurors might be screened via an algorithm; heck, IBM’s Watson might be called to serve on a jury.
This is not in the realms of science fiction.
It’s clear we don’t have a legal system that’s set up to deal with the kind of technological leaps we have made in the past 30 years. Yet we have to decide, collectively as a global society, what to do next.
As we begin to turn off low-level activities from our brains and allow a rack in a datacentre to decide things for us, where do we draw the line?
Is this the true singularity?
Let’s just say that we outsource every decision. Every. Single. One.
Get up: breakfast is chosen for you based on your nutrient intake so far that week. You’re routed to work via your phone, which could see you change trains five times, or be woken earlier when there is congestion. Your TV shows you only the most popular shows, based on what your Facebook friends are watching or talking about. Or on how well you did at work.
There will be no way to opt out. This will be life.
And if you break a law by trying to get back to some level of self-determination, even a small one, what happens then? No need for a court system; you’ll have persistent cookies attached to you that prevent access to some email, or the ability to log on to your favourite porn site, or the freedom to decide for yourself when the heating comes on because the temperature has dropped. Your door won’t unlock; your groceries are delivered, but they contain only what the computer deems suitable, given your criminal circumstances.
Ever tried to contact Google support? Good luck with proving your innocence. Right?
Oh, silly me. I forgot: culture.
Much has been made of this recently – it’s what makes us human; it separates us from our Neanderthal cousins and, so the argument goes, certain religious factions. But even that is getting homogenised.
For example, right now I can pick up a guitar and create a song. I can write lyrics. Or stories. I can draw. And yet, much of the culture we see each day is created by a computer, or at the very least determined by one. The future will see each note I sing auto-tuned, the lyrics I write scrutinised for how well they test against the hit singles of the past; my stories shared, remixed and reposted until they are no longer mine, or any individual’s work – then translated by a machine and changed again.
The advertising I see – once a mirror for popular culture – now reflects the decision of a computer as programmatic buying takes over. Each year, the same idea is regurgitated and shared again as if new. And we all get excited about it.
This is happening today. Right now. Mostly because we have this weird idea that the web contains the best of culture, so we’re glued to it. It’s where your average advertising creative goes for inspiration. We’re seeing memes used to sell confectionery, Instagram-style filters on travel programmes, listicles for game shows.
Add to this the stark fact that, as humans, we’re fairly terrible when it comes to verification – and when we do look for it, we only care about the first page of a Google search. Or we ask a question via our favoured social network, or maybe on Quora, so that the answer we like the most gets upvoted.
Daily Mail comments are a good indicator of that strategy.
It can’t all be bad, can it?
Sure, some good can come of this. Let’s flip it around, then.
If we remove human intervention and outsource not just our responsibility but also the decision-making that comes along with that, what do we get? Everyone has access to the same level of education, for free; there is no need for physical money, or cards (which results in a reduction in low-level crimes); there are no more road traffic accidents. Imagine it: arguments will be settled with the press of a button. You shrug and move on. The list is endless.
Life would be fairly bland and asinine, though. For the majority.
Modifications will be available, to those who can afford them. For example, you don’t want to ever see someone again? Digitally erase them and even if they’re standing next to you they’re invisible. They simply don’t show up on your radar. Access to an archive of human experiences from the early 21st century will keep us entertained. Same as it ever was.
Whatever the outcome of giving up our responsibility – and, with it, the thing that, I would suggest, makes us human – it’s going to happen. As I said before, computers don’t get more intelligent; we humans just get a little more stupid with each passing year.
Welcome to the Singularity.
Now, go share this cat video.