God is coming.
I don’t mean it in any fanatical, religious sense, but in the true meaning of the word as defined in the Oxford Dictionary: “a superhuman being having power over nature and human fortunes”.
Until recently, most decisions, whether financial, ethical or legal, were made by humans. It had been that way for centuries, partly because it worked and partly because there was no alternative.
However, things are changing and the days of machines making decisions are already upon us.
Don’t believe me? When was the last time you applied for a bank loan and actually spoke to someone?
When did a car insurance quote last involve anything outside of a keyboard and mouse?
The reality is that in many traditional walks of life, computers – and by computers I mean algorithms – have replaced people in the decision-making process.
Much of what I am alluding to here has really come into play over the last decade but ten years is a long time in the lifespan of technology. Society has arguably changed more in that time than it did in the previous fifty.
For those of you old enough to remember it, cast your mind back to the first home computer you ever owned. Was it a PC, a Mac, maybe a BBC Micro, an Amiga 500, a Commodore 64 or a ZX Spectrum? Then think about how that branch of technology evolved over the course of the 80s, 90s and 00s. It changed, but in a “stepped” fashion that never felt rushed or disposable. You still got the sense that what you were investing in was going to be around for a while.
Then Steve Jobs got up on stage in 2007 and everything changed.
It took 30 years for computers to make their way into the majority of homes around the world, yet the smartphone managed it in a third of the time and with arguably much greater success. In the UK alone, over 90% of people now have at least one smartphone. In fact, the number of smartphones in the UK sits somewhere close to 90 million, more than 20 million above the entire population, so a good proportion of us own at least two devices.
But the device is only the receptacle. The real marvel is the AI inside it, a maturing intelligence that is making more and more of our decisions for us.
An often-used example involves self-driving cars.
Imagine a scenario where a pregnant woman enters an autonomous vehicle on her way to the hospital for a routine check-up. Midway through the journey two bickering children break into a scuffle and roll out onto the road. The AI in the vehicle has to make an immediate decision, taking all of the circumstances into account.
Swerving to the left means hitting a vehicle on the opposite side of the road, in all likelihood killing all three of that vehicle’s passengers. Swerving to the right means missing the children, but would send the car off the bridge, killing the pregnant woman and her unborn child.
Logic might dictate that the “best” outcome would be to send the car off the bridge, killing the woman and her baby: the lesser of three evils.
But what if the woman was carrying twins?
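The “lesser of three evils” arithmetic above can be sketched as a toy cost-minimising function. To be clear, this is a deliberately crude illustration, not how any real vehicle reasons: the casualty counts come from the hypothetical scenario, and the 0.9 weight placed on an unborn life is an arbitrary assumption invented here purely to reproduce the essay’s dilemma.

```python
# Toy sketch of the "lesser of three evils" choice from the scenario above.
# All numbers are assumptions taken from (or invented for) the hypothetical.

def expected_cost(unborn: int, unborn_weight: float = 0.9) -> dict:
    """Expected deaths for each manoeuvre; unborn_weight is an arbitrary choice."""
    return {
        "swerve_left": 3.0,                         # oncoming vehicle: three passengers
        "straight_on": 2.0,                         # the two children in the road
        "swerve_right": 1.0 + unborn_weight * unborn,  # the woman plus her unborn child(ren)
    }

def choose(unborn: int) -> str:
    """Pick the manoeuvre with the lowest expected cost."""
    costs = expected_cost(unborn)
    return min(costs, key=costs.get)

print(choose(unborn=1))  # swerve_right: off the bridge is the "lesser evil"
print(choose(unborn=2))  # straight_on: twins tip the arithmetic the other way
```

The point of the sketch is how fragile it is: a single hidden fact (twins) or a single arbitrary weight flips the machine’s “logical” answer entirely.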
Of course, this scenario is hypothetical, but the reality of autonomous vehicles is probably only a decade away, and when it arrives, with nothing but AI behind the wheel (albeit AI more intelligent and responsive than any human will ever be), it will decide every step of that journey from beginning to end.
This raises the question of what kind of robot ethics must be settled before self-driving programs launch. It should not be forgotten that algorithms tend to reflect the “God” who created them; if there is any element of prejudice in that programming, it could find its way into the decision-making process.
Does the government decide how cars will make the choices they will be required to make? Or is it the manufacturer, or the consumer? And could a sudden shift in the political landscape lead to a reprogramming of the entire system if, for instance, an incoming party decided to raise the driving age to 21 or reduce it to 16?
The question is not only restricted to autonomous vehicles.
What about AI controlled weapons? Or what about a future with AI controlled doctors and nurses? Could we ever see a tomorrow where keeping a patient alive is not financially viable and logic favours death over life?
So many questions. So many “unknowns”.
One day man will perfect AI, and on that day he will move from master to subordinate, finally discovering the God that science had told him for so long did not exist.
Which leads us back to the Oxford Dictionary and its definition of God: “a superhuman being having power over nature and human fortunes”.
I wonder whether, by that point, ethics will even have a place at the table.