OPINION: Checks and balances are vital if artificial intelligence is going to work for us, rather than the other way round.
Fresh from my utterly failed attempt to wrestle back control of my email account, I’m grappling with a new tech threat in these first few weeks of the new year – the robotics revolution.
Put about by cultists, I mean enthusiasts, called Singularitarians (one of whom was manning my ISP’s tech help desk), the idea is that artificial intelligence will develop capabilities beyond what we mere human beings are capable of and use that power to take over the world. Naturally, I’m now worried.
I’m not sure whether the singularity is more of a threat to humankind than global warming, but it didn’t do anything to lighten my mood over the festive season – ‘Merry Christmas, here’s something shiny, bright and new to get really worried about’.
The problem with developing really top-notch AI is that it might just end up controlling us.
Imagine trying to have conversations with people who are controlled by little machines. Sound familiar?
It’s hard to accept the view that robotic technology or automation is likely to replace millions of jobs in the future, including mine, triggering massive social upheaval and an unemployable class.
As frightening as that thought is, I’m more concerned about angry robots taking control of my world; that is, AI that can ‘think’ and seemingly works to its own agenda.
My only and fervent hope is that the futurists have got it wrong. I mean, they got it wrong with The Jetsons, so surely there’s a possibility AI will be put to work for good, not evil.
There are loads of tech chores that any reasonable person would happily hand over to a robot, angry or otherwise.
Take my inbox and its cheery collection of 5,346 unread emails.
I’ve already tried to clear it out, but found the task so exhausting and utterly demoralising that I gave up.
Cloud storage means there’s no pressure on me to clear my inbox, so it now resembles a chaotic, digital filing cabinet (and I’m too cautious to delete anything lest I need it later).
Rather than further expose my troubling work (mal)practices, let me turn to the next tech challenge I’d happily hand over to those robots, malevolent or otherwise.
It’s on the same track: I would kill for an automated secretary to sort out my digital correspondence – a robot to read it, filter it and present it in priority order.
It would be reasonably simple to create an algorithm to sort the rubbish from the genuinely important messages.
Messages from people who have never spoken to me before, who can’t spell my name, who think I’m a man or have a genuine interest in party plan products could all be ditched.
The same filter could be applied to voice mail.
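Just to show how low I’m setting the bar, something along the lines of this toy, rule-based sketch would do me nicely. It’s written in Python, and every rule, name and address in it is a stand-in of my own invention rather than anything real:

```python
# A toy, rule-based inbox filter. Every rule, name and address here is a
# stand-in of my own invention, illustrative only.
from dataclasses import dataclass


@dataclass
class Email:
    sender: str
    subject: str
    body: str
    known_contact: bool  # has this person ever actually spoken to me?


# Hypothetical red flags, lifted straight from the wish list above.
NAME_MANGLINGS = {"misspelling-of-my-name"}      # placeholder for the usual manglings
ASSUMES_IM_A_MAN = {"dear sir", "mr "}           # greetings that get my gender wrong
JUNK_TOPICS = {"party plan", "exclusive offer"}  # things I have no genuine interest in


def priority(email: Email) -> int:
    """Return 0 for mail to ditch, 1 for mail worth reading."""
    text = f"{email.subject} {email.body}".lower()
    if not email.known_contact:
        return 0  # never spoken to me before? Ditched.
    if any(flag in text for flag in NAME_MANGLINGS | ASSUMES_IM_A_MAN | JUNK_TOPICS):
        return 0
    return 1


inbox = [
    Email("stranger@example.com", "Party plan products!", "Dear Sir, act now...", False),
    Email("editor@example.com", "Copy deadline", "Where is your column?", True),
]

# Read it, filter it and present it in priority order.
for mail in sorted(inbox, key=priority, reverse=True):
    print(priority(mail), "-", mail.subject)
```

A proper version would learn my personal definition of rubbish rather than have it hard-coded, but the principle is the same: read it, filter it, and only then bother me.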
There are good security arguments for such a system, especially as we careen towards our blockchain future.
The only conversation I’ve had that was more frightening than my recent chat with the Singularitarian involved a tale of a software virus and an online ransom.
The terrible chain of events was set off by the simple act of a staff member opening a random email and downloading an attachment.
A bad decision, but easily done now that ‘big data’ has transformed our inboxes into giant junk mail repositories.
This unleashed a virus that locked the business in question out of its project data as well as its accounts and payroll – absolutely every system was disabled.
This left all those in the office, let’s call them the un-artificial intelligence, unable to do their work.
The instigators of the virus offered to disable it, but only if the company paid a bitcoin ransom equivalent to about $5,000.
The IT team went to work, but after several days it ran up the white flag, admitting it couldn’t find a solution and that it might be best for the business to pay the ransom.
And that is what this small business did. The software was unlocked, security was upgraded and staff were briefed on the importance of not opening unsolicited emails.
I’m not sure this is a problem super-intelligence could solve, or a scheme it could invent, all on its own, but like the very worst ideas it’s only possible due to the powerful combination of technology and human deviance.
Ironically, this gives me hope that there’s still space in the future for data-limited, inbox-challenged Luddites like me and my kin, even if it’s only to teach super smart robots how to play nice.