Impact Lab


November 1st, 2016 at 6:41 am

AI: A five-point plan to stop the Terminators taking over

Boston Dynamics robots

Tech behemoths Google, Facebook, Microsoft, IBM and Amazon announced this week that they are teaming up to develop new standards for Artificial Intelligence (AI). It’s a much-needed move. Soon AI will change everything from warfare to our bodies. But we don’t want to become slaves to the robots, so how do we stop the Terminators?

1. Autonomous weapons

A great deal of the money going into AI comes from defence companies. We need a complete ban on autonomous weapons, at least until we know for sure that they can apply the rules of war as well as humans can.


Current robots are notoriously bad at telling an apple from a tomato, so they would find it hard to discriminate between combatants and civilians too.

The BAE Taranis, an autonomous unmanned combat jet

2. Education for automation

If 50 per cent of current jobs are going to be replaced by robots, we will need to learn new skills throughout our lives to avoid becoming redundant. Training, retraining, sabbaticals, second, third, fourth careers – all that has to become the norm.

And while learning, we will need to become “STEM-savvy” – ie up to speed in science, tech, engineering and maths. Algorithms – the rules that machines follow – will control large parts of our lives, so we must learn how to control them.

3. AI transparency

AI systems increasingly make decisions affecting our lives – we need to know on what basis. In America, some parole decisions are taken on the basis of recommendations made by algorithms, and not even judges are allowed to know the algorithm's reasoning, because it is the property of a private company. That is unacceptable.
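Transparency need not be hard in principle. As a minimal sketch – with invented factor names and weights, not those of any real parole system – a scoring rule can report exactly which factors produced a decision, so that a judge could inspect it:

```python
# Hypothetical sketch: a risk score whose reasoning is fully inspectable.
# Factor names and weights are illustrative, not from any real system.

FACTORS = {
    "prior_offences": 2.0,
    "age_under_25": 1.0,
    "completed_rehab_program": -1.5,
}

def risk_score(case):
    """Return a score plus the per-factor contributions that produced it."""
    contributions = {name: weight * case.get(name, 0)
                     for name, weight in FACTORS.items()}
    return sum(contributions.values()), contributions

score, reasons = risk_score({"prior_offences": 3, "completed_rehab_program": 1})
print(score)    # 4.5
print(reasons)  # each factor's share of the score, open to scrutiny
```

The point is not this particular rule, but that the reasoning is a first-class output rather than a trade secret.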

4. The Big Red Off Button

Lots of work is going into the apparently simple, but in fact complex, field of turning things off. As soon as you give any machine a goal, even one as innocent as making daisy chains, you give it the subsidiary goal of staying alive, because it cannot make daisy chains if it has been turned off.

So any reasonably intelligent system will seek a way to disable the off button. We must outwit it. How? That is one of the hardest technical challenges AI researchers face, and there is no answer yet. If Google et al must solve any single problem, it is this.
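A toy calculation, with made-up numbers, shows why the incentive arises: if being switched off yields no further reward, an agent that maximises expected reward prefers any action that lowers its chance of shutdown.

```python
# Toy model (illustrative numbers only): an agent earns 1 unit of reward
# per daisy chain per step over a 10-step horizon, but only while it is on.

def expected_reward(steps, p_shutdown_per_step, reward_per_step=1.0):
    """Expected total reward when the agent may be shut off at each step."""
    total, p_alive = 0.0, 1.0
    for _ in range(steps):
        p_alive *= (1.0 - p_shutdown_per_step)   # chance it is still running
        total += p_alive * reward_per_step
    return total

obedient  = expected_reward(10, p_shutdown_per_step=0.2)  # leaves button alone
resistant = expected_reward(10, p_shutdown_per_step=0.0)  # disables the button

assert resistant > obedient  # maximising reward favours disabling the switch
```

Nothing in the goal mentions self-preservation; it emerges purely from the arithmetic of expected reward, which is what makes the off-button problem so stubborn.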

5. Get the values right

If we can't turn them off, how do we get machines to share our values? This too is complicated. Effectively, we would have to program into each robot the whole of moral philosophy. But we haven't worked out every detail of that philosophy for ourselves yet.

So we will need experts to define explicitly, in each setting (whether for medical robots in hospitals, or tax robots in a future HMRC), precisely what the moral code is in that environment.
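One way to make such a setting-specific code explicit – sketched here with invented rule names and settings – is to write it as machine-checkable constraints that the robot consults before acting, so the code itself stays auditable by the human experts who defined it:

```python
# Hypothetical sketch: a per-setting moral code as explicit, auditable rules.
# Settings, rule names, and actions are invented for illustration.

RULES = {
    "hospital": [
        ("never_withhold_pain_relief", lambda a: a != "withhold_pain_relief"),
        ("require_patient_consent",    lambda a: a != "treat_without_consent"),
    ],
    "tax_office": [
        ("no_retroactive_penalties",   lambda a: a != "backdate_penalty"),
    ],
}

def permitted(setting, action):
    """Return (allowed, violated_rules) for an action in a given setting."""
    violated = [name for name, ok in RULES.get(setting, []) if not ok(action)]
    return (not violated, violated)

print(permitted("hospital", "administer_analgesic"))   # allowed, no violations
print(permitted("hospital", "treat_without_consent"))  # blocked, names the rule
```

Crucially, when an action is refused the robot can name the violated rule, which keeps the experts – not the machine – as the authors of the moral code.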

Image credit: Boston Dynamics
Article via: The Telegraph

