Why we shouldn't fear A.I.

Jamie Stevenson
London 51° 30' 23.112" N, 0° 7' 37.956" W

Fears aren't always rational. People are terrified of any number of things, from heights to feet to spiders (okay, that one's rational -- they are essentially tiny monsters). It's no surprise, then, that A.I. is earning its own set of doomsayers, with Elon Musk recently tweeting that we should be more afraid of A.I. than of nuclear conflict. But should we really be afraid of the big bad robots?

Fears around A.I. have existed for as long as the concept itself. As a species, humans are extremely fearful of anything that could take our place at the top of the food chain (watch any monster film, from Godzilla to the recent Life, for confirmation), and artificial intelligence is, according to some, a very real threat to our seat at the head of the table.

Musk described A.I. as "a fundamental existential risk for human civilization", which sounds a lot like something uttered by an unfortunate Skynet employee shortly before being fatally proven right. His argument stems from our own arrogance: he believes we are too blinded by our sense of self-worth to realise that something else may be able to do our job, and do it better.

He's not alone, either. Stephen Hawking, famous genius and occasional TV star, has also warned of the frightening potential of A.I., and as long ago as 2014 told the BBC that "the development of full artificial intelligence could spell the end of the human race."

Professor Hawking's argument is that A.I. will be able to evolve much faster than humans, overtaking us and redesigning itself into something far more sophisticated -- and capable of overthrowing us. But do we really need to consider a Terminator-type existence just yet?

The bright side

Given Elon Musk and Stephen Hawking's high standing in their respective fields, it would be easy to throw up our hands and embrace our fate as the beetles underneath the A.I. boot. However, not everyone shares such a pessimistic view of the future of artificial intelligence.

Mark Zuckerberg was one of the first high-profile names to speak out against Musk's prediction, resulting in a vaguely amusing Twitter spat between the two, which led to Musk describing Zuckerberg's understanding of the topic as 'limited.'

Zuckerberg hasn't been the only person to question Musk's assertions regarding A.I., with Oren Etzioni, a computer science professor at the University of Washington, describing his comments as a distraction and stating that the "world needs A.I. for its benefits."

Indeed, a survey of 300 A.I. researchers conducted by Toby Walsh, professor of artificial intelligence at the University of New South Wales, found that the majority believe it will take around 50 years before A.I. becomes as smart as humans. That hopefully gives us enough time to formulate a plan, so the threat to humanity is extinguished before it's even posed.

Perhaps the most worrying aspect of all this is how people may begin to fear, rather than understand, artificial intelligence. There are genuine concerns about A.I. and its impact on our future -- most pressingly, its effects on employment -- but these can hopefully be overcome with research and understanding.

HERE can attest to the rich potential that A.I. represents: in the future, it could change lives and make the world a better place. Even Professor Hawking has (very slightly) softened his view, stating that A.I. could be "either the best, or the worst thing, ever to happen to humanity."

It is our responsibility to ensure that it proves to be the former, and that A.I. doesn't become a campfire story told in hushed whispers, with mutterings about murderous robots rattling around the public consciousness.

By trying to understand A.I., instead of fearing it, we are more likely to see the best, rather than the worst, of one of technology's most exciting new possibilities.

Topics: Artificial Intelligence, Features
