Experts warn of serious international security threat posed by abuse of Artificial Intelligence

Newstalk

23.14 21 Feb 2018


Researchers are warning that the misuse of drones and artificial intelligence could pose a serious threat to international security in the coming years.

The 'Malicious Use of Artificial Intelligence' report, published today, is calling for new laws and regulations to defend against the risk.

A total of 26 experts from international institutions like Oxford, Cambridge and Yale have come together for the report.


Labelling AI a “game changer,” the paper predicts a rapid growth in cybercrime and drone attacks as well as an unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media.

The researchers warn that, unless global action is taken, terrorists, criminals and rogue states could be making widespread use of the technology within a couple of years.

Malicious use

Irishman Dr Seán Ó hÉigeartaigh is the executive director of the Cambridge Centre for the Study of Existential Risk and one of the authors of the report.

He told Newstalk that a lot of the potential dangers are already beginning to take shape:

“We are seeing in the research literature, people generating very realistic voice files that sound exactly like a person saying something that they didn’t say [as well as] very realistic images and videos.

“We know just from reading the news that, for example, Russia has employed a whole team of people to try to hack into various aspects of US political processes.

“These are not things that are in a dystopian Sci-Fi future, they are things that are happening right now.”

Sophia, a life-like humanoid robot, is pictured at the UN headquarters in New York, 11-11-2017. Image: Li Muzi/Xinhua News Agency/PA Images

International security

The report warns that computers could be used to mimic people's voices and hack into personal data; fleets of autonomous vehicles could be hacked and made to crash; and drones could use facial recognition to locate and target victims.

The technology could be misused to target individuals, organisations and states – and could be used by authoritarian regimes against their own citizens – with the possibility for new levels of surveillance, profiling and repression.

Automated drones

Dr Ó hÉigeartaigh said the report divides the threat into three categories: “digital security, physical security and political security.”

“In physical security, drones and robotics are covered in detail,” he said. “Both in terms of somebody being able to use an automated drone to, for example, carry out an attack or even an attempted assassination – but also what might be possible a few years down the line.

“There are things that right now no human could do – such as pilot a swarm of drones.

“But the way the technology is going, it will be possible within several years for an automated swarm of drones to operate in concert with each other, which could open up a new type of attack.”

The Toyota prototype Concept-i, a self-driving car with artificial intelligence, 04-01-2017. Image: Andrej Sokolow/DPA/PA Images

International response

Dr Ó hÉigeartaigh said the report is a “call-to-action for governments, institutions and individuals across the globe.”

“The solutions to these problems are going to come from a range of different directions,” he said.

“Some of it is hardware and software; how we design systems; thinking about all the different ways that a system could be attacked and figuring out how to defend against it.

“Some of it is going to be legal; some of it is going to be regulatory.

“But one of the things that we are really trying to drive home with this report is that you need to have collaboration between different sources of expertise if we are to stay on top of this.”

The document calls for researchers and lawmakers around the world to work together to understand the potential of the technology – and prepare for its misuse. 

The 26 authors of the report are working at universities and tech organisations around the world – including the non-profit research firm OpenAI, the US national security think-tank the Center for a New American Security and the Open Philanthropy Project.

