Experts warn of ‘clear and present danger’ of AI terror attacks

Research into the future of artificial intelligence by a host of academic and industry experts warns of a ‘clear and present danger’ that terrorists could harness the technology to cause driverless car crashes and launch cyber attacks. The authors urge that the law be updated to tackle such threats.

A report co-written by 26 authors from 14 institutions, including the University of Oxford's Future of Humanity Institute, Cambridge's Centre for the Study of Existential Risk and the Elon Musk-backed OpenAI, concludes that with artificial intelligence improving at an ‘unprecedented rate,’ new cyber crimes and threats are emerging without warning. Unless preventative measures are taken, the report states, attacks are expected to ‘significantly increase’ in the next five years.

The authors hope that their findings will convince governments and technology companies of the pressing need to collaborate on updating regulations, stating that there is good reason for ‘growing concern’ surrounding the use of AI and ‘its capability to do harm.’

SpaceX and Tesla CEO Elon Musk has said: ‘There should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.’

Superhuman AI ‘will alter the landscape of risk’

The exponential advancement of artificial intelligence beyond human capacity presents a host of challenges for law and policy makers.

Malicious applications of AI are evolving in step with the technology, leaving the international legal community struggling to regulate its use and to protect members of the public from emerging vulnerabilities.

Miles Brundage of the Future of Humanity Institute stated that AI’s potential for superhuman performance creates unprecedented levels of risk. ‘AI will alter the landscape of risk for citizens, organisations and states,’ he said.

‘It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour,’ Brundage continued.

Call-to-action for governments

The research highlights the damaging effects that malicious exploitation of AI technology could have on digital, physical, and political spheres.

Deepfakes, in which videos are manipulated to superimpose one person’s face onto another’s body and produce convincing fake footage, could be exploited for political propaganda, while criminals and terrorists could harness surveillance technologies to further their intelligence-gathering. Autonomous weapons, drone attacks and sophisticated hacking strategies are all cited as areas of concern.

The researchers conclude that technology industry leaders must liaise with governments and law-makers to ensure that the latest technological advancements are appropriately provided for in national law and security measures, and that ethical concerns are properly considered in the release and regulation of new technologies.

Dr Seán Ó hÉigeartaigh, executive director of Cambridge University's Centre for the Study of Existential Risk, said: ‘There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.’
