Dive Brief:
- Potential threats from the malicious use of artificial intelligence demand the attention of policymakers, researchers, engineers and end users, according to a new study published this month by 26 technical and public policy researchers from 14 institutions, including Cambridge, Oxford and Yale universities, along with privacy and military experts.
- "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" predicts the rapid growth of cybercrime and the misuse of drones in the coming decade, as well as the unprecedented increase in the use of 'bots' to influence, among other things, elections, news agenda and social media. The 101-page report identifies the security domains of digital, physical and political security as most relevant to the malicious use of AI.
- The study makes four top-level recommendations: policymakers should collaborate closely with technical researchers; researchers and engineers should treat the potential for malicious use of their work as a priority; best practices should be identified in relevant research areas; and the range of stakeholders and domain experts engaged in discussing these challenges should be actively expanded.
Dive Insight:
Retail information technology executives are all too aware that security is priority No. 1. They also know the challenges are getting worse, but how much worse? This new study paints a chilling picture of the future role AI might play in malicious activities.
Beyond attention-getting scenarios, like causing driverless vehicles to crash or turning commercial drones into weapons, there could be vast opportunities for large-scale, finely targeted and highly efficient attacks.
"We all agree there are a lot of positive applications of AI," Miles Brundage, a research fellow at Oxford's Future of Humanity Institute, told U.S. News and World Report. He is one of the report’s primary authors. "There was a gap in the literature around the issue of malicious use."
The idea of remotely commandeered drones has "really captured the imagination," Paul Scharre, another author of the report, told The New York Times. "But what is harder to anticipate — and wrap our heads around — is all the less tangible ways that AI is being integrated into our lives." Scharre has helped set policy involving autonomous systems and emerging weapons technologies at the Defense Department and is now a senior fellow at the Center for a New American Security.
Many people are focused on Russia’s hacking of the American electoral process – and in retail, on the numerous breaches of consumer data. But with advances in AI, all this may look like child’s play in five to 10 years.
For example, the report's authors expect to see novel cyberattacks such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails built from information scraped from social media, or attacks that exploit the vulnerabilities of AI systems themselves, for instance through adversarial examples and data poisoning.
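To make the adversarial-example idea concrete, here is a minimal, purely illustrative sketch, not taken from the report: a toy linear "classifier" whose score can be shifted dramatically by nudging every input feature a tiny amount along the sign of the model's weights. The model, dimensions and perturbation size are all invented for illustration.

```python
# Illustrative sketch of the idea behind adversarial examples (hypothetical toy
# model, not code from the report). A tiny per-feature perturbation, chosen
# along the sign of the model's weights, shifts a linear score dramatically.
import numpy as np

rng = np.random.default_rng(0)

d = 784                    # e.g., the size of a small flattened image
w = rng.normal(size=d)     # weights of a toy linear "classifier"

def score(x):
    """Higher score means the toy model is more confident the input is benign."""
    return float(w @ x)

x = rng.normal(size=d)     # an arbitrary input
eps = 0.05                 # a very small change per feature

# Push each feature a tiny step in the direction that raises the score most;
# for a linear model, the gradient of the score with respect to the input is w.
x_adv = x + eps * np.sign(w)

print("score before perturbation:", round(score(x), 2))
print("score after perturbation: ", round(score(x_adv), 2))
print("largest change to any single feature:", eps)
# The score jumps by roughly eps * sum(|w|), even though no feature moved by
# more than eps -- which is why high-dimensional models can be fooled by tiny,
# hard-to-notice changes to their inputs.
```

Data poisoning is the complementary weakness the report names: rather than perturbing inputs at decision time, an attacker corrupts the training data so the model learns the wrong boundary in the first place.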
One future threat the study points out is the use of AI to launch and coordinate cyberattacks at a scale and level of sophistication that is currently infeasible. Systems trained with machine learning will be capable of identifying the weakest targets, automatically evading detection and adapting to efforts to shore up defenses in order to sustain an attack, IT Pro summarized.
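Ranking the "weakest targets" is, mechanically, the same score-and-sort workflow defenders already use to prioritize patching. A hypothetical sketch on entirely synthetic data (the feature names, model and numbers are invented for illustration and come from neither the report nor IT Pro):

```python
# Hypothetical illustration on synthetic data: score a set of hosts by how
# susceptible a simple learned model thinks they are, then rank them.
# Defenders apply the same pattern when prioritizing vulnerability remediation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented features per host: [days_since_patch, open_ports, failed_logins_per_day]
X_train = rng.normal(loc=[30.0, 8.0, 5.0], scale=[20.0, 4.0, 3.0], size=(500, 3))
# Invented labels: in this made-up history, longer-unpatched, more exposed hosts
# were compromised more often.
risk = 0.02 * X_train[:, 0] + 0.1 * X_train[:, 1] + 0.05 * X_train[:, 2]
y_train = (risk + rng.normal(scale=0.5, size=500) > 1.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score three synthetic hosts and rank them by predicted susceptibility.
hosts = np.array([[90.0, 14.0, 2.0], [5.0, 3.0, 1.0], [45.0, 9.0, 12.0]])
scores = model.predict_proba(hosts)[:, 1]
for rank, i in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"#{rank}: host {i} predicted susceptibility {scores[i]:.2f}")
```

The report's concern is that the same ranking, coupled with automated exploitation and evasion, could be run continuously and at machine speed.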
Among the organizations participating in the study were Oxford University’s Future of Humanity Institute; Cambridge University’s Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; and the Center for a New American Security, a U.S.-based bipartisan national security think-tank.