Scientists warn that hackers could weaponize artificial intelligence (AI) to conceal and accelerate cyberattacks, potentially escalating their damage.
IBM researchers last month demonstrated "DeepLocker," AI-powered malware designed to hide its damaging payload until it reaches a specific victim, identifying its target through indicators such as facial recognition, voice recognition, and geolocation. IBM's Marc Stoecklin said that with DeepLocker, "AI becomes the decision maker to determine when to unlock the malicious behavior."
Meanwhile, the Stevens Institute of Technology's Giuseppe Ateniese has investigated the use of generative adversarial networks (GANs), which pit two neural networks against each other to generate convincing output that can defeat safeguards like passwords; he designed a GAN that fed leaked passwords found online into an AI model to analyze their patterns and generate likely password guesses faster than brute-force attacks.
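The core idea, learning patterns from leaked passwords so that guesses concentrate on likely candidates rather than the full brute-force space, can be illustrated with a toy sketch. This is not Ateniese's GAN (a real system trains a generator against a discriminator); it is a much simpler character-transition frequency model, and the leaked-password list here is invented for illustration. The principle is the same: breached data narrows the search space.

```python
import random
from collections import defaultdict

# Hypothetical leaked-password sample (illustrative only).
LEAKED = ["password1", "letmein", "dragon123", "sunshine", "password123"]

def learn_transitions(passwords):
    """Count which character follows which, with '^' as a start marker
    and '$' as an end marker."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = "^"
        for ch in pw:
            counts[prev][ch] += 1
            prev = ch
        counts[prev]["$"] += 1
    return counts

def generate(counts, rng, max_len=16):
    """Sample one candidate password by walking the transition table,
    so guesses follow the statistical patterns of the leaked data."""
    out, prev = [], "^"
    for _ in range(max_len):
        choices = counts[prev]
        chars = list(choices)
        weights = [choices[c] for c in chars]
        ch = rng.choices(chars, weights=weights)[0]
        if ch == "$":
            break
        out.append(ch)
        prev = ch
    return "".join(out)

rng = random.Random(0)
model = learn_transitions(LEAKED)
candidates = [generate(model, rng) for _ in range(5)]
print(candidates)
```

Because every generated guess follows character patterns actually observed in the leak, far fewer candidates are wasted on strings no human would choose, which is what makes pattern-learning attacks faster than exhaustive brute force.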
Said Ateniese, "We need to study how AI can be used in attacks, or we won't be ready for them."
From The Wall Street Journal
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA