Alexa, Have I Been Hacked?

15 Sep 2017

With breaches against large corporations and small enterprises happening on a daily basis, the onslaught of ransomware, data breaches and DDoS attacks threatening our professional lives makes it easy to be distracted from the risks we face closer to home. With the rise of smart home devices and the growing digital world of the Internet of Things, cyber security is starting to make its mark on our personal lives.

Siri, Alexa and Cortana are voice assistants that have become household names. The ability to find information quickly, do your shopping and play your music, all with simple commands, is just a sample of the long list of benefits that make them the latest tech must-have. But like most new, shiny tech, the security of these voice assistants is being tested. Recent research by Zhejiang University has shown they can be hacked using ultrasound commands.

How does this happen?

“Dolphin” attacks, as they are known, involve sending audio at frequencies higher than a human can hear. These kinds of attacks rely on the ability to send audio commands to a user’s phone or voice assistant device without the user noticing or interacting with the device. However, the exploitability of these attacks is fairly low, as they require the attacker to be in close proximity to the target, and the device to be set up in “always on” voice mode, which is generally disabled by default.
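To see why such commands are inaudible to the user, consider a minimal NumPy sketch. It assumes the attacker shifts an audible “command” above the roughly 20 kHz ceiling of human hearing by amplitude-modulating it onto an ultrasonic carrier (the general approach described in the Zhejiang research); the 1 kHz tone here is a hypothetical stand-in for real speech, not actual attack code.

```python
import numpy as np

FS = 192_000          # sample rate high enough to represent ultrasound
CARRIER_HZ = 25_000   # above the ~20 kHz limit of human hearing
DURATION = 0.5        # seconds

t = np.arange(int(FS * DURATION)) / FS
# Stand-in for a spoken command: a 1 kHz "voice" tone.
command = np.sin(2 * np.pi * 1_000 * t)
# Amplitude-modulate the command onto the ultrasonic carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
inaudible = (1 + command) * carrier

# All the energy now sits at 24, 25 and 26 kHz -- nothing a human can hear.
spectrum = np.abs(np.fft.rfft(inaudible))
freqs = np.fft.rfftfreq(len(inaudible), d=1 / FS)
audible_share = spectrum[freqs < 20_000].sum() / spectrum.sum()
print(f"share of signal energy below 20 kHz: {audible_share:.4f}")
```

The modulated signal carries the command’s information, yet every frequency component lies above 20 kHz, so a person standing next to the speaker hears nothing.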

What can be done?

Going forward, there are a number of ways this attack can be prevented, from both a software and a hardware perspective. The quickest way of preventing these attacks is to stop the device's microphone from accepting audio above a certain frequency; this could be done by changing the threshold on the device itself or by deploying a software update that disregards any audio outside human vocal frequencies. In all future devices, attacks like these should be considered in the design process, and a whitelist-orientated approach should be taken that only allows known parameters, such as frequencies in this scenario.
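The software mitigation above can be sketched in a few lines: low-pass the microphone stream so any out-of-band content is discarded before it reaches the command recognizer. This is an illustrative sketch, not any vendor's actual implementation; the 8 kHz cutoff is an assumed value chosen because human speech energy sits well below it, and the brick-wall FFT filter stands in for whatever filter a real device would use.

```python
import numpy as np

FS = 192_000        # sample rate of the (hypothetical) microphone stream
CUTOFF_HZ = 8_000   # assumed cutoff: human speech sits well below this

def reject_ultrasound(samples: np.ndarray) -> np.ndarray:
    """Zero out all frequency content above CUTOFF_HZ, so ultrasonic
    commands never reach the voice recognizer."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1 / FS)
    spectrum[freqs > CUTOFF_HZ] = 0          # brick-wall low-pass
    return np.fft.irfft(spectrum, n=len(samples))

t = np.arange(FS) / FS                       # one second of audio
speech = np.sin(2 * np.pi * 300 * t)         # in-band "voice" tone
dolphin = np.sin(2 * np.pi * 25_000 * t)     # ultrasonic injection
filtered = reject_ultrasound(speech + dolphin)
```

After filtering, the in-band speech component survives essentially unchanged while the 25 kHz injection is removed entirely, which is exactly the behaviour a firmware update could enforce.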

As we move further towards voice automation and AI integration, these kinds of attacks will become more prevalent, as systems rely less on human interaction and more on sensor input and machine learning.