It’s something we’ve all experienced, and now researchers at Columbia University have devised a method to prevent rogue microphones from capturing our conversations. Interestingly, one of the use cases for their novel mechanism is disrupting the automatic speech recognition systems in smart voice-activated devices. “Ever noticed online ads following you that are eerily close to something you’ve recently talked about with your friends and family?” asks Columbia University in its writeup of the research. “Microphones are embedded into nearly everything today, from our phones, watches, and televisions to voice assistants, and they are always listening to you.”

No One’s There

Brian Chappell, Chief Security Strategist at BeyondTrust, dismisses the idea outright. He told Lifewire over email that the main culprit in every story that points fingers at a device listening to our conversations is our inherently faulty memory.

Matt Middleton-Leal, Managing Director for Northern Europe at Qualys, told Lifewire over email that it’s only natural for people to assume their devices are following their conversations, especially when they get a recommendation for a product not long after having a conversation about it.

“However, this is not the case—the sheer amount of computing power needed to listen to everyone, all the time, on the off chance that you can recommend products in an advert, would be beyond what is available,” assured Middleton-Leal.

He, too, believes the spooky recommendations are most likely based on browsing history and patterns within social media, which are less obvious. “There are also all the other times where you have a conversation and don’t get a recommendation—you don’t remember those!” said Middleton-Leal.

James Maude, BeyondTrust’s Lead Cyber Security Researcher, also points the finger at our faulty memory. He told Lifewire that online advertising companies have fine-tuned their algorithms to pick up signals for recommendations from all kinds of places, including interactions we might not have registered consciously.

“Even subtle things like pausing slightly on an advert for canoes that catches your eye while scrolling through social media can trigger not only targeted adverts but also boring conversations about canoes with friends, family, and colleagues,” said Maude.

Not Worth The Effort

Chappell asserts that virtually all smart devices with voice interfaces rely on a trigger word to begin processing speech. The saving grace is that this initial recognition of the trigger word happens locally on the device, not on a remote server over the internet. Local detection of the trigger word was driven by concerns over privacy.

“These devices are also experiencing a high degree of scrutiny because of the potential for misuse,” noted Chappell.

But that’s not to say these devices can’t be compromised. Colin Pape, founder of Presearch, firmly believes that any system can be penetrated. “Most consumers have never experienced working with a security researcher and don’t understand the lengths hackers will go to penetrate a system,” said Pape in an email exchange with Lifewire.

He’s of the opinion that people should always operate under the assumption that all devices can be broken into and pause to think about what information they’re willing to give up.

“If you choose to own an Alexa or any other assistant device, it’s important to understand that the device doesn’t need to know all your information,” suggested Pape. “If there is something you prefer not to be broadcasted to the public, there are plenty of other ways to securely discover information or find assistance in day-to-day activities.”

Chappell, however, thinks the fault lies elsewhere. “Particularly, in a day and age when people will happily give away most of their information for ‘free’ games or applications, subterfuge isn’t necessary to get valuable information,” he said. “A compromised device could be used to gather information, but it’s a lot of effort and [money] to provide targeted advertising.”
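Chappell’s point about local trigger-word detection can be illustrated with a toy sketch. This is not any vendor’s actual implementation: it models audio as a stream of already-transcribed words and a hypothetical wake word, purely to show why ordinary conversation never needs to leave the device when the trigger check happens locally.

```python
# Toy model of on-device wake-word gating (illustrative only, not a real
# voice-assistant implementation). Audio is simplified to a list of words;
# nothing is "uploaded" until the trigger word is detected locally.

TRIGGER_WORD = "computer"  # hypothetical wake word


def process_stream(words, window=3):
    """Return only the words that would be forwarded to a remote server.

    Everything stays on-device until TRIGGER_WORD is heard; then the next
    `window` words are treated as the spoken command and forwarded.
    """
    uploaded = []
    i = 0
    while i < len(words):
        if words[i] == TRIGGER_WORD:
            # Trigger detected locally -- forward only the command that follows.
            uploaded.extend(words[i + 1 : i + 1 + window])
            i += 1 + window
        else:
            # Ordinary conversation: discarded on-device, never uploaded.
            i += 1
    return uploaded


if __name__ == "__main__":
    conversation = ["we", "should", "buy", "a", "canoe",
                    "computer", "play", "some", "music",
                    "anyway", "about", "that", "trip"]
    print(process_stream(conversation))  # → ['play', 'some', 'music']
```

The chatter about the canoe is dropped on-device; only the short command after the wake word would ever reach a server, which is why continuous eavesdropping would require a very different (and far more expensive) architecture.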