The first time we talked about smart speakers was around 2018. At that time, they were hitting the market in different shapes and sizes from well-known brands like Amazon, Google, and Apple. Even Facebook, now Meta, decided to join the pack, until 2022, when the company ended its Portal smart speaker project.
Anyway, today, just like then, speakers have traction among users; they now have innovative looks, and some come at a bargain price. It’s true that smart speaker users have benefited from hands-free control and completing tasks more efficiently, but all of this has come at a cost – your privacy and security!
Smart speakers’ privacy risks are real.
Pay attention to the following!
What exactly are smart speakers?
Now, when someone mentions “smart speakers”, you can easily picture them. They’re not a new concept anymore. But let’s go back to the beginning to better understand the privacy risks of using smart speakers.
Smart speakers are voice-activated wireless speakers with built-in artificial intelligence (AI) virtual assistants. They respond to voice commands and “assist” you by executing different tasks. For instance, setting alarms, answering questions, playing your favorite music, controlling smart home devices, or getting weather forecasts.
Smart speakers require an Internet connection (Wi-Fi) and specific technology to understand voice commands. They are equipped with all the necessary microphones to pick up spoken commands from a distance. Their design also includes the ability to filter background noise. They can be controlled through physical buttons, a mobile application, or voice commands. Smart speakers allow hands-free interaction with other smart devices at home (security cameras, lighting systems, thermostats, etc.) and they can be paired with more smart speakers to get multi-room audio playback.
Currently, the most popular AI assistants on the market include Amazon’s Alexa, Google Assistant, Microsoft’s Cortana, Samsung’s Bixby, and Apple’s Siri. You can find these assistants on many devices besides speakers: laptops, desktops, phones, tablets, wearables, smart home devices, etc. But let’s stay focused on speakers, where Amazon’s Echo and Google Home dominate the scene.
How do smart speakers work?
Smart speakers work through a combination of hardware and software technologies that receive and process voice commands.
Wake word detection
The smart speaker is permanently listening, waiting for a wake word, which is usually simply the assistant’s name (like Alexa, Siri, or Google). Once you wake it, the device gets ready to follow your commands.
Accurate detection is vital to reduce the possibility of “false positives” and “false negatives” in wake word recognition. A “false positive” means the smart speaker wakes by mistake, confused by a sound that was not its exact trigger word, while a “false negative” refers to cases in which the smart speaker does not respond to the wake word. Another challenge is properly handling differences in pronunciation. Environmental, human, and technical factors make this a complex task for devices with limited CPU power.
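The trade-off between false positives and false negatives can be sketched as a simple threshold decision. This is a toy illustration, not a real wake word engine: the per-frame scores below are simulated, whereas real devices compute them with a small on-device model.

```python
# Toy sketch of threshold-based wake word detection.
# Raising the threshold reduces false positives (accidental wakes)
# but increases false negatives (missed wake words).

WAKE_THRESHOLD = 0.85

def detect_wake(frame_scores):
    """Return the index of the first audio frame whose wake word
    score crosses the threshold, or None if the device never wakes."""
    for i, score in enumerate(frame_scores):
        if score >= WAKE_THRESHOLD:
            return i  # the device "wakes" here and starts recording
    return None  # no wake (or a false negative)

# Simulated detector scores for a stream of audio frames:
scores = [0.10, 0.40, 0.83, 0.92, 0.30]
print(detect_wake(scores))  # frame 3 triggers the wake
```

Note how the frame scoring 0.83 does not wake the device: with a lower threshold it would, which is exactly how a background sound resembling the trigger word can cause an accidental recording.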
ASR or automatic speech recognition
ASR, or automatic speech recognition, is the process of translating your voice commands into text. Once you wake the speaker, the device records your words and sends the audio over the Internet to a voice recognition service. This service uses algorithms that let the system become familiar with your specific speech patterns and word choices. In short, it learns how you speak. Companies say this helps them improve their services. However, we can’t help seeing it as a red flag among smart speakers’ security risks.
NLP or natural language processing
This is another key process: interpreting the meaning of your commands. At this point, the voice command is already text, so it is analyzed in terms of context, grammar, and syntax to determine exactly what the user wants.
Once the command is fully interpreted and understood, the powerful servers of a cloud service look up the requested data (weather forecasts, songs, news, etc.). Users’ commands can also involve actions; in that case, the cloud focuses on executing the task (turning on the coffee machine, adjusting a thermostat, turning off lights, etc.).
The cloud is a major player in how smart speakers work. Speech recognition and natural language processing are essential steps that also commonly take place there.
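The whole flow described above (wake, record, speech-to-text, intent parsing, task execution) can be sketched as a tiny pipeline. Everything here is illustrative: the function names, the phrase matching, and the responses are invented stand-ins, not any vendor’s real API, and the “ASR” step simply normalizes text instead of processing audio.

```python
# Toy sketch of the command pipeline: ASR -> NLP intent parsing -> action.

def asr(audio: str) -> str:
    # Stand-in for cloud speech-to-text; here the "audio" is already text.
    return audio.lower().strip()

def parse_intent(text: str):
    # Toy NLP step: map a phrase to an intent and its argument.
    if text.startswith("play "):
        return ("play_music", text[5:])
    if "weather" in text:
        return ("get_weather", None)
    return ("unknown", text)

def handle(audio: str) -> str:
    intent, arg = parse_intent(asr(audio))
    if intent == "play_music":
        return f"Playing {arg}"
    if intent == "get_weather":
        return "Fetching the forecast"
    return "Sorry, I didn't understand"

print(handle("Play jazz"))           # Playing jazz
print(handle("What's the weather?")) # Fetching the forecast
```

The privacy-relevant point is visible even in this sketch: the raw command has to travel to, and be interpreted by, code you don’t control.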
Smart speakers’ privacy risks
From their arrival on the market until now, smart speakers have represented privacy and security risks. Many cases have already been made public and documented. Let’s start with some of the smart speakers’ privacy risks.
“Fortuitous” smart speaker recordings
Always-on microphones in your home or office mean risks and ethical concerns. Smart speakers rely entirely on microphones; they need to hear you in order to operate. We already explained how smart speakers work, so you know they record you to process your commands. These voice files can easily be used for identity theft. Criminals can take this crime to the next level by using your voice to obtain a loan or credit.
Besides, remember that “false positives” really happen and can lead to privacy risks. It is enough for the smart speaker to mishear its activation word for it to start recording (a private conversation, a phone call with your bank, whatever is happening around it) and send everything it captures to another entity – sometimes a criminal one. Extortion and all types of fraud become possible.
Big companies like Amazon and Google say their products and their users’ data are safe. Even if we naively believe them, despite the scandals they have already been involved in, that does not mean they, or criminals, can’t deliberately obtain and record your data if they want to. This is where a risky gap opens for users. Those smart speakers in your home can easily become someone’s prying ears.
Nowadays, you can mute the microphones to reduce smart speakers’ privacy risks, an option you should definitely take advantage of.
Human access to your voice-recordings storage
Every time you use the smart speaker, a new audio file is stored. Remember that these files aren’t just stored locally; they are sent to a cloud service to be processed. In theory, only a small number of people have access to these recordings, and they use them only to enhance different aspects of the service, like voice recognition.
Reality has shown that this data has also been used to build advertising profiles of users. And that only increases the number of people with access to your private life (marital status, health condition, finances, and shopping preferences, to name just a few).
This is exactly why you should care about adjusting your smart speakers’ default settings. There you can define, at least to a certain point, what can and cannot be done with your recordings: store them only locally, and don’t let the manufacturer access and collect your data “by default”. Change the settings! Avoid smart speakers’ privacy risks!
Smart speakers’ security risks
Hackers can control your IoT
Yes, not only eavesdropping but also hacking is on the list of smart speakers’ security risks. Smart speakers are part of the Internet of Things (learn what the IoT is). They are often hacked by criminals with different purposes, which can mean vulnerabilities for your home and security. Their design focuses on quality microphones and sound rather than on strong security defenses. Hackers can gain access to your network and personal data (a data breach).
They can infect your smart speakers to recruit them (hijacking) for a botnet to execute DDoS attacks (What is a DDoS attack and what is DDoS Protection?). Since users usually control other smart devices with smart speakers, the infection can reach them too. Criminals can access those devices’ information (security cameras, smart locks of your doors, windows, or garage, information sent to the Internet, etc.) and control them. Through the use of malicious software, they can spy on you.
Smart speakers can assist you in making purchases. It is not a safe practice, but users still do it. This gives the smart speaker access to sensitive data (passwords, usernames, bank card details, etc.). If hackers reach such information, they could easily make purchases on your behalf.
The aim of phishing attacks is to steal your private data (learn more about phishing attacks here). Smart speakers can be used to trick users into giving away passwords or personal data through voice commands.
Yes, devices can certainly be improved in terms of security to lower the risk of exposing your data to outsiders, yet they will still collect data and use it, or sometimes even sell it, because users’ data has become pure gold for both companies (of all types) and criminals. Therefore, smart speakers’ privacy risks are a real worry. For the companies producing the devices, the smart speaker is two things: one, the best spy they could ever dream of, and two, a convenient marketplace in your home. They can easily sell you products on the spot! Knowing every user’s move allows them to build a very detailed profile with dream-like accuracy. They clearly won’t give this up easily because it means profit. Now, imagine the scary but profitable possibilities this offers to criminals!
Last but not least, smart speakers can still be fooled. People with a voice similar to yours can trick the devices, interact with them, and access everything you have shared. Some cheaper speakers can even be activated by an audio recording of your voice.
So, the big question: Should you be paranoid about smart speakers? At this point, yes! In the future, still yes! If you value your privacy, don’t buy a smart speaker ever!