Amazon Alexa Skills May Pose Security Threats

Users of Amazon Alexa "skills" may have more to worry about than simply recalling all the one-liner jokes these devices can tell.


Virtual assistants like Alexa and Cortana are becoming larger pieces of the conference room experience.

Amazon’s Alexa-enabled smart speakers have grown increasingly popular in the consumer market, and even in select retail and corporate environments. One of the selling points the company has pushed is Amazon Alexa skills.

As a recent post on The Verge pointed out, many of the over 100,000 skills are “one-note” novelties that are completely forgettable. But apparently, they also pose threats to users’ privacy.

A large-scale study of vulnerabilities in Alexa skills recently identified weaknesses in the vetting process Amazon uses to approve each skill.

More details from The Verge:

  • Activating the wrong skill. Since 2017, Alexa will automatically enable skills if users ask the right question (otherwise known as an “invocation phrase”). But researchers found that in the US store alone there were 9,948 skills with duplicate invocation phrases. That means if you ask Alexa for “space facts,” for example, it will automatically enable one of the numerous skills that uses this phrase. How that skill is chosen is a complete mystery, but it could well lead to users activating the wrong or unwanted skills.
  • Publishing skills under false names. When you’re installing a skill you might check the developer’s name to ensure its trustworthiness. But researchers found that Amazon’s vetting process to check developers are who they say they are isn’t very secure. They were able to publish skills under the names of big corporations like Microsoft and Samsung. Attackers could easily publish skills pretending to be from reputable firms.
  • Changing code after publication. The researchers found that publishers can make changes to the backend code used by skills after publication. This doesn’t mean they can change a skill to do just anything, but they could use this loophole to slip dubious actions into skills. So, for example, you could publish a skill for children that would be verified by Amazon’s safety team, before changing the backend code so it asks for sensitive information.
  • Lax privacy policies. Privacy policies are supposed to inform users about how their data is being collected and used, but Amazon doesn’t require skills to have accompanying policies. Researchers found that only 28.5 percent of US skills have valid privacy policies, and this figure is even lower for skills aimed at children — just 13.6 percent.
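The “changing code after publication” finding above hinges on how Alexa skills are built: the skill’s logic is a web service (often an AWS Lambda function) hosted and controlled by the developer, so Amazon certifies the skill’s behavior at review time but does not re-review the backend when it changes. The sketch below is a minimal, hypothetical handler modeled on the Alexa request/response JSON shape; the intent name and fact text are illustrative, not from any real skill.

```python
# Minimal sketch of a developer-hosted Alexa skill backend.
# Because this code runs on the developer's own server or Lambda,
# it can be edited at any time after Amazon certifies the skill --
# which is the loophole the researchers describe.

def build_response(speech_text):
    """Wrap speech text in an Alexa-style response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": True,
        },
    }

def handle_request(event):
    """Dispatch an incoming Alexa request by intent name."""
    intent = (
        event.get("request", {})
        .get("intent", {})
        .get("name", "")
    )
    if intent == "SpaceFactIntent":  # hypothetical intent for a "space facts" skill
        return build_response("A day on Venus is longer than its year.")
    return build_response("Sorry, I don't know that one.")
```

Nothing in the certified skill manifest constrains what `handle_request` returns after launch, which is why a benign, approved skill could later be modified to prompt users for sensitive information.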

The Verge recommends that users of these devices comb through their installed skills and delete any they don’t actually need.


While none of these issues has yet been tied to a real-world attack, it seems AI assistants have a long way to go before they can be trusted with mission-critical applications, such as those seen in commercial installation projects.


About the Author


Adam Forziati is the former senior web editor for Commercial Integrator and MyTechDecisions.

