I recall a report from a few months ago, in May 2016, of security researchers finding ways to access smart locks connected to SmartThings, even programming in new PIN codes and texting them to an attacker.
Were any of these issues ever addressed? The article seems to suggest that some of the issues are structurally part of the SmartThings system.
“There are a few reasons to doubt the assurances. First, they make no mention of either of the underlying design flaws identified by the researchers. And second, they gloss over the fact that at least one app that passed review and was available in the SmartApps store already made attacks feasible. According to the researchers, the design of the SmartThings framework was a key contributor to that threat. So far, Samsung has provided no details on plans to fix it.”
This attack is only possible by stealing your OAuth token, either through phishing or by revealing it some other way. If you keep that token secured, you cannot be hacked this way. The attack described in the article works by redirecting you to a “fake” login page to capture your OAuth token; the attackers could just as easily grab your username and password with that process.
SmartThings’ response was to require review and approval of all published SmartApps that use OAuth, to verify their security adherence. OAuth is required to allow cloud-to-cloud connectivity. Because OAuth is available to the community for developing their own SmartApps, this attack will always be viable if your token is stolen or if you add compromised code to your IDE.
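For anyone curious how that kind of token theft is normally blocked, here is a minimal sketch assuming a generic OAuth 2.0 authorization server; the client ID, URIs, and function names are hypothetical and not SmartThings’ actual implementation. The idea is that the server only delivers the authorization code (and thus the token) to a redirect URI pre-registered for that client, so a phishing page cannot substitute its own destination:

```python
# Minimal sketch of strict redirect URI validation in an OAuth 2.0
# authorization server. All names here are hypothetical, not SmartThings'
# actual implementation.
from urllib.parse import urlparse

# Hypothetical registry of redirect URIs declared when each OAuth client
# (e.g., a published SmartApp) was registered.
REGISTERED_REDIRECTS = {
    "example-smartapp-client-id": {"https://example-smartapp.com/oauth/callback"},
}

def is_valid_redirect(client_id: str, redirect_uri: str) -> bool:
    """Accept only an exact, pre-registered HTTPS redirect URI."""
    allowed = REGISTERED_REDIRECTS.get(client_id, set())
    return urlparse(redirect_uri).scheme == "https" and redirect_uri in allowed

# The legitimate callback is accepted; an attacker-controlled URI is
# rejected, so the authorization code never reaches a fake login page.
assert is_valid_redirect("example-smartapp-client-id",
                         "https://example-smartapp.com/oauth/callback")
assert not is_valid_redirect("example-smartapp-client-id",
                             "https://attacker.example/steal")
```

Exact matching, rather than prefix or substring matching, is the generally recommended practice for redirect URIs.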
tgauchat (ActionTiles.com co-founder Terry @ActionTiles; GitHub: @cosmicpuppy):
Honestly, it is much more subtle and complicated than the security researchers implied.
They implied a vulnerability that, frankly, SmartThings never claimed to protect against. That leaves them in a grey area of what consumers should, rightly or wrongly, assume about a system. Never assume, right?
A good analogy was/is:
1. How responsible is Android (Google) if a user installs an arbitrary 3rd party App from the Play Store that turns out to be malicious?
2. What if the Android installer lists all the “permissions” the App has and the user simply ignores them? (E.g., Android will inform you that an App has access to your microphone, but will not tell you that the App intends to turn on your microphone, record your private conversations, and share them with the NSA.)
3. What if the user sets the “Allow Apps from non-Play Store” flag and one of those Apps turns out to be malicious?
4. What if the user downloads the Android Developer Kit, pastes the source code for an App into the Android SDK, and loads it onto their phone?
To the best of my knowledge, SmartThings has fixed #1 and #2 by implementing strict code review of published Apps, and partially addressed #3 (which was never a “bug”) by restricting installs that “looked” like they were official.
SmartThings users are still permitted to paste arbitrary code into their API/IDE Account - and “nobody” wants to lose this ability. So while item #4 is a vulnerability, the responsibility for this lies 100% on the user.
For further concerns, I recommend emailing Support@SmartThings.com; they will give you their official answer addressing the aforementioned study report.
#4 is the one I don’t get the hype over. You are pasting code into an editor and running it; of course bad things can happen. That is true on any system: Windows, Mac, Linux, etc. Pasting an unknown script into a command prompt carries the same level of risk.
Users should know where their code is coming from. ST could help by finally launching a real App Store with vetted and approved apps. That’s what needs to happen to fix this.
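As a concrete example of “know where your code is coming from”: before pasting community code into the IDE, you can at least verify that the file you downloaded matches a checksum the author published. A minimal sketch in Python; the source bytes and the idea of a published digest are placeholders, not an existing SmartThings mechanism:

```python
# Minimal sketch: verify downloaded source against an author-published
# SHA-256 digest before pasting it into the IDE. The source bytes and
# digest below are placeholders, not a real SmartApp artifact.
import hashlib

def verify_source(code_bytes: bytes, published_digest: str) -> bool:
    """Return True if the code's SHA-256 matches the published digest."""
    return hashlib.sha256(code_bytes).hexdigest() == published_digest

# Inline bytes standing in for a downloaded .groovy file.
source = b"definition(name: 'Example SmartApp') { /* ... */ }"
# In reality this digest would be copied from the author's release notes.
published = hashlib.sha256(source).hexdigest()

if verify_source(source, published):
    print("Digest matches what the author published.")
else:
    print("Checksum mismatch: do not paste this code into the IDE.")
```

Of course, this only proves the code is what the author shipped; it says nothing about whether the author’s code is itself trustworthy.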
tgauchat (ActionTiles.com co-founder Terry @ActionTiles; GitHub: @cosmicpuppy):
They already have one – it’s called “Marketplace” and is the 4th tab in the SmartThings App.
I don’t mean to come across as being offensive, but if you think that’s an app store and that they are equivalent, or even in the same class, you seriously need to get around more!
I wouldn’t call what Apple and Google do “vetting”. You can easily get an application with serious security flaws approved in either store. They do a cursory review of your app against some guidelines, concerned mostly with user experience and with making sure they get their cut of in-app purchases. They will scan your code with automated tools that flag common security issues, but they in no way “vet” that your app is secure. That is still the responsibility of the developer, though they do go to great lengths to mitigate the impact of insecure and malicious apps through proper segregation.