Dangerous rules that an app should avoid

Hi all,
I would like to certify a SmartApp based on a set of rules. As we all know, given the device capabilities and inputs granted at installation time, a developer is free to implement any logic in the app, and in some cases an adversary can inject code that changes the logic of the app.

Therefore, my question is: can we declare a set of rules that describes the dangerous or unwanted behaviors of an app? For instance, "no app should open my door while the presence sensor reports not present," or "no app should read my location and then send an SMS message or a PUT request" (since an app could exfiltrate my location that way).
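Just to show the shape of the idea, here is a rough, purely hypothetical sketch (the Rule class and the rule strings are made up, nothing here is a real SmartThings API) of how such a list of disallowed trigger/action pairs could be written down:

```groovy
// Purely hypothetical sketch of a "dangerous behavior" rule list.
// Each rule pairs a trigger condition with an action that should never follow from it.

class Rule {
    String trigger   // condition observed by the app
    String action    // command or outbound call the app would issue
    String reason
}

def dangerousRules = [
    new Rule(trigger: 'presence == "not present"', action: 'unlock the door',
             reason: 'the door should not open while nobody is home'),
    new Rule(trigger: 'read location',             action: 'sendSms() or httpPut()',
             reason: 'the location could be exfiltrated')
]

dangerousRules.each { r ->
    println "Disallow: ${r.trigger} -> ${r.action} (${r.reason})"
}
```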

What do you think? Can we extend this list to have more secure apps?

I don't know how you could possibly do this; there are way too many cases where people use an app differently, or use presence detectors or contact sensors in unconventional ways.

A simple case: a flood sensor. During Christmas, I used my flood sensor to let me know when there was NO water, so I could add water to the tree. So you could really never develop a set of rules that would be worthwhile at all.

If you are worried about this, don’t install SmartApps by any other developer, just use your own or SmartThings stock SmartApps.


Does it scare you that SmartThings employees can access and see all your devices linked to the hub? They can remotely push updates as well as access an in-depth log of events for your hub.

Thanks for the reply @c1arkbar. It's not the employees I'm worried about, but the programming mistakes of a developer, or an adversary who might exploit the code "somehow" and change the behavior of the app.

I have to agree with @thrash99er on this. I like the openness of the system. Like they said, just install apps from trusted developers or write your own.

Yeah, I'm not sure you quite grasp what a SmartApp is in the SmartThings context; it isn't like an app on your phone. I'm just not sure where you are going with this. The entire SmartThings ecosystem is built for openness, so that flies in the face of what you are trying to accomplish.

I see your point. Sorta like an anti-virus thingy, right? I use an app called Host Pinger that grabs the access token and endpoints of devices and monitors IP addresses on your network… Seems like that's all the information needed to hack into my ST account, but I have zero fear of that because the app rocks and many people use it.

However, I WOULD be, and WAS, very skeptical of apps that have you log in with your ST credentials. I guess you have to embrace the openness of the ST system. What I've found is that this community has many talented people, and I believe it would be hard for anyone to pull a fast one on it.

On the other hand, I would be pissed if I wanted to create a rule that made total sense to me but was “blocked” because ST decided to limit what I could and couldn’t do with “My” rules…

I can kinda see some sense in the idea; however, I don't think it will be feasible to implement.

My first thought is that any automated checking of Groovy code could fairly easily be circumvented, because Groovy allows dynamic declaration and invocation of variables and methods, so it would be possible to obfuscate any malicious logic, unless of course one of the rules was to disallow any dynamic declarations at all. :wink:
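Just to illustrate (a purely hypothetical sketch, not real SmartApp code; the FakeLock class simply stands in for a device that would normally be selected via input()): a checker that scans the source for a literal unlock() call would not flag this.

```groovy
// Hypothetical sketch: why pattern-matching over Groovy source is easy to evade.
// A rule that greps for a literal "unlock()" call would miss this code.

class FakeLock {                        // stand-in for a real lock device
    def unlock() { println 'unlock() was called' }
}

def lock = new FakeLock()
def action = ['un', 'lock'].join()      // the string "unlock" only exists at runtime

lock."${action}"()                      // dynamic dispatch: no literal unlock() call appears in the source
```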

Also, who is going to write the tool that certifies app code against the rules we come up with? Unless you've written the certification tool yourself, how are you going to certify the certification tool?!? :smiling_imp:

The main mechanism I can think of for this to happen is where a user has linked their account to SmartApp or DTH code in a third-party GitHub repository and then pulls an update without reviewing the changes. Is this what you are referring to, or do you envisage other mechanisms? In any case, how would the code-checking occur, and at what point?

In my view, right now, the only practical way to guard against malicious code is to read and understand all third-party code before you install it into your IDE.

@rontalley, yes, you can think of it as an anti-virus mechanism. @zcapr17, I believe that as this platform matures, more security and privacy problems will arise. The tool could be used either by employees before deployment to verify an app's security, or to notify users about the consequences of an app's misuse. Consider an app that uses a web service: the server may be hacked and send malicious content to the app through the web service, which the app then treats as a variable, completely changing its logic.
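To make that concrete, here is a rough, purely hypothetical sketch (FakeDoor and the hard-coded response stand in for a real device and a real httpGet() call): if the app dispatches whatever command the remote response contains, a compromised server controls the device without any change to the app's code.

```groovy
// Hypothetical sketch of the risk above: a value fetched from a remote web service
// is used directly to pick which command runs on a device.

class FakeDoor {                                 // stand-in for a real lock device
    def lock()   { println 'door locked' }
    def unlock() { println 'door unlocked' }
}

def door = new FakeDoor()
def remoteResponse = [command: 'unlock']         // imagine this arrived in an httpGet() JSON body

// Dangerous pattern: dispatching on untrusted input.
door."${remoteResponse.command}"()

// Safer pattern: only issue commands from an explicit whitelist.
def allowed = ['lock']
if (remoteResponse.command in allowed) {
    door."${remoteResponse.command}"()
}
```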

Hold on guys, I’m going to need more tinfoil for this thread.

I wonder if Trump or KellyAnne have a Samsung Microwave :robot:


This is a good point. Code injection via web services is a very real threat right now, not just in the future. WordPress closed one such vulnerability in February, for example.

However, I think this is a wider issue for the SmartThings platform as a whole, not just for user SmartApps that happen to be hosting web APIs, as the whole platform is using cloud-based web services.

It would be good to get a comment from SmartThings @Staff_Members about how actively they are reviewing the security of their web services. Ideally, there should already be some kind of active monitoring in place that will detect and block code injection attacks… (?)