Simulating IoT devices

On the new ST platform, is it possible to create virtual IoT devices and integrate them with my physical hub, so I can control them and get status notifications from them?

Can I simulate the whole smart home system (including the hub), or must I have a physical hub?

No Hub is required, but you cannot simulate a Hub.

What exactly are you trying to accomplish? - you’ll get much more useful answers if you talk about your end-goal and use cases, rather than questions without any context…

You can use the Virtual Device Creator (Add a SmartApp -> + More) and create a virtual switch or dimmer.

I’m working on a project at university. I live in Tunisia and, unfortunately, we have many constraints on e-payment for buying online; that is why I only want to do a simulation.

I am trying to think of a solution to ensure that a third-party app uses its permissions (given by the user at installation) without violating user security and privacy.
For example, if an Automation SmartApp has permission to control the door lock, it should not be able to unlock it when the location is set to Night mode, since doing so could be considered a malicious action.
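To make the idea concrete, the kind of conditional grant I have in mind could be expressed as a mode-aware policy check in front of each command. A minimal sketch in Python (the rule format and all names here are entirely hypothetical; nothing like this exists in the SmartThings permission model today):

```python
# Hypothetical fine-grained permission check: a command is allowed only
# if no rule forbids it for the current location mode.

def is_allowed(command, device, mode, rules):
    """Return False if any rule denies (command, device) in this mode."""
    for rule in rules:
        if (rule["command"] == command
                and rule["device"] == device
                and mode in rule["denied_modes"]):
            return False
    return True

rules = [
    # The app may control the lock, but may never unlock it at night or away.
    {"command": "unlock", "device": "front-door-lock",
     "denied_modes": {"Night", "Away"}},
]

print(is_allowed("lock", "front-door-lock", "Night", rules))    # True
print(is_allowed("unlock", "front-door-lock", "Night", rules))  # False
print(is_allowed("unlock", "front-door-lock", "Home", rules))   # True
```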

Any thoughts please?

This has already been answered by SmartThings and confirmed by researchers.

SmartThings has never claimed to offer “highly-granular” access control. Once a customer grants a SmartApp (basic SmartApp, IFTTT, Amazon, ActionTiles, etc.) access to a Device, the SmartApp has full access to all of the Device’s Commands and Attributes, unless you completely disable the SmartApp.

The product works as designed and intended. That doesn’t mean that having more granular control wouldn’t be a good thing, but such control is not common in the greater tech industry.

Take Android as a comparable platform, for example:

  • If you grant an App access to your Camera, it can use that Camera for whatever and whenever it wants - even if the App’s primary purpose is to scan barcodes.
  • If you grant an App access to “make phone calls”, you can’t restrict the app from making long-distance or spam phone calls.

Should consumers be concerned about this? At the moment, they should be much more concerned about the millions of effectively unvetted smartphone Apps (the above two examples are a fraction of the vulnerabilities!) than smart home platforms.

Android and iOS have steadily improved / evolved their access control options over the years.

Samsung has the opportunity to do the same, and has made some minor changes to the access control model in the new API (which is still, effectively, under development and subject to substantial change): https://smartthings.developer.samsung.com/


Rather than start with a flawed hypothetical scenario, I suggest you familiarize yourself with the exact level of access control that the Platform asserts it has.

Then you can test against those assertions, not ones you are making up.

Alternatively, you can take the perspective that consumers are unaware of the range of risks, I suppose.

Thank you @tgauchat for taking the time to respond to my questions, it’s a great pleasure 🙂

Well, I think it is a bad thing: a fine-grained permission model (e.g., give the app access to the door lock, but only under certain conditions, such as never unlocking the door when mode = Away) requires the user to carefully read the prompted permissions, which leads to annoyance and habituation. That is why the current coarse-grained model is the most widely deployed in smartphone and smart home platforms. Am I correct?

I think the opposite: since smart home apps have access to the physical environment, malicious use of that access can lead to physical damage, in contrast to smartphone apps, which only affect the cyber world.

My current understanding is that the SmartThings permission model is the same as the one used in smartphone platforms, i.e., just as you said before: the user is responsible for granting permissions to the app. An app’s access to devices and data (through the ST API) is controlled by OAuth tokens (i.e., permissions = scopes). So, what exactly do you mean by “level of access”?
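For illustration, the coarseness I mean shows up in how a scope check would work. The `r:`/`w:`/`x:` scope-string pattern follows the public SmartThings API documentation, but the checking function below is my own sketch, not platform code:

```python
# Sketch of how SmartThings-style OAuth scopes gate access.  Note there
# is no notion of mode, time, or triggering event anywhere in the scope:
# the grant is all-or-nothing per device.

def can_execute(scopes, device_id):
    """An execute ('x') scope for this device, or a wildcard, suffices."""
    return f"x:devices:{device_id}" in scopes or "x:devices:*" in scopes

# Scopes a user might grant at installation time (device ids are placeholders).
granted = {"r:devices:*", "x:devices:front-door-lock"}

print(can_execute(granted, "front-door-lock"))  # True, in any mode, at any time
print(can_execute(granted, "bedroom-lamp"))     # False
```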

I think the scenario that I’ve mentioned (an app unlocks the door lock when mode = Away) is realistic: an attacker (using an app that is subscribed to mode-change events and already has access to the door lock) is able to know when the inhabitants are not home and break into the house. This scenario suggests that consumers should be concerned about smart home apps even more than smartphone apps.

Before SmartThings changed its platform, I was thinking of dynamically monitoring installed SmartApps (i.e., at run-time) and informing the user when an app tries to perform a sensitive action (e.g., unlock a door), giving him/her the ability to confirm or reject this action.
However, since apps are no longer executed on the ST platform, I am not sure if this approach is still possible, as app logic cannot be controlled at run-time. Do you know, please, if the permissions of an app (e.g., a webhook) can be controlled in a similar way on the current API-based platform?

Of course we have an emotional connection with the sanctity of our homes, but there still is a much, much greater risk of “smash and grab” than any sort of cyber attack on home IoT. There’s no profit in attacks on homes, relative to what can be gained through phishing credentials via a smartphone App.

Plus, if you grant Location Services to a malicious smartphone App, that App can now tell the attackers exactly where you are located, trace your commute, and more. SmartThings itself doesn’t have microphone access (I think?), but more and more phone apps do.

By “level of access”, I mean granularity. Once a user grants a SmartApp access to a Device, it has access to all of the Device’s Attributes and Commands. Once a user grants any SmartApp access to any Device, it also has access to all Attributes and Events of the Location object (including Mode, Routines, and, I think, SHM Mode).

We will have to agree to disagree. If an attacker wants to break into a home, they can already just observe the comings and goings visibly and would already be in place to break in via a smash-and-grab, rather than relying on a complex SmartThings platform attack.

I’m not 100% sure what you are describing, but in both the Groovy (legacy) API and the new API, the ability to insert a monitoring and control layer (for detecting sensitive actions and offering the ability to confirm/reject) is impossible for a 3rd Party - unless all SmartApp developers agree to route their access through that layer - which, of course, malicious SmartApps will not do.

SmartThings themselves would have to make this an integral part of the Platform and API. And they are not inclined to add this tremendous degree of complexity and overhead for the minuscule gains in security.


Yes - SmartThings should (and does) have various initiatives in place to watch for malicious activities, and to steadily increase the granularity of access control offered to Customers, but I believe that your case scenarios are unrealistically hypothetical at this time. Commercial companies have a limited tolerance for allocating resources to the prevention of extremely low probability attack vectors.

I assert that it is more important to identify a realistic problem, then determine statistics for the actual occurrence and predict the likelihood of growth and impact of the problem, before proposing a solution which severely impacts user convenience and complexity.

I have an issue in creating a Directly Connected Virtual Device in Dev. Workspace.

I successfully created a device profile (Develop -> Devices -> Device Profile), but when I tried to select a Device Profile (Tools -> Virtual Device) to register a virtual device, I couldn’t find the profile I had created (no options were listed).

Any help, please?

Hi @tgauchat, I have a question please.

Given an automation app that turns on a light when there is motion: can the app use the token generated for that functionality to send an API request to turn on the light device without receiving a motion event from the ST API?

I assume this app is a webhook that can also launch API calls on its own, not just a Lambda function that can only make calls upon receiving events.

I’m sorry… I don’t understand your question.

I just want to know if a trigger-action automation app can use its OAuth token to make an API call (the action) without receiving the triggering event, i.e., does the scope of the token allow this?

I got an answer in this thread OAuth Token Misuse Possible, but I’m not sure of it.

Ummm… I can sorta guess what you’re asking but I’m still not sure. What the heck do you mean by a “trigger-action automation App”?

How about giving me a very simple but specific real-world hypothetical example, please?

Trigger-action app means that the app takes an action upon receiving a triggering event. For example, an app turns on a light (action) after being notified that motion was detected (trigger).

So my question is: does the scope of the OAuth token generated at installation time (after the user grants permission) allow the app to turn on the light without waiting for motion to be detected? I.e., can it send a call to the ST API without waiting for an event from it?

If this is true, then we can consider this a misuse of authorization, since the user granted the app permission only to do the motion-light automation, not to turn on the light at any time.
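To make the question concrete, here is a sketch of what such an out-of-band call would look like. The endpoint path and command body shape follow the public SmartThings REST API, but the device id and token are placeholders, and the helper only builds the request rather than sending it:

```python
import json

# A webhook SmartApp installed for a motion->light automation holds a
# bearer token scoped to the light.  Nothing binds the token to the
# motion event, so the app's own server can build (and send) this call
# at any time, with no trigger required.

API = "https://api.smartthings.com/v1"

def build_switch_on_request(device_id, token):
    """Return (url, headers, body) for an unconditional 'on' command."""
    url = f"{API}/devices/{device_id}/commands"
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    body = json.dumps({"commands": [
        {"component": "main", "capability": "switch", "command": "on"}]})
    return url, headers, body

url, headers, body = build_switch_on_request("light-1", "PLACEHOLDER-TOKEN")
print(url)  # no motion event was needed to construct or issue this request
```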

No - nowhere does SmartThings claim that the scope of the authorization is limited to the extent you are foisting upon it - i.e., to the extent that might be “ideal”, but is far too complex for a user to manage.

I return to the smart phone example. Various Apps request permission “to make phone calls and send text messages”. They might claim to need this in order to reach your emergency contact or to send alerts to designated contacts.

Neither Android nor iOS allow the user to restrict the scope of that permission in any further way, including:

  • what numbers can be called.
  • during what time ranges a call can be made
  • and most definitely not: after which specific “triggering events” a phone call can be made.

I see where you are going with this, but you are expecting far too much of any currently available platform.

SmartApps can be written in a Turing Complete language. In other words, they can implement essentially arbitrarily complex logic.

The logic isn’t limited to “if the moon is full, then turn off the lights”. If all SmartThings were “IFTTT” simple, then your security enforcement model would be applicable.

But a SmartApp could be “if the moon is full and everyone is at home, but it is not after midnight, and the good-night routine has been run, and there has been no motion on detectors a, b, c, and the weather is clear, and my schedule has no meetings before 8am tomorrow…”… with all these conditions being parameters set after installation by the user…
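Written out as code, that hypothetical SmartApp’s condition might look like this (every name here is invented for illustration; the point is that no permission prompt can summarize a predicate like this):

```python
# A user-parameterized condition of arbitrary complexity.  Each key is
# a user-set parameter or a piece of live platform state (all invented).

def should_turn_off_lights(s):
    return (s["moon_is_full"]
            and s["everyone_home"]
            and not s["after_midnight"]
            and s["goodnight_routine_ran"]
            and not any(s["motion"][d] for d in ("a", "b", "c"))
            and s["weather"] == "clear"
            and s["first_meeting_hour"] >= 8)

state = {
    "moon_is_full": True, "everyone_home": True, "after_midnight": False,
    "goodnight_routine_ran": True,
    "motion": {"a": False, "b": False, "c": False},
    "weather": "clear", "first_meeting_hour": 9,
}
print(should_turn_off_lights(state))  # True
```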

How can a security mechanism possibly verify that the SmartApp is doing what it has presented to the user, without crippling the ability for SmartApps to be arbitrarily complex?

The current legacy method? SmartApps go through a manual review to check for “obvious” abuses before they are certified and published.

The Play Store method? Apps are machine checked for obvious abuses, plus pattern matching and various degrees of AI, but still, essentially superficially.

The feasible (but costly and impractical and still imperfect) method: Apps are run through arbitrarily complex simulations (that are not predictable by the developer) to confirm that they don’t take unexpected actions. This is known as “sample auditing”. Since there are infinite scenarios, only a random sample can be tested.

How are the samples generated? Just like in all automated testing, the developer specifies assertions which are both human and machine readable.
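A toy sketch of such sample auditing, with a stub app and a developer-supplied, machine-readable assertion (the app, the assertion, and the event/state model are all invented for illustration):

```python
import random

# "Sample auditing": drive an app with random event sequences that the
# developer cannot predict, and check the developer's assertion after
# every step.  Only a random sample of the infinite scenarios is tested.

def motion_light_app(state, event):
    """Intended behaviour: turn the light on when motion is reported."""
    if event == "motion":
        state["light"] = "on"
    return state

def never_touches_lock(state, event):
    # Developer's machine-readable claim: this app leaves the lock alone.
    return state["lock"] == "locked"

def audit(app, assertion, trials=1000, seed=0):
    rng = random.Random(seed)
    events = ["motion", "no-motion", "mode:Away", "mode:Night"]
    for _ in range(trials):
        state = {"light": "off", "lock": "locked"}
        for event in rng.choices(events, k=10):
            state = app(state, event)
            if not assertion(state, event):
                return False  # a sampled run violated the assertion
    return True  # no violation found (which proves nothing beyond the sample)

print(audit(motion_light_app, never_touches_lock))  # True
```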

Even if we assume that sufficiently sophisticated automated testing can be conducted, and even if this testing can be done while the app is live:

The assertions of an arbitrarily complex app must be of the same magnitude of complexity as the app itself, and are thus impossible to express in any practical manner for the user to understand and use to make an informed decision on whether to grant or deny the authorization.

Note that I’m talking about the new ST API not the legacy Groovy-based system.

I think the manual review is not useful anymore, since apps can be programmed in any language chosen by the developers and executed on third-party servers; thus developers can completely change their apps’ logic even after passing the review and being installed by users. Am I correct?

In addition, ST has not yet defined an app review system for its new API.

The manual review was of very limited value anyway. I doubt SmartThings ever put in any effort to try to catch anything other than the most very, very obvious security or performance issues.

Yes.

Correct. There is no review process in place for new API SmartApps yet.

  • I doubt they will ever bother to look at developer’s code, even for certification / publication.
  • Thus the analogy is less like a Smartphone App, and more like a website.
  • Once you grant permission for a website to access your camera, pop-up notifications, etc., etc., you cannot control what or when that website will use those authorizations.

I don’t see any possibility within the next 5 years of SmartThings making any attempt to limit the power of authorizations in the way your scenario describes.

If a user grants lock/unlock access to their door lock to a particular developer/vendor via a SmartApp, then the user must trust that the vendor will not abuse that access. There is no practical way to make this more secure without increasing user complexity beyond comprehension. And this is no different from Alexa and Google Assistant: they initially solved the problem by simply denying all access to devices that claim to be “locks” (for example), and then got pushback from customers who willingly trade security for convenience. This is a very reasonable trade.
