The End of Groovy Has Arrived

We can automate our homes, but the idea of simplifying how you find an Edge driver gets resistance?

I am up to date on the threads requesting drivers and how to find them but I am thinking long term.

Users are going to have to sort through thousands of posts, and it is not like the old Groovy DTHs where you could read the code to see if your device was going to work. With Edge drivers you will have to find the one post that says your device is included in the driver. A wiki is OK, but something central and searchable would be ideal.


Agreed! Would you be willing to create such a thing?

The only thing I know how to build something like that in is Microsoft Access, LOL. We used to use it all the time for custom programs at work.


Too little too late. Already moving to Home Assistant.


Are HomeSeer scene control switches supported (the DTH and the ABC app)?
Will WebCore be supported?

If I understand your question:

All custom groovy smartapps, including Webcore and Advanced Button Controller, will stop working when the groovy cloud is shut down. See the following discussion thread:

Replace Groovy with Automations—what’s your plan?

Homeseer switches will work with basic functionality if you let them go through the automatic transition.

HomeSeer support has told some people that they are working on Edge drivers with advanced functionality for at least the 200- and 300-series devices, but no specific timeline has been given.

Contact Homeseer support for more information, or information about other models.

Hi, @martin.borg

Sorry for the late response. I was looking into this over the past few days to see what I could find. So far there are just a few answers I can give you on this matter without any doubt:

  • We can’t guarantee that a non-WWST device will preserve all of its functionality
  • We can guarantee that all devices currently matching a non-custom DTH will match an Edge Driver after the migration
  • Not all fingerprints have been added to every Edge Driver so far

Additionally, I can say some more things, but without any promises. Setting aside some exceptions, it is most probable (but not 100% certain) that all devices matching a specific DTH will match their corresponding Edge Driver. Together with the last bullet above, this suggests there is a high chance that your device’s fingerprint will be added to the zwave-window-treatment driver sometime in the future.

I am still trying to find out more about this. I will let you know as soon as I have some news.


Is anyone else concerned about the security of community-based Edge drivers? With Groovy you could inspect the code for any malicious intent. Now a developer could inject a Trojan horse and no one would know. Hoping someone has comforting news on this topic.


There have been some staff comments up thread about this issue, and some in one of the other announcement threads. Basically, they think they’ve got it sandboxed, but I don’t know the details. :thinking:

This one has me quite concerned @JDRoberts… Like to the same level of the VSwitch without a hub issue.

Two words. Legal liability.

Who is responsible for the damage if a bad actor injects code into an Edge driver?

I would put that question to Samsung legal (I’m almost positive I already know their answer: use at your own risk, no responsibility for non-Samsung-provided code… etc.)

In which case I can claim I do not have the ability to inspect the code. So…

Who is liable?

I won’t run unsupportable code. You know that. Legality is part of supportability. See where I’m going with it?


The terms and conditions you acknowledge when using SmartThings as a whole say they are not responsible for any losses or damages resulting from using their product. It’s not a life-saving device, should not be used as one, etc.

So regardless of whether it’s Samsung or community drivers, you’ve waived any right to seek compensation for bad code, product availability, or lack of support anyway.


I get what you’re saying, Corey. You are of course correct in that respect. I’m not thinking of myself in this case, or of accidentally switching something on or off… It’s more that if I can’t see it and KNOW I’m OK, or instead have someone backing up what they say with a warranty of some kind (open-source or closed-source model), I can’t use it.

Groovy SmartThings, Home Assistant, etc. are open source. I can see every bit of code; it is on me to check (and admittedly I check and verify less than I should, but the point here is that I CAN) and use it at my own risk. So I do. A LOT.

Let’s assume we soon see a pay-to-play RBoy Edge driver for Z-Wave locks. Closed source for obvious reasons, BUT if I’m paying for it, and RBoy puts money where their mouth is and supports their code if it’s broken or misbehaves, I’m still OK.

What I’m NOT OK with is trusting, without verification, that all driver code out there is non-malicious.

BTW, I would be OK with a voluntary link to GitHub to verify that the source code of the Lua driver matches the code at the published link. If ST wants to be a real community player: OFFER THE LINK to the code in the app UI. If someone wants to chase the link and read it, they can. It would only take one attribute of type string with a filtered URL… I would TOTALLY do that. It could be as simple as: “If you want your code open, offer the link in your driver registration, but it MUST be formatted as X and posted unchanged, or else your driver won’t install. And if you want it closed… don’t.”
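A minimal sketch of the check such a published link would enable, in Python rather than Lua (the file contents below are invented, and no such verification attribute actually exists in SmartThings today):

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a blob."""
    return hashlib.sha256(data).hexdigest()


def sources_match(published: bytes, installed: bytes) -> bool:
    """True only if the driver source fetched from the published link
    is byte-identical to the copy actually packaged for the hub."""
    return sha256_hex(published) == sha256_hex(installed)


# Hypothetical usage: fetch the file at the registered GitHub URL and
# compare it against the packaged source before trusting the channel copy.
published_copy = b'local capabilities = require "st.capabilities"\n'
installed_copy = b'local capabilities = require "st.capabilities"\n'
print(sources_match(published_copy, installed_copy))                  # True
print(sources_match(published_copy, installed_copy + b"-- extra"))    # False
```

Of course, a digest match only proves the two copies are identical; it says nothing about whether the code is benign, so someone would still have to actually read the published source.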


Can you share any idea of how the Lua code could be malicious?

As I understand it, the drivers can only acquire information from, or act on, devices inside your networks (Zigbee, Z-Wave, or LAN); they do not communicate outside of that, only with the Hub and the SmartThings Cloud.

To reach anything outside of that, you need a SmartApp and authentication (a token), but those are not drivers.

(I have some ideas for malicious code, such as searching for unprotected network drives and wiping them, or turning switches on and off at a high rate, though that would hit rate limits. Maybe automatically opening a door lock at a specified time, but that would show up in the logs…)

Otherwise I love when people go through parity comparisons.

My favorite is this topic from Reddit:


I think they will get something like this as the ecosystem matures. Or signed drivers that you can verify match what’s installed.
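The signed-driver idea could work roughly like the sketch below. Real code signing would use an asymmetric key pair (users verify with a public key that only the channel owner’s private key can produce signatures for); the HMAC here is just a dependency-free stand-in to illustrate the verify-before-trust flow, and all names are invented:

```python
import hashlib
import hmac


def sign_driver(package: bytes, key: bytes) -> str:
    """Produce a MAC over the packaged driver bytes. Real code signing
    would use an asymmetric key pair instead of a shared secret."""
    return hmac.new(key, package, hashlib.sha256).hexdigest()


def verify_driver(package: bytes, key: bytes, signature: str) -> bool:
    """Constant-time check that the installed package matches the signature."""
    expected = sign_driver(package, key)
    return hmac.compare_digest(expected, signature)


key = b"channel-owner-secret"       # hypothetical signing key
pkg = b"packaged driver bytes..."   # hypothetical driver package
sig = sign_driver(pkg, key)
print(verify_driver(pkg, key, sig))          # True
print(verify_driver(pkg + b"x", key, sig))   # False: tampered package
```

The point is that the hub (or the app) could refuse to run anything whose signature does not verify, so a tampered copy would be rejected even if the channel were compromised.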

When going to Hubitat I missed the fact I could look up the source for most of the built in drivers. HE’s stock drivers are all closed source which sometimes makes debugging more difficult.

Since I rely on my memory for tracking conversations (I use a voice reader, so it’s quite difficult for me to go back and scan old threads) and there have been SO many duplicative conversations lately, it’s getting quite difficult for me to keep track right now. My problem, but just wanted to mention it as I may miss more stuff than usual.

Anyway, there has been a long-running (more than a year) conversation in the developers’ section of the forum on security and closed-source releases, including some staff comments.

@nathancu @csstup @jlv


Several possibilities.

  1. Standard denial of service attacks, such as malicious hackers have done against Nest previously. Anything which causes the hub to become unusable after a given time. Done for all the usual hacker reasons. The attack itself is the goal.

  2. “Malicious ex” attacks against a specific target. These are typically attacks against locks and security features, but can also just be blinking lights and setting off sirens. These can be done remotely for psychological effect or followed up with a physical incursion.

  3. “Ocean’s 11” attacks against a specific geographic area. These will be followed up by a physical incursion, usually for purposes of theft.

  4. “LAN Takeover” attacks. These are done for the purpose of getting remote access to other devices on the same LAN. The motive is often financial, including ransomware.

  5. “Peeping Tom” attacks aimed at camera takeovers.

How big a threat any of these is to any individual household varies a lot. Most are at worst minor annoyances, some are real. Malicious ex attacks, where you are the individual target of someone who already knows the details of your household, are probably the highest damage risk, but you probably already know what the likelihood of that is.


So why are these things opaque? I assume the Lua script(s) are compiled to bytecode, correct? If so, when and where does this happen? Again, I am assuming that if a driver is installed via the CLI, the bytecode conversion is handled by it?

But there is this “Channel” installation method (which I don’t fully understand), where you register your hub from a website and the transfer “just happens.” Once again, I am assuming what is transferred has already been compiled?

So the channel method is easier on the consumer, but if the dev makes their Lua source available, you could inspect it before installing via the CLI… right?

Or have I got this completely wrong? )c:

Edge drivers run on your own hub.

In order to get them onto the hub, you have to download them from a “channel” in the SmartThings cloud.

In order to do that download, you will follow a link that the channel owner gives you that takes you to a webpage which accesses the SmartThings cloud.

You sign into your SmartThings account on that webpage.

It then displays the channel details. You can subscribe to the channel and then select the individual drivers that you want downloaded to your hub. (The download process can reportedly take as much as 12 hours, but it’s usually pretty quick.)

Once the driver has been downloaded to your hub, you will be able to see its presence through the SmartThings app. But you don’t get to see the actual code.

In order for a developer to get a channel in the SmartThings cloud, they have to go through some paperwork and sign up. But it’s not like an app in the Apple App Store: they are not required to have the code reviewed for malicious intent before it becomes publicly available.

So at no point does the code have to be available for the end user’s inspection. And even if there is code in GitHub that the channel owner says is what will be downloaded to your hub, there’s no guarantee that it will be the same.

Which is pretty much the same as apps on the Google Play Store, if I understand that process. There are promises, but no guarantees that the promises will be kept. :thinking:

The Lua interpreter running on the hub does indeed execute the bytecode form. The CLI does not compile to bytecode as part of the upload/install. The compilation appears to happen in the cloud (it binds to the rest of the framework during that step), and the bytecode appears to be downloaded to the hub for later execution. I believe this is correct.
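For what it’s worth, Lua’s pipeline is analogous to Python’s: source text is compiled to bytecode that a virtual machine executes, and an end user who only receives the bytecode has far less to audit. A Python (not Lua) illustration of that source-versus-bytecode distinction:

```python
import dis

# Source text, as a developer (or auditor) would read it.
source = "def greet(name):\n    return 'hello ' + name\n"

code_obj = compile(source, "<driver>", "exec")  # source -> bytecode (code object)
namespace = {}
exec(code_obj, namespace)                       # the VM runs the bytecode
print(namespace["greet"]("hub"))                # prints: hello hub

# What a bytecode-only recipient would have to reverse-engineer to audit it:
dis.dis(namespace["greet"])
```

The disassembly is recoverable, but reading raw VM instructions is a far cry from reviewing the original source, which is why the “publish the source link” suggestions upthread matter.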