This would be an interesting thread where insider knowledge would be beneficial. I’d love to hear what @slagle and @jody.albritton could bring in from the ST engineers.
Just those two come to mind. I’m sure there are others. I have 4 Wemos and 6 Smart Outlets. Live logging shows lots of messages from them (constantly). All run using the stock DH and apps (Wemo Connect).
I took my Wemos down a long time ago because they were killing my routines. I do have 10 outlets, and while they are noisy, I really don’t know if they would make a difference.
Other than being chatty, I’ve had no issues with them. As for the Smart Outlets, the power usage should be reported at less frequent intervals (IMO).
I just did some testing of my own between CoRE and Smart Lighting.
CoRE was anywhere between 30-45% faster than Smart Lights to trigger an identical rule: turn on 3 Hue lights when any of 2 motion detectors in the same room detected motion, then turn them off once motion ceased on both.
Not only did CoRE win the challenge in terms of speed to execute the routines, but in 1 of the tests with Smart Lights, the “turn off” portion of the routine failed for 1 of the 3 lights, and I had to manually toggle the light off.
Testing was done in an ABAB fashion, with identical network conditions over the course of about 5 minutes. To stop the rules from interfering with each other, they were disabled on CoRE when testing Smart Lights, and I just straight up removed the Smart Lights rule when testing CoRE.
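For reference, the rule under test is simple enough to sketch as a bare-bones Groovy SmartApp. This is a hypothetical sketch only, not how Smart Lighting or CoRE actually implement it internally, and the device and handler names here are made up:

```groovy
definition(
    name: "Motion Lights Test Rule",
    namespace: "example",
    author: "example",
    description: "Turn lights on when any motion sensor is active; off when all are inactive.",
    category: "Convenience",
    iconUrl: "", iconX2Url: "")

preferences {
    section("Devices") {
        input "motions", "capability.motionSensor", title: "Motion sensors", multiple: true
        input "lights", "capability.switch", title: "Lights", multiple: true
    }
}

def installed() { initialize() }
def updated() { unsubscribe(); initialize() }

def initialize() {
    subscribe(motions, "motion.active", motionActiveHandler)
    subscribe(motions, "motion.inactive", motionInactiveHandler)
}

def motionActiveHandler(evt) {
    // Any sensor going active turns everything on
    lights.each { it.on() }
}

def motionInactiveHandler(evt) {
    // Only turn off once every sensor reports inactive
    if (motions.every { it.currentMotion == "inactive" }) {
        lights.each { it.off() }
    }
}
```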
Literally the only part of Smart Lights that was better was the rule design interface. I could create the rule I wanted in about 15 seconds using Smart Lights, whereas CoRE’s UI is much slower.
That UI speed is the perfect example of why a dedicated app should be faster than a do-it-all app. Being optimized for lights only, you’re skipping a lot of selections that CoRE has to take you through. Also, CoRE has to submit on every change since the input is dynamic based on the capability selected. CoRE performance also seems to depend on the shard your hub is running on, with NA01 being the slowest, while the UK shard performs much better.
I’ll vote Trump so I can have a reason to move to the UK.
NA01 is a cloud processing cluster of ST’s, I presume? Is one able to determine where they are running?
If going to ide.smartthings.com does not redirect you elsewhere, you presumably are on NA01.
I’ll throw in my two cents on this…
- I use Smart Lighting for anything I can get to run local.
- I still have 7 Rule Machine rules… just because they have never failed me… why change?
- I use “Keep Me Cozy” to run my thermostats, exclusively.
- For almost everything else I use CoRE.
- I also have scenarios which are a combination of CoRE and Smart Lighting… SL for the local parts, CoRE for the other parts.
- I use Smart Lighting local for one reason only… and it’s not the speed… I have a Minimote programmed locally to control things when the cloud is dead.
I like to build extremely complex and intricate scenarios, at first using Rule Machine, and now using CoRE. I do this to test the system. I try to find the weak spots, the failure points, the lags, what does what.
After almost a year with ST this is what I’ve found…
Regardless of the SmartApp, local or not… the more complex the scenario and the more devices included, the slower it will run.
I keep my CoRE pistons rather simple and they mostly serve single purposes. Where speed is not a requirement I go more complex.
I have a piston that sends an SMS when an outlet is not on at a certain time of day (it’s a power outage monitor for my freezer). That piston is almost instant.
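Hand-rolled as a classic Groovy SmartApp, that piston would only be a few lines. A minimal sketch, assuming a daily scheduled check; the input names and message text are made up, and the definition() metadata is omitted:

```groovy
preferences {
    section("Power outage monitor") {
        input "outlet", "capability.switch", title: "Freezer outlet"
        input "phone", "phone", title: "Phone number for SMS"
        input "checkTime", "time", title: "Time of day to check"
    }
}

def installed() { initialize() }
def updated() { unschedule(); initialize() }

def initialize() {
    // Fires once a day at the configured time
    schedule(checkTime, checkOutlet)
}

def checkOutlet() {
    if (outlet.currentSwitch != "on") {
        sendSms(phone, "Freezer outlet is not on, possible power outage")
    }
}
```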
I have another piston that is my goodnight routine… It does all kinds of stuff and is my largest piston. That thing sometimes takes up to 10 seconds to complete.
And if you’re using a third party… Hue… It’s going to be much slower.
Remember, an elephant is slower than a leopard, but the elephant can move the tree the leopard can only climb.
I don’t think there is any good way to test this. A couple of methods I could guess at, both flawed.
Using Logs - The issue here is that the logs are not consistent. You can’t depend on their timing.
Stopwatch - You would need to be very accurate, and you’d have to switch from a motion sensor to an open/close sensor or a switch so you could trigger the rule at a precise moment.
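For what it’s worth, if you did go the log route, a latency probe could at least put rough numbers on it, subject to the timing inconsistency above. A hypothetical sketch; `motions`, `lights`, and the handler names are assumptions:

```groovy
def initialize() {
    // Record when the trigger fires, then measure how long until
    // the rule produces the resulting switch event.
    subscribe(motions, "motion.active", triggerHandler)
    subscribe(lights, "switch.on", resultHandler)
}

def triggerHandler(evt) {
    state.triggerTime = now()
}

def resultHandler(evt) {
    if (state.triggerTime) {
        log.debug "Rule latency: ${now() - state.triggerTime} ms"
        state.triggerTime = null
    }
}
```

Of course this only measures when the cloud sees the events, not when the light physically changes, which is exactly the flaw noted above.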
BTW - I hope you are right. That would mean that ST has done a lot to make the cloud faster versus local processing (or they moved Smart Lights back to the cloud).
I wouldn’t make the assumption that the Smart Lighting instance was running local. On the contrary, I would rather assume that the test instance was running in the cloud.
I’d rather not assume. It doesn’t really matter to me, but they were attempting to have a fact-based discussion. Can’t make assumptions. Either way, my point was it wasn’t a valid test.
My CoRE is faster than your CoRE!
Mine runs local! Response times are 1-2 ms! It’s faster than Smart Lighting and dedicated apps! And I’m full of $hit!
I installed my CoRE on a roomba that is hard wired to Alexa and now it predicts what I want and does it before I ask! So not only is it local, but it turned the light on a minute ago!
Prediction is the next step in automation I want to achieve. I believe CoRE can do it.
True. The only valid test for me is whether the hallway lights come on before I walk through. That’s why I left Wink. There is no point in automating the lights for my shadow.
I left Wink because it turned the light on just as my fall down the stairs ended… I got to see where I was going to land!
Ultimately that’s the true test (provided it’s reliable, too). Unfortunately it’s not necessarily quantifiable. Anecdotally, this is the standard most people use, and if you notice a consistent, discernible difference visually between methods, then there probably is a difference.