UPDATE: Recent SmartThings User Experience & Platform Performance

Same here: false alarms even after the system had been disarmed. Too many double notifications, which is very annoying since I have BigTalker speaking all notifications…

1 Like

A power user, to me, in the context of the article, is anyone using a custom app or device handler.

Actually, I think this is what SmartThings uses to define a “Developer”… As in:

20,000 Developers

I am working with devs directly on this one. No need for extra reporting.

I am happy with more than 99% reliability.

In the last month I had only two failures with SHM:
1x, before the platform fix: it didn’t disarm and couldn’t be disarmed, so I removed the sirens.
1x, a day after the fix (sirens already removed): it just kept sending beeping push messages to my phone.

I have 70+ devices (mostly Zigbee) and 120+ rules (a lot of smart lighting).
29 devices run locally, and only 9 rules are local.
Some cloud-to-cloud: Ecobee3, Echo, Harmony, Nest Protect…
Many custom handlers and many custom SmartApps.

Everything works, except those two SHM failures.

How do you count 99% anyway? Because if you have 100 devices and one fails per day, that is still 99% “reliable,” yet annoying.
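The arithmetic behind that question can be made concrete. A minimal sketch (all numbers are illustrative, not SmartThings data) of how a per-device percentage hides how often the user actually experiences a failure:

```python
# Device-count "reliability" vs. how often a user actually sees a failure.
# Numbers are illustrative assumptions only.

def device_reliability(total_devices: int, failed_devices: int) -> float:
    """Fraction of devices that worked (the '99%' framing)."""
    return (total_devices - failed_devices) / total_devices

# 100 devices, 1 failing per day: 99% "reliable" by device count...
per_device = device_reliability(100, 1)

# ...but over a 30-day month the user still experiences 30 failures.
failures_per_month = 1 * 30

print(per_device)          # 0.99
print(failures_per_month)  # 30
```

The same system can look excellent by one metric and annoying by the other, which is exactly the point of the question above.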

1 Like

How? Well, I will quote JDRoberts

You removed functionality by disabling your sirens, a supported function of SHM, to achieve your perceived reliability. Isn’t the underlying system still broken? Is it not broken every day until they fix it?

For me… a smart home is a convenience and a hobby. If I have to describe it as annoying, it is an abject failure for my purposes.

1 Like

Who else, besides me (already labeled a “conspiracy theorist” :slight_smile:), has started wondering about the many recent posts of (and I’m being generous here) 95+% uptime? So let me throw it out there to be devoured: are they moles, plants, set-ups, whatever phrase you want to use, there to show the “reliability” of ST? No accusations. Just wondering out loud.

Walks like a duck, quacks like a duck, must be a duck.

J

1 Like

I don’t think it’s a conspiracy, I just think it’s people using different ways to define “reliability.”

As an engineer, I use a Six Sigma MFOP definition for consumer goods: maintenance-free operating period, defined in days. That’s pretty typical for consumer products.

So for a dishwasher or a toaster or a streaming media box, or a home security system, an MFOP of 95% would mean the consumer didn’t have to do any maintenance on it for 95 days out of 100. And that would be considered a pretty low quality product.

If the toaster went out of service once a year, that would be 99.7% reliability (364÷365). Still a little lower than what you’re looking for, but closer to acceptable.

A typical consumer product has a two-year maintenance-free operating period, or 99.9% reliability (729÷730).

Everything gets a little more complicated if a product has regularly scheduled maintenance, like an automobile, so some calculations won’t include those as long as they are only required on the schedule that is available at the time of purchase. Like I said, there are a lot of different ways to look at reliability.

But in terms of things that consumers buy, like thermostats, or smart phones, or streaming media players, that don’t have required scheduled maintenance, it’s almost always done in terms of days without problems. Not successful transactions. Because each of these could easily have more than 100 transactions a day. But if they fail once a day, every day, consumers will rightly regard them as very low-quality.

I do see a lot of people who try to calculate reliability as a percentage of transactions completed, but that’s not likely to correlate well with consumer satisfaction.
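The day-based figures quoted above can be sketched in a few lines. This is a simplification of the MFOP idea (the actual Six Sigma methodology is more involved), reproducing the 364÷365 and 729÷730 numbers from the post:

```python
# Day-based (MFOP-style) reliability: fraction of days in the operating
# period with no failure or maintenance event. A simplified sketch of the
# figures quoted above, not the full Six Sigma methodology.

def mfop_reliability(period_days: int, failure_days: int) -> float:
    """Fraction of days the product ran maintenance-free."""
    return (period_days - failure_days) / period_days

toaster_yearly = mfop_reliability(365, 1)  # one outage a year -> ~99.7%
two_year_mfop = mfop_reliability(730, 1)   # one outage in two years -> ~99.9%

print(round(toaster_yearly * 100, 1))  # 99.7
print(round(two_year_mfop * 100, 1))   # 99.9
```

Contrast this with a transaction-based percentage: a device failing once a day scores 0% by the day-based measure, even if it completes 99% of its transactions.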

FWIW.

3 Likes

Most cloud platforms have to adopt a different style of calculating reliability as well.

Here’s a good article on how Google does SRE.

I read this from time to time as a refresher.

1 Like

There are some good points in the article, but I would argue that it’s still one that will be easily misread by people who think you just calculate a percentage of the successful events. And that’s not what the author is actually talking about.

There’s a huge difference in customer satisfaction between a cloud service which occasionally runs a little slow and a cloud service which occasionally fails altogether. And a lot of what the author is talking about as service failures are really QOS issues (quality of service) where the end-user is able to complete their task, but just not quite as smoothly as the set expectations.

I do agree very much with this:

If 100% is the wrong reliability target for a system, what, then, is the right reliability target for the system? I propose that’s a product question. It’s not a technical question at all. It’s a question of what will the users be happy with, given how much they’re paying, whether it’s direct or indirect, and what their alternatives are

In fact, I think that’s the key to any measure of reliability that you choose for a consumer-facing product/service. How unhappy are the users going to be? What alternatives will they have if there is a system failure, or at least a QOS degradation?

If you’re standing there in the dark and a siren is going off and there’s no way to quiet it except taking the batteries out, the customer is going to be much more unhappy than if a light that was supposed to come on 95% of the time is only coming on 90% of the time. Both are system failures, but with a very different degree of customer dissatisfaction for most customers.

I’m also not sure the article is really talking about cloud services as being distinct. I would tend to argue that instead it’s discussing products/services which have variable QOS. I agree that that’s true of most cloud services, but I don’t think it’s the cloud that’s as important as the variability.

But what do I know? :wink:

The main thing is that there are a lot of different ways of measuring reliability, but it’s important for every organization to make sure that the measurements they are using are also predictive of customer satisfaction for each specific market group.
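The SRE framing quoted earlier (pick a reliability target as a product decision, not 100%) is usually expressed as an SLO with an “error budget.” A minimal sketch with illustrative numbers, assuming a simple uptime-style SLO:

```python
# SRE-style error budget: whatever falls short of the SLO target is the
# downtime you are "allowed" per period. Illustrative numbers only.

def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of downtime allowed per period without breaking the SLO."""
    total_minutes = period_days * 24 * 60
    return (1 - slo) * total_minutes

print(error_budget_minutes(0.999))  # ~43.2 min/month at a 99.9% SLO
print(error_budget_minutes(0.99))   # ~432 min/month at a 99% SLO
```

The gap between those two budgets illustrates the thread’s argument: a “99% reliable” cloud service still leaves room for roughly seven hours of trouble a month, which users of a security system would notice.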

3 Likes

Since the update, I have had a very high reliability percentage.

My SHM failed to disarm, I believe, 1 time. Not a major issue for me as I don’t have a siren. I just get a text and have to acknowledge the intrusion.
I have a device that’s acting up, but I am 100% sure it’s the device and not ST.
I only use 1 routine and it’s run via Rule Machine when I arrive, when I leave, when I close my door at night, when I open my door in the morning, and on-demand via SharpTools widget on my phone.

No more headless Rule Machine rules having issues.
All cloud connected services are working (Alexa, IFTTT, Harmony, Hue). SmartTiles dashboard is responsive and accurate.
GCal is working 100% since adding my GCal items to Pollster.
My SHM is mostly functional (outside of that one instance where it failed to disarm, despite my mode changing to Home via presence). It changes from Armed (Away) to Armed (Stay) when using Thinking Cleaner.

Temps are accurate. Motion is accurate (I can even tell where my Roomba is from my work desk just by watching my SmartTiles dashboard).

3 Likes

My system is 100% reliable. I know that those who want to beat on SmartThings do not believe me.

I have 55 physical devices. I have many simulated devices. I use Rule Machine, Blink camera manager, nest manager, and many other smart apps.

I am happy and I am continuing to expand my system.

I hope @bravenel comes back and updates his wonderful tool!

Agree with this 100%.

I will add that when you sell a security system that is professionally monitored, the reliability requirement necessarily skyrockets. We aren’t talking about the convenience of lights being manipulated anymore. We’re not talking about a mere intrusion-warning device. We’re talking about a system designed and sold as something consumers understand to be a device, a system, a service they are using to protect the lives of their family and the contents of their homes.

3 Likes

Which belt are you?

I base my reliability rate upon my satisfaction rate… Today, my system is 205% reliable… Tomorrow… It will probably be different… Lol

1 Like

What is “Thinking Cleaner” ?

I don’t want to speak for @bravenel, but I am pretty sure that the reason he left is that he is one of the original group… you know… the ones that want to beat on Smartthings. What a meanie.

Thinking Cleaner is an add-on for a Roomba that allows it to operate on your wireless network.
It can then be integrated into ST (or operated via IFTTT using the Maker channel).

2 Likes

Whining whiner. Wait, am I in the right thread? :slight_smile:

2 Likes

I don’t consider Bruce one of those people. When he left, the database was unstable. I don’t doubt that, and @alex admitted so.

The system updates have fixed a majority of those issues. I am sure there is some messed-up data that needs to be set up again. But the stuff is working, and the database problems that wrecked Rule Machine data are gone.

The ones I am talking about are the ones who scream and don’t read or believe posters like me who have a stable system.

There are problems with SHM; those make up the majority of complaints now. I am sure the SmartThings team will get that resolved soon. They fixed the database issues…

1 Like

So how is it that, despite all that, you have 100% system stability?

Did you not have SmartThings throughout that period? Did you only start measuring after they resolved it all?