Frustrated by the state of things

…a new feature to disable previously working features?

3 Likes

Used to be I’d agree with you, but now, facing the reality of it, I’d be more inclined to say that the introduction of bugs to old functionality is never intentional, hard to detect, and sometimes hard to fix. You can go on all you like about regression testing and all of that, but you’re dealing with an untestable system (why that is, is a separate topic). It isn’t possible to test a covering set of functionality with any confidence that it’s the right covering set; picking that set is essentially an NP-complete problem.

By that I’m referring to testing an ST-type system: asynchronous, event-driven, distributed, complex…

Sure… but the use of an “in-the-wild” stream of volunteer Beta testers who are a substantial (but fractional) subset of Customers would significantly increase the probability of discovering and resolving bugs before a full rollout. Even a week of pre-release in this Beta stream would subject the system to a wide range of diverse, effectively random testing, proportionate to the sample size used.
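To put rough numbers on that (the probabilities and the little p_detect helper below are purely illustrative assumptions on my part, not anything ST has published): if a given bug has probability p of being hit by any one Beta tester during the pre-release week, and testers behave more or less independently, then the chance that at least one of n testers hits it is 1 - (1 - p)^n. A quick sketch:

```python
# Back-of-the-envelope only: chance that at least one of n independent
# Beta testers trips over a bug during the pre-release window.
# The p values are illustrative assumptions, not measured ST data.

def p_detect(p: float, n: int) -> float:
    """P(at least one of n testers triggers the bug) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

for p in (0.10, 0.02):            # an "obvious" bug vs. a moderately rare one
    for n in (50, 200, 1000):     # size of the volunteer Beta pool
        print(f"p={p:.2f}, n={n:4d} -> detection chance {p_detect(p, n):.3f}")
```

For the universal, reproducible kind of bug I’m talking about, p is high and even a small Beta pool catches it almost surely.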

I’ve done pre- and post-installation testing for sensor nets with all of the above characteristics.

You can’t catch everything, but you can catch a lot. Use-case-based regression testing would have caught a number of the issues that we’ve seen come through from ST in the last three months.

So I can’t give them a pass on that basis.

Take the OAuth issue for the UK. Universal, reproducible, obvious, and for an advertised feature.

If they launched not knowing the problem existed, that’s one testing philosophy.

If they launched knowing it existed, that’s another.

Either way, they killed the fish.

Mathematically, there is zero confidence behind your statement that it “would significantly increase the probability”. If the probability is very low to begin with (due to complexity), then throwing more testing at it isn’t going to move the needle. It’s the same dilemma the company faces in the first place: how much testing is “worth it”? Obviously, they could go bankrupt testing and still not have come close to having confidence that it all works. Tough management call, to say the least.
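To make the “won’t move the needle” point concrete (again, made-up numbers purely for illustration): for a subtle timing- or load-dependent bug, the per-tester chance of hitting it in a week might be tiny, and then the same 1 - (1 - p)^n arithmetic barely budges no matter how many testers you add:

```python
# Same arithmetic, but for a bug triggered only by a rare combination of
# devices, timing, and load. p here is a made-up "very low to begin with" value.
p = 1e-5                          # assumed chance any one tester hits it in a week
for n in (200, 2000, 20000):
    print(f"n={n:5d} -> detection chance {1.0 - (1.0 - p) ** n:.4f}")
# Roughly 0.002, 0.020, 0.181 -- a hundred times the testers, still probably missed.
```

Whether real bugs look more like the obvious case or like this one is exactly the judgment call I’m describing.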

So you spend however much it is on testing, you deploy when you reach some level of confidence, and then a storm descends on you from things you missed. Fun. What do you change the next time? More testing? Everyone here thinks they know the answer – they don’t.

1 Like

I totally disagree that some in-the-wild Beta testing couldn’t make a big difference.

As @JDRoberts points out (and I agree), a significant number of the bugs we discover post-release are ones that were, well, extremely obvious and easy to discover post-release.

It only took a few days after release for this bug to be discovered. It could have been fixed before the release, instead of remaining open for 23 days … and counting.

I didn’t say that it “couldn’t make a big difference”. All I said was that mathematically there is no way to know if you are right or wrong. You may well be right. The problem that ST management faces is still the one I describe: It’s a judgment call whether or not to do something like a beta testing program. You have your opinion about it, and evidently someone in ST management has a different opinion.

It is impossible to say who’s right, you or ST. But, we don’t have to, do we? :grinning:

Oh that’s easy … I’m always right. :angel:

4 Likes

Not one of the press reviewers actually uses ANY of the cloud-based HA systems. They play with it for a couple of days at their office and write their puff piece.

We don’t have real journalists out there anymore; they’re all bloggers who simply write what they are told to write.

4 Likes

Yup… It’s a lot easier to copy a press release than actually research and test.

This explains why all the gadget crowdfunding campaigns have dozens of media logos in their description sections, and all those publications write the same thing in the same tone:

This gadget will change our lives and is shipping next month. It’s available for “pre-order” today on Kickstarter / IndieGogo!

Always neglecting to mention that crowdfunding is not a store, that only prototypes have been built, and that similar gadgets are a year behind schedule…

2 Likes

And that you are going to get your product in 6-12 months. And that it doesn’t guarantee a good-quality product. And that you most likely won’t find good support for it. And that there are definitely going to be delays in production. And… well, I could go on all day on this subject.

2 Likes

Agreed.
I have backed around 2 dozen projects on Kickstarter and Indiegogo and so far only 3 have shipped on time (hoping for a 4th to make it this month!). But @tgauchat is right about crowdfunding not being a store (despite the fact that a lot of the projects these days are treating it that way).

1 Like

I knew I jinxed it yesterday by saying my stuff was working. My heat failed to turn off again this morning. What a waste, running all day with nobody home. I just can’t rely on this junk product.

1 Like

At this point, it’s not a set-and-forget system.
I still check up on my stuff once I get to work to make sure my mode changed and everything turned off accordingly. SmartTiles makes it easier, at least.

I can’t check the status of my AC because it’s controlled by a ZXT-120, an IR sender with no feedback. So there’s no way to know if it worked.

If I have to go check it every day, I might as well just toss SmartThings and simply push the button every morning. Home “Automation” my buttocks… :wink:

Alexa is responding fine this evening, but my hub is ignoring commands, even from the app. Guess I will try a reset.

1 Like

My Hub V1 is working just fine. I had a very momentary outage (a couple minutes?). No manual reboot or any other intervention required.

In other words: Being an early adopter of a piece of hardware from an unreliable company isn’t a great consumer decision.

This is not directed at you, Ron, but rather at the hundreds of people who, day in and day out, endlessly begged for the release of Hub V2 (1817 posts in just one of the Hub V2 begging Topics…):


And later in the same Topic I wrote:

May 18, 2015…

I would rather heartily recommend patience though, even after release: Don’t rush to be an “early adopter” of the new platform.

If you’ve got time to be a Beta tester (or post-Beta … Gamma (?!) tester), then early adoption is a great service to the Community and hopefully you have a smooth experience with personal benefits.

But there will be a period of time after Hub V2 is in the wild during which it could be less stable than Hub V1 (gasp!). Existing Hub V1 customers will have to go through a migration process to Hub V2, and it would be sad to see a lot of folks jump into this, followed by a rash of Community postings saying “Help! I wish I could roll back!!!”.

:crying_cat_face:


BTW: I’m always right. :wink:

2 Likes

Had not even a momentary outage on the locally processed automations using the v2 hub. Can your v1 do that?

2 Likes

Nope…

But… I’ve not had to spend the hours I would need to migrate from Hub V1 to V2, nor the hours I know you have spent figuring out how to debug the poltergeist behaviors caused by Smart Lighting’s local vs. cloud execution quirks. I haven’t touched Smart Lighting. Don’t want it … and without Hub V2, I’d get very little benefit from it, so I’m glad I’m avoiding the complication.

My Hub V1-based SmartThings overall outage rate is about the same as, or better than, it was before Hub V2 was released.

2 Likes

Oh c’mon, you will have a migration tool in a few weeks, so that won’t be a problem. And I know you like to pull all-nighters to test things, so why not join the real fun? You may even be selected among the fortunate few who get to volunteer their time to an elite group of beta testers :smiley:

Oh Bobby … I hope we meet one day so we can have an entire day of mutual laughter! :laughing:

You forgot the little “TM” marker after “in a few weeks”! The migration tool was explicitly and without qualification promised by SmartThings’s Twitter representative, as well as by the CEO himself: “By the End of The Year”. … Does that need a “TM” too?