I would expect this has been asked before, but I couldn't find it in my searches, so apologies if it is there somewhere.
First to say, I have a reasonably complex ST environment (100+ devices, sensors, webCoRE, etc.), so I have a reasonable understanding of much of the tech and the app, but I just can't find what I want.
Is there any way via ST or webCoRE (or another add-on) to interact with ST? (I suspect the answer is no, unless Tasker or something does it.)
What I mean is, for example, if I go into the home theatre room and start to watch something, I would like a prompt asking whether I want some other action done, such as turning off other lights or equipment.
Or if I go to my gym (which is in an outbuilding), do I want the house alarmed, lights changed, etc., or not? My choice, depending on how long I will be there, or whether I'm just passing through it to another room.
I know all of these can be done automatically without choices, but in some instances the variables are just too complicated, or I simply want to be asked what I want to do.
So I want ST to be intelligent enough to say "I know you are doing ABC, so tell me if you want me to do XYZ." It's that prompt/question/choices piece I have no idea about.
Surely someone has tried this before? Or has it been looked into and is just not possible? Is there a smart app?
Sounds like you are looking for intelligence more than automation. There has been some work in this area from other companies (Amazon recently added a feature called hunches to Alexa), but ST doesn’t have this kind of interactive interface.
As @TonyFleisher mentioned, all of the big corporate AI voice assistants are working towards this as a general concept (Siri's "suggestions" are pretty good now), so Samsung may eventually provide it through Bixby, although "eventually" may be 3 or 4 years away.
Meanwhile, I’ve tagged a few people who may know what’s possible with ST today, although like the Jarvis project, setup may be somewhat complex.
And for those who aren’t familiar with the new proactive Siri suggestions in iOS 12, check out this article. Many people find themselves using them without even realizing that it was Siri who suggested the action.
How do you want to be prompted? In your Home Theater example, do you have a wall mounted tablet or voice assistant or some other interface already in place?
As you alluded to, some people are doing this already with Tasker with the help of SharpTools for Android. Most of the examples I've seen are relatively basic things like displaying/playing a message at 9 PM that the front door is still unlocked and prompting if the user wants to lock the door or leave it as is.
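The logic behind that 9 PM door example is simple enough to sketch. This is just a rough illustration of the flow, assuming hypothetical stand-in functions for the SharpTools device lookup and the Tasker prompt scene, not any real API:

```python
# Hedged sketch of the "9 PM, front door still unlocked" prompt flow.
# get_lock_state, prompt_user, and lock_door are hypothetical stand-ins
# for what SharpTools (device state) and Tasker (scene/notification) do.

def check_front_door(get_lock_state, prompt_user, lock_door):
    """At the scheduled time, warn if the door is unlocked and ask what to do."""
    if get_lock_state("Front Door") == "unlocked":
        choice = prompt_user("Front door is still unlocked. Lock it?",
                             options=["Lock it", "Leave it"])
        if choice == "Lock it":
            lock_door("Front Door")
        return choice
    return None

# Example run with canned stand-ins:
state = {"Front Door": "unlocked"}
result = check_front_door(
    get_lock_state=lambda name: state[name],
    prompt_user=lambda msg, options: options[0],  # pretend the user taps "Lock it"
    lock_door=lambda name: state.update({name: "locked"}),
)
print(result, state["Front Door"])  # Lock it locked
```

In the real setup, the schedule and the prompt UI are Tasker profiles/scenes, and the lock state and lock command go through SharpTools to SmartThings; the point is just that it's a simple check-then-ask branch, not anything exotic.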
That being said, the sky is the limit with what you can actually achieve with it… I’ve seen some really deeply integrated voice prompts and custom UIs built with Tasker scenes, so it’s all about how much effort you are willing to put into it if you want to go that route.
While having a truly smart system to infer what you want or even guess what to prompt you with sounds awesome, it sounds like with a few key pieces of data/triggers you could probably build the key parts of what you want.
Thanks Joshua, that's exactly it (the top graphic). Nothing too complex, but I can program ST to recognise certain situations and then prompt to perform certain additional actions. Great, I didn't know I could do it so neatly with Tasker. Thank you, I will research that approach.
To be clear, that’s done with Tasker + SharpTools. Both are third-party apps with paid license fees.
SharpTools is designed specifically to work as a bridge between SmartThings and Tasker (as well as other Android options). It's very popular in the community.
Thanks, yes, I have already paid for Tasker. I'm just looking at whether I want to pay for the SharpTools plugin, and I'm considering options. At the moment this is the favourite as the slickest approach.
I'm not sure how much of that video I actually believe. #1, it's from 2013. #2, all of the links in the video are dead.
You never know how much of it was pre-programmed versus actually taking prompts from his voice. Seems awfully far-fetched that it would be that intelligent. Google can’t even get half my commands right unless they are spot on.
It’s by DroidKC who was really into AutoVoice, Tasker, and Vera at the time (he’s still active on Reddit). From what I can tell, there’s not a lot of ‘intelligence’ in it per se…
But some of the things like support for one or multiple devices being controlled in a single command has been supported with AutoVoice since 2013. AutoVoice is effectively using Google’s voice to text algos and then AutoVoice and Tasker take over with a bit of special sauce for pattern matching and variable extraction. AutoVoice has some really nice features for things like synonyms and word substitution and when you really dig into it deep you can setup some really powerful stuff.
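The pattern-matching and variable-extraction step described above can be sketched with a plain regex. This is only a rough illustration of the idea, not AutoVoice's actual matching (which also handles synonyms, word substitution, and fuzzy matching on top of Google's speech-to-text):

```python
import re

# Rough sketch of voice-command pattern matching with variable extraction,
# in the spirit of what AutoVoice + Tasker do after speech-to-text runs.
# The command grammar here is invented for illustration.
COMMAND = re.compile(r"turn (?P<action>on|off) the (?P<devices>.+?)(?: lights?)?$")

def parse(spoken: str):
    """Return (action, [device names]) for a matching command, else None."""
    m = COMMAND.match(spoken.lower())
    if not m:
        return None
    # Split "kitchen and hallway" into individual device names,
    # covering the one-or-multiple-devices-per-command case.
    devices = [d.strip() for d in re.split(r",| and ", m.group("devices"))]
    return m.group("action"), devices

print(parse("turn off the kitchen and hallway lights"))  # ('off', ['kitchen', 'hallway'])
print(parse("turn on the porch light"))                  # ('on', ['porch'])
```

Once the variables are extracted, Tasker would route them to the matching SmartThings devices; the "special sauce" is mostly in making the patterns tolerant of how people actually phrase things.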
That being said, I’m sure he spent hours upon hours tweaking it just to cover the edge cases for his particular setup whereas with Google/Alexa it’s plug and play and works for most things you throw at it. And Google/Amazon have the monumental task of making something plug and play for everyone rather than a set of cases for a single apartment!
And a single voice. How many hours of custom programming did he invest, and how many tries did it take to get the video just right? What I'm saying is that people shouldn't expect that level of responsiveness from their own system unless they're willing to invest that kind of time.