Echo Speaks Examples

My typical use case for Echo Speaks is to create a virtual momentary button tile and an Alexa routine that waits for a trigger phrase and then pushes the button tile. A webCoRE piston waits for the momentary button tile to be pushed and then executes.

In the webCoRE piston, I typically execute one or more “Speak” commands, hardcoded to speak on one or more Alexa devices. Ideally, Echo Speaks would be able to target the speaker from which the routine was initiated.

Unfortunately, there is no reasonable way to detect which Echo the user spoke to in order to initiate the routine. This is of some concern because I have eight Echo devices located throughout my house, and ideally responses would go to the nearest speaker.

My workaround is to create a new virtual switch for each Echo device, with names like LOCATION_Kitchen_Show, LOCATION_Bedroom_Show, and LOCATION_Office_Dot for my Kitchen Echo Show, my Bedroom Echo Show, and my Office Echo Dot. I then create one Alexa routine for each Echo device, triggered by the phrases “Location Kitchen”, “Location Bedroom”, and “Location Office”. Each routine first presses a momentary button tile called “Location” and then turns on the appropriate switch. When the Location button is pressed, a Location piston is called to turn off all of the switches (LOCATION_Kitchen_Show, LOCATION_Bedroom_Show, LOCATION_Office_Dot, and so on). When that piston exits, the calling Alexa routine turns on the appropriate switch.

Each of my pistons that uses Echo Speaks examines the eight switches to see which one is on, and sets a piston variable that is then used in the “Speak” call.
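As a rough illustration of that per-piston check (this is not the author’s actual piston — webCoRE pistons are not written in Python, and only the three switch/device names listed above come from the post), the switch scan amounts to:

```python
# Illustrative sketch: scan the per-Echo "LOCATION_*" virtual switches
# and return the Echo device that "Speak" commands should target.
# Switch/device pairings follow the names given in the post.

SWITCH_TO_ECHO = {
    "LOCATION_Kitchen_Show": "Kitchen Echo Show",
    "LOCATION_Bedroom_Show": "Bedroom Echo Show",
    "LOCATION_Office_Dot": "Office Echo Dot",
    # ...one entry per Echo device (eight in total in the post)
}

def current_echo(switch_states, default="Kitchen Echo Show"):
    """Return the Echo mapped to whichever switch is on, else a default."""
    for switch, echo in SWITCH_TO_ECHO.items():
        if switch_states.get(switch) == "on":
            return echo
    return default  # no location set yet

states = {"LOCATION_Kitchen_Show": "off",
          "LOCATION_Bedroom_Show": "off",
          "LOCATION_Office_Dot": "on"}
print(current_echo(states))  # -> Office Echo Dot
```

The result of `current_echo` is what the piston variable would hold before the “Speak” command runs.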

For example, I have a routine called “Comfort Report” that gives inside and outside temperatures, relative humidities, and the thermostat mode and set point.

I simply say, “Alexa, Location Kitchen” and then “Alexa, give me a Comfort Report”, and the routine runs with its output in the kitchen. Note that all of my pistons that use this location logic speak in the kitchen until I issue a command such as “Alexa, Location Office”.

I know this is not pretty, but it works. Does anyone have a better approach to achieve this functionality?

2 Likes

I have created a capability that allows pistons using Echo Speaks to speak to a dynamically selected Echo device. This is achieved with a piston I have written called “Location”. I am posting this piston verbatim because the non-randomized version is needed to see what is going on. Basically:

1. I have created a virtual switch for every Echo device, named things like Location_Office.
2. I have a virtual button called “Location” that calls the piston when it is pushed.
3. I have one Alexa routine for each Echo device. Each of these routines presses the Location button to call the piston and then turns on the virtual switch for that location, e.g., Location_Office.

The piston sets a global webCoRE variable named @Echo_Location. Any other piston I have written then uses @Echo_Location for its “Speak” commands. Actual usage is like this: I say, “Alexa, Location Office”, and Alexa responds, “Location is now set to Echo Office Dot.” From then on, all of my Echo Speaks commands that use @Echo_Location speak to the Echo Office Dot until I give another location command.
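To make the flow above concrete, here is a rough Python sketch of the piston logic (webCoRE pistons are not Python; only Location_Office and @Echo_Location appear in the post, and the other switch and device names are made-up examples):

```python
# Rough sketch of the "Location" piston flow described above.
# Only "Location_Office" and "@Echo_Location" come from the post;
# the remaining names are illustrative assumptions.

SWITCH_TO_ECHO = {
    "Location_Kitchen": "Echo Kitchen Show",   # made-up example
    "Location_Bedroom": "Echo Bedroom Show",   # made-up example
    "Location_Office": "Echo Office Dot",
}

switch_states = {name: "off" for name in SWITCH_TO_ECHO}
echo_location = None  # stands in for the global @Echo_Location

def location_button_pushed():
    """Step 1: the Location piston turns every location switch off."""
    for name in switch_states:
        switch_states[name] = "off"

def location_switch_turned_on(name):
    """Step 2: the Alexa routine turns one switch on, and the
    global location variable is updated to the matching Echo."""
    global echo_location
    switch_states[name] = "on"
    echo_location = SWITCH_TO_ECHO[name]
    return f"Location is now set to {echo_location}"

# "Alexa, Location Office" -> routine presses the button, then the switch:
location_button_pushed()
print(location_switch_turned_on("Location_Office"))
```

Any speaking piston would then read @Echo_Location instead of a hardcoded device name.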

3 Likes

I am doing something similar, except I’m using room occupancy to determine which Alexa speaks a command. If the den and living room are occupied, those are the Echo devices that will speak. If a room is vacant and someone walks in, the change in occupancy triggers a piston. It’s a mixture of @bamarayne, @bangali, and a global piston that makes the magic happen. I’m still tweaking and adding to it, but it’s working well.

2 Likes

@BBoy486, I did look at your work, and it has some great utility. The reasoning behind my “Location” piston is that I do not have sensors in every room where I have Echo devices. I have nine Echo devices, as you can see from the code bits, but only three motion sensors. Room occupancy is a great way to achieve true automation. I am thinking that the “Location” command might be a little more general purpose for other users.

Here is an actual example of the use of my “Location” piston described above.

I completely understand about the motion sensors… they really help a lot in determining whether a room is active or not. I bought more sensors just to add to rooms for this purpose. I also use other devices in the room: is the TV on? Is a certain door open or closed?

1 Like

To be clear it isn’t my work. It’s from everyone I mentioned plus @WCmore who is amazing with his implementation.

2 Likes

I do have Tasker, but have only had it a short time; could you go into detail on how to make this happen?

I get it. I will tell you that with promos and discounts, the sensors drop in price all the time. Lowe’s was discounting the Iris motion sensors a few weeks ago. I prefer the GE Z-Wave Plus sensors since you can plug them in. I ended up getting five, and now I can really tap into Echo Speaks, Room Manager, global variables, and webCoRE.

What you’re trying to do is accomplish presence detection on the Echo/Alexa.

You could hypothetically do this strictly by audio, but it would require Alexa to always be listening and assessing the relative volume of ambient noise. Aside from the obvious privacy concerns, I wonder whether this would ultimately be anything less than horribly complex…

My favorite sensor so far is the HSM200, aka the EZMultiPli, though they are expensive. What is your recommendation for a really good motion sensor for this type of presence detection?

Ha! You beat me to it…

1 Like

Does anyone know what the searchMusic provider CLOUDPLAYER is and how I would add it? Is it an Alexa skill? I see instructions for adding third-party music providers, but nothing about CLOUDPLAYER.

Iris is cheap right now but runs on battery. I like the GE Z-Wave Plus Wireless Smart Motion Sensor (model 34193) because you can plug it in or run it off battery: https://www.amazon.com/dp/B01KQDIU52/ref=cm_sw_r_cp_apa_i_qBXfCbCAVXV3E

It looks like I installed a SmartApp called CoRE, so should I add webCoRE instead, or what’s the difference?

There’s a good FAQ on CoRE and webCoRE here:
https://community.smartthings.com/t/faq-what-is-webcore-and-what-was-core/59981

I only used CoRE a few times before switching to webCoRE, which I use a lot. Both are good, but webCoRE is more powerful, is updated more frequently, and I feel like there’s more community support for it.

1 Like

For those on iOS, I created a Siri Shortcut/webCoRE combination where you select from a list of speakers and then enter the text you want it to speak. It uses the “Execute Endpoint” integration in webCoRE.

First, the piston:

Once you create the piston, copy the piston ID and follow the instructions under “Executing Endpoints” under Settings > Integrations in webCoRE to obtain the endpoint URL.

And the Shortcut:

Here’s an iCloud link for the Shortcut. Don’t forget to rename the devices in the Shortcut to match exactly the names in the “if” section of your webCoRE statements. Note also that these variables don’t need to match your Echo device names in SmartThings. Make sure you paste your webCoRE endpoint URL in the URL field.

This should also work with other TTS speakers, like LANnouncer. Just make sure you replace the “SetVolumeSpeakAndRestore” command with “Speak” for devices that don’t support that capability.

Enjoy!

6 Likes

I do not use ios, so what does this do?

This lets you activate a “Shortcut” to send a TTS message to a speaker directly from your iPhone or iPad. Shortcuts is somewhat comparable to Tasker on Android in that it lets you do some scripting to automate actions.

The example above prompts you for the speaker you want to use and for the text you want to send to it, defines variables to send to webCoRE, and then executes the piston using those variables.
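For anyone curious what the Shortcut does under the hood, it boils down to a single HTTP request. Here is a minimal Python sketch, with some assumptions: the endpoint URL below is a placeholder (paste your own from webCoRE’s Settings > Integrations), and the parameter names `device` and `message` are hypothetical and must match whatever arguments your piston reads.

```python
# Sketch of the HTTP call the Shortcut makes: a GET to the piston's
# execute endpoint with the speaker and message as URL arguments.
# ENDPOINT is a placeholder; the parameter names are assumptions.
import urllib.parse
import urllib.request

ENDPOINT = "https://example.invalid/webcore/execute/your-piston-id"

def build_speak_url(device, message):
    """Build the endpoint URL with the chosen speaker and TTS text."""
    params = urllib.parse.urlencode({"device": device, "message": message})
    return f"{ENDPOINT}?{params}"

url = build_speak_url("Kitchen Echo Show", "Dinner is ready")
# urllib.request.urlopen(url)  # uncomment once a real endpoint URL is set
print(url)
```

The Shortcut simply fills in `device` and `message` from its prompts before making the same request.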

Here’s a gif:
announce

3 Likes

That’s pretty cool.
I just use EchoSistant. It gives you the option of either creating a complex shortcut and/or creating your automation in webCoRE.

As an example, I have a piston that performs a bunch of actions for when I want to take a nap. I can then execute that piston via EchoSistant by saying any of up to five different phrases…

Great work on the piston with Siri. That’s going to help a lot of people.

1 Like