Another Amazon Echo Thread (March 2016) (technical/programming discussion)

I don’t think this is directly possible at this time without a custom Skill (or Bluetooth hack shared on the forum).

It may change someday, but Alexa cannot initiate speech without a specific request, and we cannot modify the built-in Smart Home “skill” to add new custom spoken responses.

1 Like

They are playing the notifications on the Echo itself. @bamarayne can tell you more, but I believe they’re using LANdroid to create them, with the Echo acting just as a Bluetooth speaker at that point.

As far as “being done natively,” did you mean natively on the Echo? In that case, it has to be done as a skill; they haven’t opened up the other path.

You can have echo play a custom MP3, or you can use it as a Bluetooth speaker as in the notifications example, but that’s it.

For anything else you need a cloud service acting as an ASK skill. So your echo commands become “echo, tell MySkill request” rather than just “echo, request.”
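To make the naming concrete, here’s a trivial sketch (in Python, with a made-up skill name) of the extra invocation wording a custom ASK skill imposes compared to a native integration:

```python
# Hypothetical illustration only: "MySkill" is a made-up skill name.
def spoken_command(skill_name: str, request: str) -> str:
    """Build the phrase a user must say to reach a custom (ASK) skill.

    Native integrations skip the 'tell <skill>' part; custom skills can't.
    """
    return f"Alexa, tell {skill_name} to {request}"

print(spoken_command("MySkill", "turn on the porch light"))
# -> Alexa, tell MySkill to turn on the porch light
```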

1 Like

I don’t pretend to understand the difference between an integration in the Amazon app and a skill; perhaps there isn’t any, other than where it is presented and no “skill name” being needed for the app-based integrations. But yes, the integration would need to be modified to call for the state data and parse the response for speech back to the Echo. Not sayin’ we could do it.

Yeah… that was kinda my next question. It seems this SmartApp ain’t ‘open’. So who’s responsible for the lion’s share of the development? Considering how well it works, I’m guessing this is all Amazon’s baby.

I certainly didn’t glean that from his docs.


( you will see several skeptical posts from me, basically saying “really?” But, yes, really. :sunglasses:)

As for MPG, that’s a voice error. :stuck_out_tongue_winking_eye: Should have been MP3. :musical_score: I’ll fix it.

Yes, you can have the echo device play your statuses, or whatever. It does take a pretty good hack, but it works really well.

Note - these announcements don’t come from Alexa’s own speech engine. They use the Android device’s TTS, which sounds similar to the Alexa voice, just not as natural.

Basically, you do this…

Using either your phone or an old Android device, you download and install LANdroid.
You then connect your device via Bluetooth to the Echo.
Using either Rule Machine or Alexa Helper, you use the LANdroid device as your speech output. It plays the sounds via the connected Echo device.

Using Alexa helper you create your feedback reports, create the applicable virtual switch, and then you just tell Alexa, “turn on X report” (name of your virtual switch). And a couple of seconds later it plays.

My go-to-bed rule in RM does this. Everything turns off and it tells me the current status of my doors, windows, the temp upstairs and downstairs, and what the thermostats are set to. It then says that my house is safe and secure and to have a good night’s sleep.

It takes some practice and tweaking, but it works well.

Oh yeah, and if Alexa is playing music, it will pause the music, speak your report, and then resume playing.

Here is the link.

Thanks… but here that is not going to work. My Echo is permanently Bluetoothed to my HAM Bridge server for its notifications via the Mac’s TTS (as well as iTunes playback). I suppose I’ll need to whip up a SmartApp that gets the states of all the devices and HTTPs them as params to HAM Bridge for it to speak them.

1 Like

I’m a little bit confused by the question, but if I understand it…

An Alexa skill is code written by a third party, with a designated name that Echo will recognize in order to start that code. That code is allowed to have Echo speak its responses.

There are two types of skills, depending on whether echo needs to stay in listening mode after the first spoken command.

First, there are interactive skills, where you can ask a question, get a response, and Echo is still listening and the skill code is still running, so you can respond again. This is what is used for playing games and some other options where you may need to drill down to get to the ultimate answer.

And there are one-time skills, where you just specify the skill, give it a value, and then the skill code does something.

Both types are intended to be cloud services that echo is initiating because of the reserved name. I’m not sure what hosting options are available, I haven’t looked into all of that.

But none of these are SmartApps within the SmartThings system. Just like any cloud service, you could have your Echo skill send a call to SmartThings if you wanted, and I’m assuming that’s how the ones that get device status do it. So in that case you are writing both an Echo skill, to handle the Echo interaction, and a separate SmartApp that will dialogue with the skill.

But Echo itself never knows the device status and isn’t really involved except as a UI to the skill. It parses the voice input, passes it to the skill, and speaks the skill’s output. But it’s not involved in the logic in between; that’s all up to the skill.
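As a rough illustration of that division of labor — Echo only carries speech in and out, the skill holds all the logic — here’s a hedged Python sketch of a skill handler. The intent names are invented and the SmartApp call is just a placeholder comment; the `shouldEndSession` flag is what separates the one-shot style from the interactive style:

```python
# Hedged sketch of the custom-skill side only; intent names are invented
# and the SmartApp call is a placeholder. Echo POSTs a JSON request like
# `event` below and speaks whatever text the skill returns.

def handle_request(event: dict) -> dict:
    intent = event["request"]["intent"]["name"]
    if intent == "StatusIntent":
        # A real skill would call the companion SmartApp's endpoint here
        # and build the text from the JSON it returns.
        speech = "The front door is closed."
        end_session = True    # one-shot style: stop listening afterwards
    elif intent == "GameIntent":
        speech = "Pick a number between one and ten."
        end_session = False   # interactive style: keep the session open
    else:
        speech = "Sorry, I don't know that one."
        end_session = True
    # This response shape is what the Alexa Skills Kit expects back.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```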

1 Like

Or, you can go to Amazon and order a good-quality small speaker with Bluetooth. Actually, it doesn’t even need that; it can just have an input jack so you can plug it into the Android device’s headphone jack. Hide that speaker somewhere in the room and use it instead of the Echo.

That’s actually how I do it now. I use that setup in my other rooms, along with a Wi-Fi speaker that I got on clearance. That speaker is seen anywhere a Sonos is seen, and it eliminates the need for the Android device.

I now just choose where I want certain things to play… Like, I can tell Alexa, “turn on dinner time,” and the speakers in the kids’ rooms tell them to come to dinner. But only those speakers.

Great thing about ST… With a little ingenuity you can get it to do anything… And… You can make it all voice controlled via Alexa!

I am guessing it is three, the third being native integrations like the SmartThings one, which require no skill name to be used. I am not sure this is the only difference between a native integration and a skill, though. BTW, I have the ASK SDK here and pretty much grok how they work.

The biggest issue with us customizing the Echo is not the hosting of the skill. I would be willing to do that as long as a ‘Let’s Encrypt’ cert is Amazon approved, which I think it is. The problem is the target, which also needs to be a certed HTTPS server (which I am not likely to expose to the net). I have thought about building a RasPi unit to expose as a proxy, but it would be so much cooler if the Echo had a bridge to your LAN in the same way the sendHubCommand works on SmartThings.
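For what it’s worth, the RasPi proxy idea could be as small as an HTTPS-terminating forwarder. This is a hypothetical sketch (the hostname, port, and cert paths are placeholders, and a real version would also need to verify that requests actually came from the skill):

```python
# Hypothetical sketch of the RasPi proxy idea: terminate the certed HTTPS
# connection Amazon requires, then forward to a plain-HTTP device on the
# LAN. Hostname, port, and cert paths are placeholders.
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

LAN_TARGET = "http://hambridge.local:8080"  # hypothetical LAN server

def forward_url(path: str) -> str:
    """Map an incoming request path onto the LAN target."""
    return LAN_TARGET + path

class SkillProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Relay the LAN device's reply back over the HTTPS leg.
        with urlopen(forward_url(self.path)) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Not called here: binds :443 and wraps the socket with the TLS cert.
    server = HTTPServer(("0.0.0.0", 443), SkillProxy)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("/etc/letsencrypt/live/example.org/fullchain.pem",
                        "/etc/letsencrypt/live/example.org/privkey.pem")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```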

Because of this, SmartThings would be the next best target using an endpointed app.

1 Like

Amazon doesn’t call the official integrations “skills” although they probably could. But there’s a different communication pathway for those so internally they probably look a lot different to the Amazon engineering staff, although from a customer-facing viewpoint the only difference really is that you don’t have to say “tell.” But the integrations are closed, there’s no way for us to get to them.

1 Like

I have a number of them already (jawbones), but in my house, the range pretty much sucks. HAM Bridge can fire up every A/V system in my house (3 of them) and pipe audio to them via AirPlay, so this is the better route for me.


1 Like

Yeah, you already have the infrastructure in place. Just use Alexa helper and rule machine and you’re good to go.

Are you able to choose individual speakers?

1 Like

Plus, as I initially mentioned, I think they are hampered by what SmartThings makes available via their endpoints (I think this is kinda a chicken-or-the-egg thing). I’d really like to know what the Alexa SmartApp is doing other than facilitating connecting to endpoints. In any event, after April 15th I’ll try to take a whack at a skill that targets the Physical Graph. Anyone know how to return data from an endpointed app? Everything I have done along those lines just terminates at the app (command, notification, etc.).
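On the “return data from an endpointed app” question: with a classic web-services SmartApp, my understanding is that a mapped handler’s return value comes back as the JSON body of the HTTP response, so the consumer side could look something like this Python sketch (the installation ID and token are placeholders):

```python
# Hedged sketch, consumer side: with a classic web-services SmartApp, a
# mapped handler's return value comes back as the JSON body of the HTTP
# response. The installation ID and token below are placeholders.
import json
from urllib.request import Request, urlopen

BASE = "https://graph.api.smartthings.com/api/smartapps/installations"
APP_ID = "<installation-id>"   # placeholder
TOKEN = "<oauth-token>"        # placeholder

def endpoint_url(path: str) -> str:
    """Build the full URL for one of the SmartApp's mapped paths."""
    return f"{BASE}/{APP_ID}{path}?access_token={TOKEN}"

def get_states(path: str = "/switches") -> list:
    """GET the endpoint and parse whatever map/list the handler returned."""
    with urlopen(Request(endpoint_url(path))) as resp:
        return json.load(resp)
```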

1 Like

Individual systems. I use an AppleScript that directs whether the server and/or other AirPlay devices receive output. For instance, my main theater system gets ‘barking dogs’ when outside motion is detected and I am away.


It doesn’t need to do anything more than this and I doubt it does. What do you think it does??

Just run Live Logging and you’ll see a lot of (excessive?) debugging messages.

The messages are a little unusual, so I don’t think the SmartApp was written by the “usual” SmartThings SmartApp developers… I hear they contract stuff out sometimes, but the details are secret.

I’m not sure exactly what the Alexa SmartApp is doing. My own guess is that it’s just a conduit between the mobile app/platform and some real integration code (not written in Groovy) that manages the interaction with Echo, including authorization of devices. So even if you could open the SmartApp, I don’t think you’d actually be anywhere near the true integration code, and that’s something that is restricted to official Amazon partners.

1 Like

I am not sure I care anymore. I now have Alexa successfully reporting device states, albeit with a Mac voice over bluetooth. It is good enough to convince me it is not worth the effort to pursue the SmartThings side of this.

But the Hue side has piqued my interest. It seems to sidestep ASK’s requirement that the service target be an Internet-facing, secure server. Echo is pairing directly with the Hue bridge. This means Echo is capable of routing requests from the cloud to your LAN.

Hmmm… This never occurred to me before. I guess it was so much like the way that SmartThings connects to LAN devices (including the Hue Bridge) that I didn’t give it a second thought.

So what are the implications?

That the hardware is capable of round-tripping a request: a skill targeted from the Echo could make a RESTful request to a device on your LAN and deliver a response back to the Echo. This would facilitate custom verbiage for the request, and hopefully a custom response from the skill as well.

“Alexa, tell HAM Bridge to fire up my home theater”

“OK, your home theater is ready, enjoy the movie.”

And of course take SmartThings out of the loop for all devices with a local API.

And it seems it is not only possible to send custom speech from the skill to the Echo, but also to route the response from a local REST call back to the skill for it to act on.

I know this because I occasionally have GE lights on the bridge that are flagged “possibly not reachable” but still work. In this case, Alexa tells me they were unresponsive when I ask her to turn them on, even though the command succeeded (which would indicate it parsed a response and routed it back to the skill for speech delivery back to the Echo). Way cool.

Now I may be simplifying this, but the fact remains Echo has access to my Hues with no account credentials, so it has to be using local HTTP GETs and PUTs.
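For reference, the Hue bridge’s local API is documented and simple; the kind of calls implied above look roughly like this (the bridge IP and whitelisted username are placeholders you get when pairing with the bridge):

```python
# Sketch of the kind of local calls implied above, against the Hue
# bridge's documented local REST API. The bridge IP and whitelisted
# username are placeholders obtained when pairing with the bridge.
import json
from urllib.request import Request, urlopen

BRIDGE = "192.168.1.10"      # placeholder: bridge's LAN IP
USER = "<whitelist-user>"    # placeholder: API username from pairing

def state_url(light_id: int) -> str:
    return f"http://{BRIDGE}/api/{USER}/lights/{light_id}/state"

def turn_on(light_id: int) -> list:
    """PUT {"on": true}; the bridge answers with a success/error array,
    which is how a caller could surface the 'not reachable' flag."""
    body = json.dumps({"on": True}).encode()
    req = Request(state_url(light_id), data=body, method="PUT")
    with urlopen(req) as resp:
        return json.load(resp)
```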

Now we just need to convince Philips to share this with us or figure out how to get our own fingers between the fan blades. I seem to recall something new in the API that lets you monitor the bridge for debugging purposes.