[RELEASE] cast-web v1.2.1 - Chromecast Device Handler & SmartApp

I had a little trouble setting it up on my Windows 10 machine (mostly because I’m nowhere near as tech savvy as everyone here), but I eventually got it working after some web searching and tinkering. Thank you, Tobias, for putting together such an awesome way to use my Google Home speakers via my SmartThings hub!

In a CoRE piston, I’m able to set a task to Speak, Speak Text, Speak Text and Restore, or Speak Text and Resume. I noticed that the Speak Text and Resume task doesn’t seem to resume whatever music was playing on that speaker. Is that expected?

I’d love to hear what other use cases people are using for this integration. What tasks do you have set?

I’m sorry, but I don’t think this is going to work. In theory it might be possible, since DLNA, as far as I know, ultimately just plays back standard HTTP URLs, but I don’t think it will be an easy solution.

If you get a URL that points to a standard voice file, that should work. However, it’s not possible using Speaker Companion or webCoRE. Samsung provides the text-to-speech (TTS) service, and it only supports the current voice, ‘Polly’. I saw hints of multiple TTS voices in some old ST SmartApps, but it seems that project has been abandoned.

Create a Routine in the mobile app that turns on either specific devices or a group at a certain time. All cast-web devices act like a standard switch.

You can set up the switch behavior by selecting the device in the mobile app and then clicking the settings wheel. There you’ll find a menu where you can select what switching on or off does.


Thanks @vervallsweg, I already figured that out before seeing your response.

If I may suggest an improvement for the DTH: the “Next” and “Previous” song switches should select the next or previous preset, cycling through them like a carousel.

E.g. if the DTH is playing:

  • Preset #3, the “Next Song” switch will play Preset #4.
  • Preset #6, the “Next Song” switch will play Preset #1.
  • Preset #4, the “Previous Song” switch will go to Preset #3.
  • Preset #1, the “Previous Song” switch will play Preset #6.
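The wrap-around behavior above is just modular arithmetic over the preset slots. A minimal sketch, assuming six presets numbered 1–6 (the function names are illustrative, not part of the DTH):

```shell
# Carousel over presets 1..6: "next" wraps 6 -> 1, "previous" wraps 1 -> 6.
PRESETS=6
next_preset() { echo $(( $1 % PRESETS + 1 )); }
prev_preset() { echo $(( ($1 + PRESETS - 2) % PRESETS + 1 )); }

next_preset 3   # 4
next_preset 6   # 1
prev_preset 4   # 3
prev_preset 1   # 6
```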

@kebel871- I have been using Ulises Mujica’s “Media Renderer Events” smartapp to send rendered TTS messages to my cast-web-api connected Google Home speakers. My kids love it as I have it play a special personalized message for them each night at bedtime.

The TTS settings I use in the SmartApp:

  • Mode: SmartThings (misspelled in the repo as “Smartings”)
  • Voice: en-US Salli - for French Canadian there is fr-CA Chantal, but there are also fr-FR Celine & fr-FR Mathieu available.
  • Google Voice, RSS Language, Voice RSS Key, Alexa Access Key - you can probably leave these at their default settings.

I’ll have a look into this, thanks a lot. I’ll get back to you as soon as I’ve tried to make it work. I should be able to grab a GH mini this weekend if they’re back in stock.

I’m trying out the docker container, and in a web browser I get a response “cast-web-api version 0.2.2” but from the smart app if I do Test API it just spins forever… Any ideas?

Edit: updating IP to IP:Port seems to have helped a bit… it still just spins away, though. Unsure what the issue is.

What kind of delay should I expect? Let’s say I open a door, which makes webCoRE fire a piston that leads to the execution of an action needed for TTS in French. Do you think the notification will be heard reasonably quickly?

As of now, I’m using a dumb speaker coupled with LANnouncer (+an old android) and it’s been solid and snappy. I just wish I could replace this with a Google Home.

There is perhaps a 2-3 second delay when I trigger it. Not sure what delay a webCoRE piston would introduce.

Thanks for the info. webCoRE would add anywhere from 100 to 500 ms in my experience.

A 2-3 second delay is acceptable for my needs. The only use case where it might be too long is the reminder to take out the trash when the front door opens on Thursdays.

Docker networking should be configured as host. If you have other containers running on port 3000 (e.g. Homebridge), they will conflict, since both use port 3000, and the last one to start will exit.
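For reference, host networking is set at container-creation time. A sketch, assuming the image is named `vervallsweg/cast-web-api` (substitute whatever image you built or pulled):

```shell
# Run the API on the host network so device discovery works and
# the default port 3000 is reachable from the SmartThings hub.
# NOTE: the image name is an assumption; use your own image.
docker run -d --net=host --name cast-web-api vervallsweg/cast-web-api
```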

I had it on bridge and switched to host. Still no change… nothing else other than Grafana on that port, which I disabled prior to testing.

EDIT: Nevermind, host did work. Thanks!

Well, I have no clue about Docker, but the API supports changing the port with the --port argument.
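For example, assuming the repo is already cloned and dependencies installed, the API can be started on a non-default port like this:

```shell
# Start cast-web-api on port 8081 to avoid a clash on the default 3000
node castWebApi.js --port=8081
```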

Yep, figured that out by now. Building a new one; will publish here after testing.


This Docker image is the same as the other one posted in this thread, but with the startup CMD removed, so when you create the container, supply your own CMD with your choice of IP and port.
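With the CMD stripped from the image, the start command (and thus IP and port) is supplied at container creation. A sketch, where the image tag `cast-web-api-nocmd` and the IP address are hypothetical placeholders:

```shell
# Image has no baked-in CMD, so pass your own start command with
# the hostname/port you want (values below are placeholders).
docker run -d --net=host --name cast-web \
  cast-web-api-nocmd \
  node castWebApi.js --hostname=192.168.1.50 --port=8080
```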

That’s a good idea. I’ve thought about getting the play-and-resume stuff to work with presets: if one of your presets is playing and you send a notification, it would continue playing after the notification is done.

As soon as I implement that, I’ll implement your idea as well :+1:

Added it as a feature for the next version.


I’ve decided I want to put this API onto my Synology rather than my Windows 10 machine, but may need a bit of help to figure out which step I’m missing.

  • I’ve set my Synology to a fixed IP on my network.
  • I’ve installed Git Server 2.11.3-0116 on the Synology.
  • I’ve installed Node.js v4 4.8.4-0164 on the Synology.
  • I’ve installed PuTTY on my Windows 10 machine and can successfully SSH into the Synology as root.
  • Running # git clone https://github.com/vervallsweg/cast-web-api.git produces a success message. I’m then able to switch into the newly created local repository.
  • Running # npm install forever -g provides the following response:

npm WARN optional dep failed, continuing fsevents@1.1.3
/usr/local/bin/forever -> /usr/local/lib/node_modules/forever/bin/forever
forever@0.15.3 /usr/local/lib/node_modules/forever
├── path-is-absolute@1.0.1
├── object-assign@3.0.0
├── clone@1.0.3
├── colors@0.6.2
├── timespan@2.3.0
├── optimist@0.6.1 (wordwrap@0.0.3, minimist@0.0.10)
├── cliff@0.1.10 (eyes@0.1.8, colors@1.0.3)
├── nssocket@0.5.3 (eventemitter2@0.4.14, lazy@1.0.11)
├── prettyjson@1.2.1 (colors@1.1.2, minimist@1.2.0)
├── winston@0.8.3 (cycle@1.0.3, async@0.2.10, stack-trace@0.0.10, eyes@0.1.8, isstream@0.1.2, pkginfo@0.3.1)
├── utile@0.2.1 (async@0.2.10, deep-equal@1.0.1, i@0.3.6, ncp@0.4.2, mkdirp@0.5.1, rimraf@2.6.2)
├── shush@1.0.0 (strip-json-comments@0.1.3, caller@0.0.1)
├── nconf@0.6.9 (ini@1.3.5, async@0.2.9, optimist@0.6.0)
├── flatiron@0.4.3 (director@1.2.7, optimist@0.6.0, prompt@0.2.14, broadway@0.3.6)
└── forever-monitor@1.7.1 (minimatch@3.0.4, ps-tree@0.0.3, broadway@0.3.6, chokidar@1.7.0)

  • Running node castWebApi.js --hostname= --port=8080 provides the following response:

const { EventEmitter } = require('events');

SyntaxError: Unexpected token {
    at exports.runInThisContext (vm.js:53:16)
    at Module._compile (module.js:373:25)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Module.require (module.js:353:17)
    at require (internal/module.js:12:17)
    at Object.<anonymous> (/volume2/Git/cast-web-api/cast-web-api/node_modules/mdns-js/index.js:14:40)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)

I’m not sure how to proceed. Anyone willing to tell me which step I missed? Looking at how I set it up on Windows 10, I was able to clone the repo, switch into that directory, run npm install, and then run the same command that’s now giving me a token error, so I’m stumped for now.
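For what it’s worth, the stack trace suggests the Node version rather than a missed step: `const { EventEmitter } = require('events')` is ES2015 destructuring, which the Node.js v4 line (like the 4.8.4 Synology package above) cannot parse; it only became available unflagged in Node 6. A quick check, assuming `node` is on the PATH:

```shell
# Destructuring needs Node >= 6; on Node 4 the second command throws
# "SyntaxError: Unexpected token {" just like the trace above.
node --version
node -e "const { EventEmitter } = require('events'); console.log('ok')"
```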

I have cast-web running and I’ve been able to make GH speak using webcore.

I also installed Media Renderer Events, but I must say I’m a bit in the dark here.

Since Media Renderer Events doesn’t create a device, how do I have it render a French message for cast-web?


I installed it today and it works flawlessly. Thanks a lot.

I don’t know if you’re still developing cast-web actively, but if you are, I wonder if you could take a page from this SmartApp and let us use more TTS services and languages. It would really, really be helpful to me.

If you’re not, do you know what modification I’d have to make to use Google TTS for rendering? I’m far from a coder, unfortunately :confused:
