Why long delay for custom spoken messages?

I was playing with the custom spoken message options yesterday and I noticed that there was a significant delay between when I executed the trigger and when I heard the text-to-speech coming out of the speakers. When I say significant, I mean several minutes. Is it usually like this?

Now, for those of you with good memories, I realize the irony of the above statement given my post last fall… :smile:

This particular SmartApp (Sonos Notify With Sound) is something that I use during live demos. It’s usually very quick. I’d expect the audio to start playing about 3 seconds after the trigger.

Hmm…maybe it is an issue with the @obycode version of it. I’ll follow up with them to see, although I thought theirs was based on the Sonos version. I recall there being an issue in the Sonos code somewhere that was preventing their app from working properly, so they had to rework the Sonos code.

I was trying to do something fun with my daughter while showing her how the app worked and how to approach things logically. But the delay killed the joke a bit! :frowning:

AirPlay… Is… Sloooooooooooooooowwwwwwwwww

Unless this is truly Sonos. Then idk lol

It is AirPlay. However, music starts up within about 10 seconds. Only an issue with the text-to-speech. :frowning:

The way ObyCode does it is weird. Not sure if I have ever fully understood how it works. It essentially downloads a file to your PC every time it parses a text-to-speech command. The speed of this might depend on the health of the ObyCode servers.

It is downloading a file, but not from my servers. It’s coming from SmartThings, the same way it works for Sonos. I’m not sure what would be causing a delay like that though.
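For context, the flow looks roughly like this. This is only a sketch based on the stock Sonos “Notify with Sound” approach; the `sonos` device input and the exact method names are assumptions and can differ between SmartApp versions:

```groovy
// Sketch of the Sonos-style flow: SmartThings generates the audio file,
// and the player has to download it before anything is heard.
def speakMessage(String message) {
    // The platform turns the text into an MP3 and hands back a hosted URI plus duration
    def sound = textToSpeech(message)

    // The player (Sonos, or ObyThing acting like one) then fetches and streams
    // that file, which is where the delay can creep in
    sonos.playTrackAndResume(sound.uri, sound.duration)
}
```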

Gotcha.

I am wondering if the “cloud-to-cloud” nature of the Sonos integration makes it faster than the current ObyCode iteration?

I always noticed at least a 5-10 second delay in spoken text with the ObyThing stuff.

Could be. The dumb part is that I could easily do the text-to-speech on the Mac, but the interface the Sonos SmartApps use passes a URL instead of just passing the text.
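If the Mac got the raw text instead of a URL, it could lean on the built-in macOS `say` command. Purely illustrative (the real app isn’t Groovy, and the voice name is just an example):

```groovy
// Illustrative only: local text-to-speech on the Mac via the built-in "say" command.
// No file has to be generated or downloaded, and you can pick any installed voice.
def speakLocally(String message, String voice = "Samantha") {
    ["say", "-v", voice, message].execute().waitFor()
}

speakLocally("The laundry is done")
```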

Oh, I would love to see a bypass of that process and just use text-to-speech from the Mac. Way faster, and you could have fun selecting whichever voice you wanted.

Agreed. Maybe the next version (which I’ve been working on off and on for a while) should just scrap the Sonos SmartApps and supply its own. capability.speechSynthesis has a speak method that just takes a string, so I’ll support that instead of the custom methods the Sonos SmartApps use.
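Roughly along these lines (a minimal sketch; the contact-sensor trigger and all names are placeholders):

```groovy
// Minimal sketch of a SmartApp built on capability.speechSynthesis.
// The door trigger and every name here are placeholders, not the real app.
definition(
    name: "Speak It (sketch)",
    namespace: "example",
    author: "example",
    description: "Speak a message when a door opens",
    category: "Convenience",
    iconUrl: "",
    iconX2Url: ""
)

preferences {
    section("When this door opens...") {
        input "door", "capability.contactSensor", title: "Which door?"
    }
    section("...speak on this device") {
        input "speaker", "capability.speechSynthesis", title: "Which speaker?"
    }
}

def installed() { subscribe(door, "contact.open", doorOpened) }
def updated()  { unsubscribe(); installed() }

def doorOpened(evt) {
    // speak() takes the raw string; no MP3 to generate, host, or download first
    speaker.speak("The ${door.displayName} was just opened")
}
```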
