Thank you @Terri_Baker for pointing out the Voice Monkey skill.
I may have found a better solution to all this virtual contact sensor nonsense: using that skill to trigger voice announcements (or technically any Alexa routine you want) from webCoRE. Plus, you can include your webCoRE variables as part of the announcement, which is the feature I was going to miss most from Echo Speaks. Also, with webCoRE's web request action you can bypass IFTTT and send the request straight from webCoRE to the Voice Monkey skill.
Instructions to set up the skill are here: voicemonkey.io. Once it's set up, create an Alexa routine that opens the Voice Monkey skill on the device you want to speak, like this:
Then, in webCoRE, set up whatever trigger you want, and as the action use "Make a web request" using location. It will be a GET request like this (see the sketch below). Replace the access token and secret token with the ones generated in your own Voice Monkey skill (mine are blacked out, of course). Also replace "announce-door" with the name of the "monkey" you created in your Voice Monkey skill (more info on what that means in the Voice Monkey documentation at the link above). Put whatever you want her to say in the quotes after urlEncode, including quote breaks around any variable in your piston you want her to say. My example tells the weather using a global variable "@forcast":
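As a rough sketch, the expression in the web request URL field looks something like the following. The api.voicemonkey.io endpoint and the access_token/secret_token/monkey/announcement parameter names are based on the Voice Monkey documentation at the time of writing, so double-check them against your own Voice Monkey dashboard; the placeholder tokens and the announcement wording are just illustrative, while "announce-door" and "@forcast" match the example above:

```
"https://api.voicemonkey.io/trigger?access_token=YOUR_ACCESS_TOKEN&secret_token=YOUR_SECRET_TOKEN&monkey=announce-door&announcement=" + urlEncode("The forecast today is " + @forcast)
```

Wrapping the spoken text in webCoRE's urlEncode() is what lets you safely include spaces, punctuation, and variable values in the announcement query parameter.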
This skill looks like it has awesome potential. It was a little complex to set up, and it does seem you need a separate web request and routine for each device you want to speak, so it could be a pain if you want to announce on multiple devices, but it still seems to work well for a lot of scenarios.