TweetBulb: a natural language interface to smart light bulbs

Hello forum,

TweetBulb is an experiment with the idea of “texting as a universal UI”, i.e., controlling things via a text message, without any specialized apps. The core technology underneath is a natural language semantic parser that converts text into API calls.
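To give a flavor of what “converts text into API calls” means, here is a deliberately tiny sketch (nothing like the real parser; the pattern and field names are made up purely for illustration): a message is turned into a structured command that a bulb API could consume.

```python
import re

# Toy "semantic parser": maps a text message to a structured bulb command.
# This is only an illustration of the idea; the real parser is grammar-based
# and far more robust than a single regex.
PATTERN = re.compile(
    r"(?P<action>turn on|turn off|change)\s+the\s+(?P<zone>\w+(?:\s\w+)?)\s+light"
    r"(?:\s+to\s+(?P<color>\w+))?",
    re.IGNORECASE,
)

def parse(text):
    """Convert a text message into a dict describing an API call."""
    m = PATTERN.search(text)
    if not m:
        return None
    return {k: v for k, v in m.groupdict().items() if v is not None}

if __name__ == "__main__":
    print(parse("please turn on the living room light"))
    # {'action': 'turn on', 'zone': 'living room'}
    print(parse("change the bedroom light to red"))
    # {'action': 'change', 'zone': 'bedroom', 'color': 'red'}
```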

We are rolling out a beta version here:

http://tweetbulb.kitt.ai/

Would love your input on how to improve it! Thanks!

Xuchen

P.S. If you are more interested in using the semantic parser itself, here’s a demo too:


I am interested in your NLP implementation.

That’s weird. The video above is hosted on GitHub. If you meant the YouTube video isn’t available to you, I know that YouTube live streaming doesn’t work in some countries, but if you are in the States it should be fine.

The underlying NLP component is a semantic parser called Parsetron: GitHub - Kitt-AI/parsetron: A natural language semantic parser

It’s Apache licensed.

I’d be happy to know what NLP applications you have in mind!

I edited my post. I can see the video now. I have built some things using https://wit.ai, which was acquired by Facebook. Theirs is a cloud-hosted solution.

I am working to build a speech interface that takes intents and turns them into actions with minimal user setup. In my application, a user simply defines their locations, zones, and things. What I want is that when a user says “Turn on the hallway light” or “Turn on the light in the bathroom”, the system looks at the user’s zones, finds any devices of type light in that zone, and turns them on. I have it functioning fairly well with wit.ai, but I stopped to focus on Amazon Echo integration first.
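Roughly, the lookup works like this sketch (simplified, with illustrative names and a fake in-memory registry rather than my actual code):

```python
# Simplified sketch of the zone/device lookup (illustrative names only).
# A user defines zones and the things in them; an intent like
# {"zone": "hallway", "object": "light", "action": "on"} is resolved
# against that registry.
ZONES = {
    "hallway": [
        {"name": "hallway ceiling", "type": "light", "on": False},
    ],
    "bathroom": [
        {"name": "bathroom mirror", "type": "light", "on": False},
        {"name": "exhaust fan", "type": "fan", "on": False},
    ],
}

def handle_intent(intent):
    """Find devices of the requested type in the zone and apply the action."""
    devices = [
        d for d in ZONES.get(intent["zone"], [])
        if d["type"] == intent["object"]
    ]
    if not devices:
        raise LookupError("no matching device in zone %r" % intent["zone"])
    for d in devices:
        d["on"] = (intent["action"] == "on")
    return devices

print(handle_intent({"zone": "hallway", "object": "light", "action": "on"}))
# [{'name': 'hallway ceiling', 'type': 'light', 'on': True}]
```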

My app is open source on GitHub as well: https://thinglayer.net

How’s your experience with ASR latency from both wit.ai and the Echo?

By “look at the user’s zones”, do you mean detecting it with a motion sensor? Then the user wouldn’t have named the location, right? They would’ve just said “turn on the light” while standing in the hallway.

We are looking into more detailed analysis of user commands using NLP. For instance “change the living room light to red and blink the bedroom lights in orange for 5 minutes”, or “turn all lights on in blue color except the hallway one”, etc.
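To be concrete, for a command like the first one the parser would need to produce something along these lines (just an illustration of the target representation, not our actual schema):

```python
# Illustrative target output for
# "change the living room light to red and blink the bedroom lights
#  in orange for 5 minutes" -- not an actual schema, just the shape of it.
parsed = [
    {"action": "change", "zone": "living room", "object": "light",
     "color": "red"},
    {"action": "blink", "zone": "bedroom", "object": "light",
     "color": "orange", "duration_seconds": 300},
]
```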

In the demo I built there is a screen with a microphone button. The user pushes the button and their speech is sent to wit.ai and processed. wit.ai extracts the intent and sends it back to my server for the action.

For example, the user says “turn on the living room light”.

Wit.ai sends back a JSON string to my server’s API that looks kind of like this:

{"zone": "living room", "object": "light", "action": "on"}

My server takes that info and acts on it or throws an error back to the user interface.
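The server side of that round trip can be sketched like this (using Flask purely as an example; the route name and the device-control call are hypothetical stand-ins for my real code):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/intent", methods=["POST"])  # hypothetical endpoint name
def intent():
    """Receive the parsed intent from wit.ai and act on it."""
    data = request.get_json(force=True)
    zone, obj, action = data.get("zone"), data.get("object"), data.get("action")
    if not all([zone, obj, action]):
        # Throw the error back to the user interface.
        return jsonify({"error": "incomplete intent"}), 400
    # control_device(zone, obj, action)  # stand-in for whatever talks to the hardware
    return jsonify({"status": "ok", "zone": zone, "object": obj,
                    "action": action})

if __name__ == "__main__":
    app.run(port=5000)
```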

The system I have so far can change colors, set the brightness, switch on or off, read the status, etc. I only removed it from the current system when I open sourced it.
