[RELEASE] Facial Recognition for SmartThings using AWS Rekognition

This project will help you add facial recognition to ST using a Raspberry Pi and a Raspberry Pi Camera, turning it into a smart doorbell, which is pretty cool. You can of course use this project for other things as well, like a security camera that captures movement and, if a face is detected, warns you that somebody is around.

The Raspberry Pi is required as it is where the code is hosted. Whenever there is a trigger (motion, button press, etc.), your automation tool of choice (in this case webCoRE) makes a GET request to your Raspberry Pi to capture a picture and then send that picture over to Amazon Rekognition for analysis.

The script works on all versions of the Raspberry Pi. When an image is captured, there are 3 possible scenarios, and the parameters that the script sends over to webCoRE differ depending on the scenario:-

When Amazon Rekognition detects and recognises a face, it will pass these parameters:-

‘person’ – the name of the person identified
‘similarity’ – a percentage value of how similar it has matched the face
‘confidence’ – a percentage confidence score that it got the match right

If there is a face detected, but Rekognition failed to identify the face, it will pass these parameters:-

‘person’ – “Unknown”. It passes this so that you can create rules in your statement to handle this accordingly
‘faceConfidence’ – a percentage value on how confident it was that there is a face in the image capture
‘ageHigh’ – the maximum age of the face seen in the image capture
‘ageLow’ – the minimum age of the face seen in the image capture
‘gender’ – the gender of the face identified in the image capture
‘genderConf’ – how confident it is in the gender in percentage
‘mustache’ – a true/false entry whether the face in the image capture has a moustache or not
‘sunglasses’ – a true/false entry whether the face in the image capture has sunglasses on or not

If it did not detect a face in the image capture, the script will only send a single parameter:-

‘person’ – “No”. It passes this so that you can create rules in your statement to handle this accordingly
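To make the three scenarios above concrete, here is a rough sketch (not the actual script; the function name is my own) of how the Rekognition responses could be mapped to the parameters sent to webCoRE. The field names (`FaceMatches`, `Similarity`, `ExternalImageId`, `FaceDetails`, `AgeRange`, etc.) come from the boto3 Rekognition `search_faces_by_image` and `detect_faces` response shapes:

```python
def build_webcore_params(face_matches, face_details):
    """Map Rekognition results onto the parameters the script sends to webCoRE.

    face_matches: the 'FaceMatches' list from search_faces_by_image
    face_details: the 'FaceDetails' list from detect_faces (Attributes=['ALL'])
    """
    # Scenario 1: a known face was matched against the collection
    if face_matches:
        best = face_matches[0]
        return {
            'person': best['Face']['ExternalImageId'],
            'similarity': best['Similarity'],
            'confidence': best['Face']['Confidence'],
        }
    # Scenario 2: a face is present but could not be identified
    if face_details:
        d = face_details[0]
        return {
            'person': 'Unknown',
            'faceConfidence': d['Confidence'],
            'ageHigh': d['AgeRange']['High'],
            'ageLow': d['AgeRange']['Low'],
            'gender': d['Gender']['Value'],
            'genderConf': d['Gender']['Confidence'],
            'mustache': d['Mustache']['Value'],
            'sunglasses': d['Sunglasses']['Value'],
        }
    # Scenario 3: no face in the image capture at all
    return {'person': 'No'}
```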

Full install details are over here:-

Here are some screenshots:

This is what you’ll see when you manually execute the script on a browser:

This is a sample piston:

Let me know if you bump into any issues :slight_smile:


So cool! I don’t know if you are planning enhancements but would you consider adding an image url downloader as an alternative to the pi camera? Lots of camera systems provide an HTTP API to retrieve a snapshot. Then our automation tool of choice could provide the pi with the url of the image (or the url that retrieves the snapshot) and eliminate the need for the pi camera.
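A snapshot downloader like that could be quite small. Here's a rough Python 3 sketch (the URL is a placeholder, not part of the project) that fetches an image from a camera's HTTP snapshot endpoint, so the raw bytes could be handed to Rekognition in place of a pi camera capture:

```python
import urllib.request

def fetch_snapshot(url, timeout=5):
    """Download a snapshot from a camera's HTTP API and return the raw image bytes."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# Example (hypothetical camera endpoint):
# image_bytes = fetch_snapshot('http://192.168.1.50/snapshot.jpg')
```

The returned bytes could then go straight into Rekognition's `Image={'Bytes': ...}` parameter instead of a file captured by the pi camera.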

That is a really good idea. Let me see what I can do! I initially had it set up to work with Arlo and Nest but found the delay to be just too big, and most of the time, when it snapped the picture, the person’s face was no longer in the frame.

I’ve been meaning to mess around more to see what can be done with those cams…and hopefully extend the script to cover more cameras in the future.

Is there or will there be an integration that would disable SHM upon recognition of a familiar face?

You can do this via webCoRE and this app. You can set something like this:-

If person is any of name1, name2, name3
Set SHM to Disarm

Hope that helps!


Sweet! I’ll have to break down and figure out webCoRe. I read up on it a good bit ago, but I’ve slept since then… I think it was when I was trying to set up HA, learn Linux commands, YAML, and Python. I got overwhelmed and gave up on all of it lol.

This is really cool. I will check this out. Do you know how fast it is? How long does it take from when there is a person in the frame until it triggers an action in SmartThings?

I will use this in the bathroom. Today I have an Echo there. I have an IKEA 5-button remote which triggers Echo to play different music depending on the button pushed. I would use this to recognise who is in the bathroom and play music accordingly. I don’t want the camera to see the whole bathroom constantly, so I’m thinking of mounting the camera on some kind of motor. Then a door sensor can trigger the motor to turn the camera to face the bathroom for a short time and then turn it back to face the wall, or maybe just turn it off.

In my deployment, it is pretty slow: about 6-7 seconds. It doesn’t seem to be the code, but just how the triggers work in the cloud. I would suggest trying it out first in a simple environment to see how quickly it works on your end.

I’m having issues getting the args to pass correctly to webcore - I’m pretty sure it’s my GET request that’s the problem. Can you tell me what that should look like?

Sorry for the late reply…can you share the log file from webCoRE? If you follow the sample piston I shared above, webCoRE should capture the arguments correctly.

Pushover integration doesn’t seem to work for me. I enter my keys but nothing is ever sent. I get the following in the terminal. Any ideas? Thanks!

File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='graph-', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x755674f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))