Best way to collect logs from the hub?

Here is my logstash config file. (this is working right now)

input {
  http {
    port => 9992
    codec => "json"
  }
}

filter {
  json {
    # parse JSON in "message" field,
    # put resulting structure in "data" field
    source => "message"
    target => "data"
  }
}

output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}

I’m noticing that Elasticsearch and Kibana have no idea what types the log fields are. Maybe I’m using it wrong, but my impression was that it would infer the data types so that you can use them. It handles the dates well enough, but I’ve found it necessary to mutate the data in logstash.conf before Kibana will draw anything. I was also looking into multiple indices; right now it seems to lump everything together. Ideally I would like an st_events index, then by device, then value and unit, kind of hierarchically. Anyway, my logstash.conf filter section is below.

> filter {
>   mutate {
>     convert => {
>       "name"                => "string"
>       "displayName"         => "string"
>       "device"              => "string"
>       "deviceId"            => "string"
>       "value"               => "integer"
>       "isStateChange"       => "boolean"
>       "installedSmartAppId" => "string"
>       "isDigital"           => "boolean"
>       "isPhysical"          => "boolean"
>       "location"            => "string"
>       "locationId"          => "string"
>       "descriptionText"     => "string"
>       "description"         => "string"
>       "id"                  => "string"
>       "unit"                => "string"
>       "source"              => "string"
>     }
>   }
> }

Basically, I’m not interested in counts; I want useful graphs where multiple pieces of information can be layered on top of one another (e.g. correlating switches turning on with electricity usage…).

Edit: I’ve tried the following, which adds each major category to an index of its own. Any thoughts, @btk?

> output {
>   stdout { }
>   if [name] == "power" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "power"
>     }
>   } else if [name] == "contact" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "contact"
>     }
>   } else if [name] == "switch" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "switch"
>     }
>   } else if [name] == "temperature" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "temperature"
>     }
>   } else if [name] == "motion" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "motion"
>     }
>   } else if [name] == "lock" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "lock"
>     }
>   } else if [name] == "humidity" {
>     elasticsearch {
>       hosts => ["localhost:9200"]
>       index => "humidity"
>     }
>   }
> }
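As an aside, a per-name if/else chain like the one above can usually be collapsed into a single elasticsearch block by using a sprintf reference in the index name. This is a sketch, not tested against this exact pipeline; the whitelist conditional guards against events whose name isn’t one of the expected categories:

```
output {
  stdout { }
  if [name] in ["power", "contact", "switch", "temperature", "motion", "lock", "humidity"] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[name]}"
    }
  }
}
```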

Just a small update: Initial State just came out with a log export function. Initial State might now appeal to loggers AND to people who want to show off cool graphs and gauges. :smiley:


I don’t know how far down the path you are, but I discovered this project in the repo that logs to Fluentd and doesn’t require a trip to the cloud (internal addresses route fine to Logstash / Fluentd). It should be easy enough to tweak to get what you want out of it.


I’m attempting to get this to post to Splunk through the HTTP collector.

I’ve modified the last line of the file as such:

	sendHubCommand(new physicalgraph.device.HubAction("""POST /services/collector/event HTTP/1.1\r\nHOST: "${splunk_server}:${splunk_port}\r\nContent-Type:application/x-www-form-urlencoded\r\nContent-Length: ${length}\r\nHeaders: "Authorization: Splunk ${splunk_token}"\r\nAccept:*/*\r\n\r\n${command}""", physicalgraph.device.Protocol.LAN, "${splunk_server}:${splunk_port}"))

If I can get this to work I’ll gladly share the changes with everyone else who wants to ship their data off to splunk.

So with Splunk, I’m using the HTTP Event Collector to have ST ship the data off over HTTP to Splunk. The HTTP collector requires an authentication token, and it has to be passed in as a header.


curl -k  https://localhost:8088/services/collector/event -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" -d '{"event": "hello world"}'

However, I don’t think I’m passing the header in properly with the above syntax. Can anyone lend a hand with this? Thank you @FracturedLogic for sharing this code with us; I can’t wait to get this shipping its events off to Splunk.


Ok, so I think I like Brian Keifer’s code better, but I am open to both of them. I’m no developer, but I can script the crap out of some things with bash lol, and I can get a fairly good understanding of what I’m looking at. This is what I have so far; I think it wants to run, but I’m getting this error:

 java.lang.ClassCastException: org.codehaus.groovy.runtime.GStringImpl cannot be cast to java.util.Map @ line 165

I have the code here; please feel free to modify it and check in your changes and I’ll commit them, or I can just update directly :slightly_smiling:

Thank you all for your help; I think I am close to getting this to work. I’m not sure whether this will try to hit the server from the cloud or whether the hub will just send it directly to the internal Splunk address. I figured I could work that out once I get past this GString error.

EDIT: So I’ve now got it to hit my Splunk server from the cloud to internal. However, I’m now getting a 400 error from Splunk. I’ve checked my latest changes into git and posted here with a question, because I’m not sure the JSON object being created is Splunk-compliant. Since this was originally written for Logstash, I think Splunk isn’t happy with the formatting.

I’ve been looking at their example, and I wonder whether the items have to be in a specific order for Splunk to be happy with it. We’re getting there, but not quite there yet.
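For what it’s worth, Splunk’s HTTP Event Collector expects a JSON envelope with the actual payload under an "event" key; since the envelope is JSON, field order shouldn’t matter. A minimal example, with field names borrowed from the convert list earlier in this thread (the optional "time" field is epoch seconds):

```json
{
  "time": 1461716256,
  "event": {
    "name": "power",
    "value": "150",
    "displayName": "Aeon HEM"
  }
}
```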

It brings me great pleasure to announce that I have used Brian Keifer’s code and successfully created a Splunk event logger.

What is needed in order for this to work?

You will need to enable the HTTP Event Collector in Splunk. How do you do this?
Go here for directions.

If for some reason you don’t see it in your inputs section, you’ll need to disable the dbx app.

To do this, SSH into your Splunk server (assuming it’s on Linux; Windows should be a similar process), go to ${splunk_home}/etc/apps, move the dbx app somewhere else, then restart your Splunk server.

Once you are done with that, install the app and publish it.

Fill out the variables with your Splunk URL, port, and token. Keep in mind that this traffic will come directly from the ST servers, so you will need to open up a port on your firewall to accept it.

BTW, does anyone know what the ST subnet(s) are? I’d like to restrict my firewall to their source only. Looking at my logs, the traffic seems to come from several different IPs; some start with 54, others with 24.

Anyway, we finally have a Splunk logger for all of us.

What needs to be completed? The last value in the JSON should be the time, and Splunk wants the time in epoch format. With ST I can use the now() function for the time, but then I get some Java errors in the logs and everything just breaks, so for now I have left it off and Splunk seems happy. Most importantly, enjoy!

Brian, if you want me to completely write up my own app and not piggy back off of yours please let me know. Thanks.

Edit: In case you missed the link to the code up above, it’s right here.

Edit 2: It now has the ability to send your logs to your Splunk server directly on the LAN, or if you’re using Splunk Cloud you can still send remotely too.


@TheFuzz4 The only way that I could get the post to work is with the following syntax:

This will make it so you can post to a local IP instead of through the internet. It will stop the 400 Bad Request errors. I am using logstash, but you should be able to figure out how to adapt it to Splunk.

sendHubCommand(new physicalgraph.device.HubAction([
    method: "POST",
    path: "/",
    headers: [
        HOST: "${logstash_host}:${logstash_port}",
        "Content-Type": "application/json"
    ],
    body: json
]))


Thank you @erocm1231 for the tips; I’ll work on the code tonight to get it to hit internally. Also, any thoughts on how I can collect everything from ST instead of just events?

You could do what @btk does. Add the following method to any device handler or SmartApp that you want to log data from. Kind of tedious, but it is probably the only way you can do it.

Jason, thanks for the info. It’s working great with Splunk. One suggestion: add an option to enable SSL; I’m not sure how you do that within the SmartThings app. I will try just changing http -> https in the SmartThings app tomorrow and see if that works.

Have you made any dashboards/apps in Splunk yet for Smartthings, or just pulling data?

Did you get anything yet for pulling all the events? I would like to do that as well, to generate the dashboards with default info and then use the logs to update values.

BTW, how do you add more devices in the “doSubscriptions” section?

I have a z-wave door lock that isn’t showing up in the device list.

@tawollen I’ll work on figuring out the code for the true/false SSL option, but for now just updating the code with https should do the trick.

As for the dashboard in Splunk: right now I have one that shows my current energy usage and current energy production (just using radial dials for those), and a timechart with average energy usage vs. energy production over a 24-hour period.

Now, what I need to figure out: the Aeon HEM v2 checks in with a new energy reading every 2 seconds, which works great in Splunk for showing current usage. My solar pollster updates my SolarEdge device every 10 minutes or so (since SolarEdge only calls the mothership about every 10 minutes).

Here is the dilemma with the energy usage/production, though. The logger doesn’t capture all of the data that the HEM or SE puts out (amps, total usage, etc.), so I’m not sure how to do the math to turn the instantaneous readings into a total energy usage for the day. If anyone knows the math and would be willing to educate us, that would be greatly appreciated. I was thinking of taking current energy usage, dividing it by 60 for minutes in the hour, then multiplying that by 24, but I’m sure I’m nowhere near correct lol.
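The conversion being reached for here is an integration, not a single division: energy is power accumulated over time, so each pair of consecutive readings contributes average watts × elapsed hours, and dividing by 1000 turns Wh into kWh. A quick sketch in Python (the function name and sample data are mine, not from the logger):

```python
def kwh_from_samples(samples):
    """Integrate (epoch_seconds, watts) samples into kWh via the trapezoidal rule."""
    total_wh = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        # average power over the interval, times elapsed hours
        total_wh += (w0 + w1) / 2.0 * (t1 - t0) / 3600.0
    return total_wh / 1000.0

# Constant 1000 W for one hour is exactly 1 kWh
print(kwh_from_samples([(0, 1000), (3600, 1000)]))  # → 1.0
```

With 2-second HEM samples the intervals are nearly uniform, so this reduces to summing watts × 2/3600/1000 per reading over the day.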

While writing this up, I just came up with another great idea for a dashboard panel: I can show my actual current energy draw by taking that value and my solar production and doing the math there :slightly_smiling:

Will keep this thread updated as I figure out more coding things.

So I tried what @erocm1231 mentioned above about putting the debug in the SmartApps, but I couldn’t get it to actually fire, so I’ll have to mess with it some more soon.

Um I’ll have to look in the ST docs for the zwave door lock to see if I can figure out how to add that in.

Ok need some code help here

So I’ve written in this section to my code

    def http_protocol
    log.debug "Current SSL Value ${use_ssl}"
    if (use_ssl == true) {
        log.debug "Using SSL"
        http_protocol = "https"
    } else {
        log.debug "Not Using SSL"
        http_protocol = "http"
    }
    log.debug http_protocol

    def params = [
        uri: "${http_protocol}://${splunk_host}:${splunk_port}/services/collector/event",
        headers: [
            'Authorization': "Splunk ${splunk_token}"
        ],
        body: json
    ]

The problem is that although I’m setting SSL to true, ST keeps passing it in as not true.

I’m not exactly sure what I’m missing here, since the logs do show that I have the switch set to true, but for some reason the comparison isn’t working. Thanks.

Ok, I’ve updated the code with the ability to enable/disable SSL. Feel free to grab it from GitHub, and if you haven’t added me as a GitHub source in your IDE, do that as well so you won’t miss the changes when I push them. Thanks.

@tawollen Since I don’t have a Z-Wave lock, I’m not sure exactly how to test it. If I give you some beta code to try out, can you give it a whirl and let me know?

sure… I think i sent you a private message here…

I have installed an ELK server on Ubuntu 14.04 based on this guide.

I have port-forwarded both 9200 and 5044 on my router. After installing the Groovy app by @btk, I can see activity in ST Live Logging.

The Logstash configuration consists of 3 files (one each for input, filter, and output); you can see them at the DigitalOcean link above.

Hitting “Discover” in Kibana, I see no data.

Can you please help me get a working ELK setup with ST? Any help appreciated.

Is the data making it to ElasticSearch?

What do the SmartApp/Logstash/Elasticsearch logs say? Are there any errors?

At 12:00 the log said “Updated with settings:” and then listed the chosen devices.
In the settings I have given HTTPS port 5044, as set in the Logstash conf, and the IP is from DynDNS.
Some hours after that it continues with “debug” entries containing JSON data from the devices, and then an “error” with: java.lang.SecurityException: Endpoint is blacklisted @ line 164