Best way to collect logs from the hub?

Cloud version? I don’t want to run my own server. Been there before.

No idea. If I were looking for such a thing, I’d google “hosted logstash”. You’ll probably need some sort of front-end on it too. Kibana’s fairly popular; Graylog2 is out there as well.

Hey @btk,

What are you using for your logstash conf file? Could you share an example?

Christopher

Continuing the discussion from Best way to collect logs from the hub?:

Using your SmartApp, I see it attempting to hit my server, but I get the following error:

10:01:45 PM: error org.apache.http.conn.ConnectTimeoutException: Connect to 10.0.0.20:9992 [/10.0.0.20] failed: connect timed out @ line 163
10:01:20 PM: debug JSON: {"date":"Wed Dec 02 03:01:19 UTC 2015","name":"power","displayName":"Aeon Home Energy Meter","device":"Aeon Home Energy Meter","deviceId":"":"DEVICE"}

I am able to connect to the server with telnet, and I can see the data arrive on my server.

e.g.:


telnet 10.0.0.20 9992

testing 123


The following appears in the log:

{"message":"testing 123\r","@version":"1","@timestamp":"2015-12-02T03:04:47.342Z","host":"172.17.42.1:43307","type":"syslog"}


My logstash.conf TCP section is as follows:

tcp {
    port => 9992
    type => syslog
}


Can you post your TCP section?

thanks

I realized that I was giving it an internal IP address while the request comes from the cloud. I updated it with an external address and it seems to be going through.

I still get the following error:

java.net.SocketTimeoutException: Read timed out @ line 163

thanks

Getting the same thing. If I use the hostname of my Raspberry Pi (which hosts this ELK stack), I get a ‘blacklisted’ error. Using the IP, I’m getting this one.

For the record, this is the logstash.conf I’m using:

> input {
>     syslog {
>         type => syslog
>         port => 9992
>         codec => "json"
>     }
> }
>
> filter {
>     json {
>         # parse JSON in "message" field,
>         # put resulting structure in "data" field
>         source => "message"
>     }
> }
>
> output {
>     stdout { }
>     elasticsearch { hosts => ["localhost:9200"] }
> #    elasticsearch {
> #        cluster => "SmartThings"
> #    }
> }

Here’s what I’m using:

input {
    http {
        host => "x.x.x.x"
        port => "xxxxx"
        codec => "json"
    }
}

output {
    elasticsearch {
        host => "localhost"
        index => "st_events"
    }
}

Here is my logstash config file. (This is working right now.)

input {
    http {
        port => 9992
        codec => "json"
    }
}

filter {
    json {
        # parse JSON in "message" field,
        # put resulting structure in "data" field
        source => "message"
    }
}

output {
    stdout { }
    elasticsearch { hosts => ["localhost:9200"] }
}
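
If it helps, you can sanity-check the http input by posting a test event with curl (localhost and 9992 assumed from the config above):

    curl -H "Content-Type: application/json" -d '{"name":"power","value":"42"}' http://localhost:9992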

I’m noticing that Elasticsearch and Kibana have no idea what the types of the logged fields are. Maybe I’m using it wrong or something; my impression was that it would infer the data types so you could use them. It gets the dates well enough, but I’ve found it necessary to mutate the data in logstash.conf to get it to draw anything.

I was also looking into multiple indices. Right now it seems to lump everything together; ideally I would like an index st_events, then by device, then value and unit, kind of hierarchically. Anyway, my logstash.conf filter section is below.

> filter {
>     mutate {
>         convert => { "name" => "string" }
>         convert => { "displayName" => "string" }
>         convert => { "device" => "string" }
>         convert => { "deviceId" => "string" }
>         convert => { "value" => "integer" }
>         convert => { "isStateChange" => "boolean" }
>         convert => { "installedSmartAppId" => "string" }
>         convert => { "isDigital" => "boolean" }
>         convert => { "isPhysical" => "boolean" }
>         convert => { "location" => "string" }
>         convert => { "locationId" => "string" }
>         convert => { "descriptionText" => "string" }
>         convert => { "description" => "string" }
>         convert => { "id" => "string" }
>         convert => { "unit" => "string" }
>         convert => { "source" => "string" }
>     }
> }

Basically, I’m not interested in counts; I want useful graphs where multiple pieces of information can be layered on one another (i.e. correlating switches turning on with electricity usage…).

Edit: I’ve tried the following, which adds each major category to an index of its own. Any thoughts, @btk?

> output {
>     stdout { }
>     if [name] == "power" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "power"
>         }
>     } else if [name] == "contact" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "contact"
>         }
>     } else if [name] == "switch" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "switch"
>         }
>     } else if [name] == "temperature" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "temperature"
>         }
>     } else if [name] == "motion" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "motion"
>         }
>     } else if [name] == "lock" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "lock"
>         }
>     } else if [name] == "humidity" {
>         elasticsearch {
>             hosts => ["localhost:9200"]
>             index => "humidity"
>         }
>     }
> }
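
A shorter route to the same layout, assuming name is always set to one of those event types, is a field reference in the index option instead of the if/else chain:

> output {
>     stdout { }
>     elasticsearch {
>         hosts => ["localhost:9200"]
>         index => "%{name}"
>     }
> }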

Brian,
Just a small update: Initial State just came out with a log export function. Initial State might now appeal to loggers AND people who wanna show off cool graphs and gauges. :smiley:


I’m attempting to get this to post to Splunk through the HTTP event collector.

I’ve modified the last line of the file as follows:

	sendHubCommand(new physicalgraph.device.HubAction("""POST /services/collector/event HTTP/1.1\r\nHOST: "${splunk_server}:${splunk_port}\r\nContent-Type:application/x-www-form-urlencoded\r\nContent-Length: ${length}\r\nHeaders: "Authorization: Splunk ${splunk_token}"\r\nAccept:*/*\r\n\r\n${command}""", physicalgraph.device.Protocol.LAN, "${splunk_server}:${splunk_port}"))

If I can get this to work I’ll gladly share the changes with everyone else who wants to ship their data off to splunk.

So with Splunk, I’m using the HTTP event collector to have ST ship the data off over HTTP to Splunk. The HTTP collector requires that we use an authentication token, and it has to be passed in as a header.

i.e.

curl -k  https://localhost:8088/services/collector/event -H "Authorization: Splunk B5A79AAD-D822-46CC-80D1-819F80D7BFB0" -d '{"event": "hello world"}'

However, I don’t think I’m passing the header in properly with the above syntax. Can anyone lend a hand? And thank you @FracturedLogic for sharing this code with us; I can’t wait to get this shipping its events off to Splunk.


Ok, so I think I like Brian Keifer’s code better, but I am open to both of them. I’m no developer, but I can script the crap out of some things with bash, lol, and I can get a fairly good understanding of what I’m looking at. So this is what I have so far; I think it wants to run, but I’m getting this error:

 java.lang.ClassCastException: org.codehaus.groovy.runtime.GStringImpl cannot be cast to java.util.Map @ line 165

I have the code here: https://github.com/TheFuzz4/SmartThingsSplunkLogger/blob/master/splunklogger.groovy
Please feel free to modify the code and check in your changes and I’ll commit them, or I can just update it directly :slightly_smiling:

Thank you all for your help; I think I am close to getting this to work. I’m not sure if this will try to hit the server from the cloud or if the hub will just send it directly to the internal Splunk address. I figured I could sort that out once I get past this GString error.

EDIT: So I’ve now got it to hit my Splunk server from the cloud to internal. However, I’m now getting a 400 error from Splunk. I’ve checked my latest changes into git, and I’ve posted a question here https://answers.splunk.com/answers/369266/output-smartthings-logs-to-http-event-collector.html?minQuestionBodyLength=80 because I’m not sure the JSON object being created is Splunk-compliant. Since this was originally written for logstash, I think Splunk isn’t happy with the formatting.

I’ve been looking at their example here http://dev.splunk.com/view/event-collector/SP-CAAAE6P and I wonder if the items have to be in a specific order for Splunk to be happy with it. We’re getting there, but not quite there yet.

It brings me great pleasure to announce that I have used Brian Keifer’s code and successfully created a Splunk event logger.

What is needed in order for this to work?

You will need to enable the HTTP event collector in Splunk. How do you do this?
Go here for directions.

If for some reason you don’t see it in your input section, you’ll need to disable the dbx app.

To do this, SSH into your Splunk server (assuming it’s on Linux; Windows should be the same process), go to ${splunk_home}/etc/apps, move the dbx app somewhere else, and then restart your Splunk server.
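
Something like this, assuming a default install path:

    mv $SPLUNK_HOME/etc/apps/dbx /tmp/dbx.disabled
    $SPLUNK_HOME/bin/splunk restart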

Once you are done with that, install the app and publish it.

Fill out the variables with your Splunk URL, port, and token. Keep in mind that this traffic is going to come directly from the ST servers, so you will need to open a port on your firewall to accept the traffic from ST.

BTW, does anyone know what the ST subnet(s) are? I’d like to restrict my firewall to their source addresses only. Looking in my logs, the traffic seems to come from several different IPs; some start with 54, others with 24.

Anyway, we finally have a Splunk logger for all of us.

What still needs to be done? The last value in the JSON should be the time, and Splunk wants the time in epoch format. With ST I can use the now() function for the time, but then I get some Java errors in the logs and everything just breaks, so for now I have left it off and Splunk seems to be happy. Most importantly, enjoy!
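
For anyone who wants to take a crack at it: Splunk’s HEC time field expects epoch seconds, while ST’s now() returns epoch milliseconds, so something along these lines ought to work (just a sketch; json_map stands in for whatever map of event fields you already build):

    // HEC envelope: epoch-seconds timestamp plus the event payload.
    // json_map is a placeholder for the map of event fields you already build.
    def payload = [
        time:  now().intdiv(1000),  // now() is ms; HEC wants seconds
        event: json_map
    ]
    def json = groovy.json.JsonOutput.toJson(payload)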

Brian, if you want me to write up my own app completely instead of piggybacking off of yours, please let me know. Thanks.

Edit: In case you missed the link to the code up above, it’s right here

Edit 2: It can now send your logs to your Splunk server directly on the LAN, or, if you’re using Splunk Cloud, you can still send them remotely too.


@TheFuzz4 The only way that I could get the post to work is with the following syntax:

This will let you post to a local IP instead of going through the internet, and it will stop the 400 Bad Request errors. I am using logstash, but you should be able to figure out how to adapt it to Splunk.

sendHubCommand(new physicalgraph.device.HubAction([
    method: "POST",
    path: "/",
    headers: [
        HOST: "${logstash_host}:${logstash_port}",
        "Content-Type": "application/json"
    ],
    body: json
]))
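
For what it’s worth, handing HubAction a map like this lets the hub assemble the HTTP request itself, rather than you hand-building the raw request string, which is where the malformed headers were coming from.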


Thank you @erocm1231 for the tips; I’ll work on the code tonight to get it to hit internally. Also, any thoughts on how I can collect everything from ST instead of just events?

You could do what @btk does. Add the following method to any device handler or SmartApp that you want to log data from. Kind of tedious, but it is probably the only way you can do it.
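
Roughly, it’s a helper along these lines (this is a sketch, not @btk’s exact code; the logstash_host/logstash_port preferences are made-up names):

    // Sketch: forward an event to a LAN collector as JSON.
    def logField(evt) {
        def json = groovy.json.JsonOutput.toJson([
            date:        evt.date.toString(),
            name:        evt.name,
            value:       evt.value,
            displayName: evt.displayName,
            deviceId:    evt.deviceId
        ])
        // logstash_host / logstash_port are hypothetical app preferences
        sendHubCommand(new physicalgraph.device.HubAction([
            method: "POST",
            path: "/",
            headers: [
                HOST: "${logstash_host}:${logstash_port}",
                "Content-Type": "application/json"
            ],
            body: json
        ]))
    }

Then subscribe each attribute you care about to it from the SmartApp, e.g. subscribe(meters, "power", logField).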

Jason, thanks for the info. It’s working great with Splunk. One suggestion is to add an option to enable SSL; I’m not sure how you do that within the SmartThings app. I will try just changing http -> https in the SmartThings app tomorrow and see if that works.

Have you made any dashboards/apps in Splunk yet for SmartThings, or are you just pulling data?

Did you get anything yet for pulling all the events? I would like to do that as well, to generate the dashboards with default info and then use the logs to update values.

BTW, how do you add more devices in the “doSubscriptions” section?

I have a Z-Wave door lock that isn’t showing up in the device list.

@tawollen I’ll work on figuring out the code for a true/false SSL option, but for now just updating the code to https should do the trick.

As for the dashboard in Splunk: right now I have a dashboard that shows my current energy usage and current energy production (just using radial dials for those), plus a timechart of average energy usage vs. energy production over a 24-hour period.

What I need to figure out next: the Aeon HEM v2 checks in with a new energy-usage reading every 2 seconds, which works great in Splunk for showing current usage, while Pollster updates my SolarEdge device only every 10 minutes or so (since SolarEdge only calls home to the mothership about every 10 minutes).

Here is the dilemma with the energy usage/production, though: the logger doesn’t capture all of the data that the HEM or SE puts out (i.e. amps, total usage, etc.), so I’m not sure how to do the math to turn the real-time usage readings into a total for the day. If anyone knows the math and would be willing to educate us, that would be greatly appreciated. I was thinking of taking the current energy usage, dividing it by 60 for the minutes in an hour, and then multiplying by 24, but I’m sure I’m nowhere near correct, lol.
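
For reference, the usual math is to integrate power over time rather than divide by 60 and multiply by 24: energy in kWh = sum of (watts × interval in hours) ÷ 1000. A rough Groovy sketch, with the sample data made up:

    // Integrate instantaneous power readings (watts) into energy (kWh).
    // samples is made up: [timestampMillis, watts] pairs from the HEM.
    def samples = [[0L, 450.0], [2000L, 460.0], [4000L, 455.0]]
    def kwh = 0.0
    for (int i = 1; i < samples.size(); i++) {
        def hours = (samples[i][0] - samples[i - 1][0]) / 3600000.0
        kwh += samples[i - 1][1] * hours / 1000.0  // W x h / 1000 = kWh
    }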

While writing this up, I came up with another idea for a dashboard panel: I can show my actual net usage by taking that value and my solar production and doing the math there :slightly_smiling:

Will keep this thread updated as I figure out more coding things

I tried what @erocm1231 mentioned above about putting the debug call in the SmartApps, but I couldn’t get it to actually fire, so I’ll have to mess with it some more soon.

I’ll have to look in the ST docs for the Z-Wave door lock to see if I can figure out how to add that in.

Ok, I need some code help here.

I’ve written this section into my code:

def http_protocol
log.debug "Current SSL Value ${use_ssl}"
if (use_ssl == true) {
    log.debug "Using SSL"
    http_protocol = "https"
} else {
    log.debug "Not Using SSL"
    http_protocol = "http"
}

log.debug http_protocol

def params = [
    uri: "${http_protocol}://${splunk_host}:${splunk_port}/services/collector/event",
    headers: [
        'Authorization': "Splunk ${splunk_token}"
    ],
    body: json
]

The problem is that although I’m setting SSL to true, ST keeps treating it as not true.

I’m not exactly sure what I’m missing here, since the logs do show the switch set to true, but for some reason the comparison isn’t working. Thanks.
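
One guess: the preference may be coming through as the string "true" rather than a Boolean, in which case use_ssl == true never matches even though the log prints true. Coercing it to a string (or just testing truthiness) sidesteps that:

    // Guess: compare as strings so Boolean true and the string "true" both match.
    def http_protocol = (use_ssl?.toString() == "true") ? "https" : "http"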