Best way to collect logs from the hub?

Ok, I updated the code with the ability to enable/disable SSL. Feel free to grab it from GitHub, and if you haven’t added my repo as a GitHub source in your IDE, do that as well so you won’t miss the changes when I push them. Thanks.

@tawollen Since I don’t have a Z-Wave lock, I’m not sure exactly how to test it. If I give you some beta code to try out, can you give it a whirl and let me know?

Sure… I think I sent you a private message here…

I have installed an ELK server on Ubuntu 14.04 based on this guide.

I have port forwarded both ports 9200 and 5044 on my router, and I have installed the Groovy app by @btk (I can see it in ST Live Logging).

The Logstash configuration consists of 3 files (one each for input, filter, and output)
[you can see them on Digital Ocean Link above]

Clicking “Discover” in Kibana, I see no data.

Can you please help me so that I have a working ELK with ST?

Any help appreciated.

Is the data making it to ElasticSearch?

What do the SmartApp/Logstash/Elasticsearch logs say? Are there any errors?

At 12:00 the log said “Updated with settings:” and then listed the chosen devices.
In the settings I have entered HTTPS port 5044, as set in the Logstash conf, and the IP is from DynDNS.
Some hours after that it continues with “debug” and JSON with data from the devices, and then “error” with “java.lang.SecurityException: Endpoint is blacklisted @ line 164”.

I have now also removed HTTPS from the app, and it throws “org.apache.http.NoHttpResponseException: The target server failed to respond @ line 164”.

I’ve seen that blacklisted message before with incorrect addresses. Make sure you’ve got the right address in your SmartApp config.

If it still fails, you’d need to find out why it’s “blacklisted”. Perhaps ST support would be able to help there?

The Digital Ocean guide talks about Filebeat, which actually replaced logstash-forwarder as the “forwarder” of the packages.
Could this have anything to do with the app? Does it perhaps only support the old logstash-forwarder?

If you’re using filebeat to get your data into Elasticsearch, you’ll need to write your SmartThings data to a file and then point filebeat at that file.
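If you do go that route, a minimal Filebeat config of that era might look like the sketch below. The log file path and the Logstash host are illustrative assumptions, not from the guide:

```yaml
# filebeat.yml (sketch): ship a local log file to Logstash.
# Path and host below are assumptions for illustration only.
filebeat:
  prospectors:
    - paths:
        - /var/log/smartthings.log   # hypothetical file your ST data is written to
      input_type: log

output:
  logstash:
    hosts: ["localhost:5044"]
```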

The SmartApp you’re using is designed to talk directly to logstash via the json codec.
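For reference, a Logstash input of that shape would be a listener with the json codec on whatever port you put in the SmartApp (5044 in this thread). This is a sketch only; whether your SmartApp posts over HTTP or raw TCP determines which input plugin you actually need:

```
# logstash input config (sketch): accept JSON events posted by the
# SmartApp directly, no filebeat/logstash-forwarder in between.
input {
  http {
    port  => 5044
    codec => json
  }
}
```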

I have been running for more than a year a periodic custom SmartApp which sends selected log events through one or several emails. Since ST retention is only 7 days, I send those Log Dumps every 5 days, soon enough to reactivate the SmartApp if its schedule fails.
Since I use those Log Dumps for statistics purposes, I send them as attached files, formatted for Excel retrieval.
I did recently update my SmartApp, migrating from the Mandrill email web service to Mailjet, an alternate free web service (starting April 27, Mandrill costs a minimum of $360/year).

The main drawback of this email log dump solution is that some emails never get delivered (spam blacklists on the recipients’ POP side), so I use 3 recipient mailboxes from 3 different providers, hoping not all 3 blacklist Mailjet’s SMTP servers at the same time.
And as for any periodic SmartApp, I get very poor reliability from ST cloud, scheduled SmartApps activations failing about 20% of the time…

Had I been aware of the “forever retention option” for $5 a month, I would probably have used it and saved myself a lot of grief… :smiley:

I’m trying to implement the Splunk Logger that TheFuzz4 made, using SSL. Great work by the way!

I’ve got everything working with HTTP, but as soon as I enable HTTPS using LetsEncrypt SSL cert the log gives me the following error:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I can successfully send a test using curl to my https url:port.

Is there anything I can do, or do I have to get an SSL cert through a well-known authority? Also, is there a way for the ST hub to send messages directly to my local Splunk instance rather than traversing the interwebs?

Thanks in advance for your help!

Hey there @adrabkin. I need to write the code to use it locally; I just haven’t done it yet, as I’ve been too busy with the kiddos at the house :slight_smile: Let me see what I can drum up here. BTW, I just took the syslog code and then changed the send portion to shoot it off to Splunk. For the SSL, check your site out at https://www.ssllabs.com/ssltest/analyze.html?d= (just add your site name after the =).
Let it check it out and make sure that you have all of the right links in there. Dealing with SSL all the time at work, I have to make sure the appropriate cert chain is in there in order for it to work. Since I haven’t dealt with adding SSL into Splunk itself with the HTTP collector, I wonder if you could instead stick an Apache server in front of it and have it reverse proxy the request back to Splunk for you?
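A minimal sketch of that reverse-proxy idea, assuming Apache with mod_ssl and mod_proxy enabled; the hostname, cert paths, and backend address are all illustrative assumptions:

```apache
# Apache vhost (sketch): terminate SSL here, proxy to Splunk HEC.
# splunk.example.com and the cert paths are hypothetical.
<VirtualHost *:443>
    ServerName splunk.example.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/splunk.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/splunk.example.com/privkey.pem

    # Plain-http backend assumes SSL has been disabled on the HEC
    # itself (it is on by default); otherwise proxy to https instead.
    ProxyPass        /services/collector http://127.0.0.1:8088/services/collector
    ProxyPassReverse /services/collector http://127.0.0.1:8088/services/collector
</VirtualHost>
```

One advantage of this layout is that the Let’s Encrypt chain problem moves to Apache, which is easier to get right than Splunk’s own TLS config.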

I’ll try to get some time tomorrow to research the code to send it locally; I know I’ve been given the examples, I just need to do it. I think I’ll also add a switch for whether you want to use it locally or remotely, so it will still work for those who are maybe using Splunk Cloud. Anyways, let me know how the SSL research goes. BTW, love using LetsEncrypt; that’s who all of my SSLs are now through :slight_smile:

I discovered this thread late, but for those who may be interested in collecting logs through emails, here is how to do it.
Beware that, as with everything else in SmartThings, reliability is uncertain: the SmartApp which sends the emails may not wake up as scheduled, the receiving POP3 server may reject the emails as spam, the Mailjet SMTP server may be blacklisted, etc…
YMMV…

Thanks!

I checked out the ssllabs.com link and I did have a chain problem, which I resolved by concatenating the cert and chain files that were generated. Now I have no chain issues, ssllabs shows “Certification path: Trusted”, and I get an A- score, but I still get the “unable to find valid certification path to requested target” error in the live logs on SmartThings :frowning:

Yeah, not sure there then on that part. I haven’t had a chance yet to start the code work for the local side. Let me see what the family at home is up to tonight, and I’ll see if I can get some cycles to work on it.

Ok so that was actually a lot easier than I expected it to be.

If you go do a new pull from my repo you’ll get the ability to now send the data to splunk directly still using the HTTP event collector but it is now all in house. There is a true/false selector for if you want to use local or not. Let me know what you think.

BTW currently the ssl switch only works if you’re doing remote. I need to figure out how to do the code to disable the ssl switch if local is set to true. Let me see what I can do with that.

Wow you rock!! That was nicely done!

I was starting to mess around with getting the local comms working, but couldn’t quite figure out the proper formatting required for the sendHubCommand POST command.

Great work! I’m sure that little snippet will be reusable to many others in the near future!

[TL;DR] - Problems with local Splunk communications having to do with string length. Modified the POST to re-order headers and include an explicit Content-Length. See bottom of post for a code snip.

Sorry for the long post – I’ve made a few edits throughout the night :slight_smile:

So I’m running into a weird issue that I’m trying to troubleshoot.

I’m logging locally successfully for most of my events; however, my MultiSensor device is not logging to Splunk when I do the local logging. If I change to external, everything logs successfully.

I’ve turned on the debug in the SmartApp to show the JSON, and the only thing I can think of is that the JSON has more data (a longer string) than my other devices. When I look at my Splunk search log, going through external, the successful log entry for my MultiSensor is 611 characters, while my others average around 570.

Could there be a limit to the sendHubCommand? I’ve seen in other versions of the sendHubCommand there’s a length variable that gets passed to the POST command…

Wondering if anyone has run into similar issues with the SendHubCommand…

*adding debug info from Wireshark capture: looks like the Multisensor comes back with a 400 Bad Request for some reason

Successful post into splunk:

POST /services/collector/event HTTP/1.1
Accept: */*
User-Agent: Linux UPnP/1.0 SmartThings
HOST: internal.ip.addr:8088
Authorization: Splunk my-splunk-key-removed-for-privacy
Content-Type: application/json
Content-Length: 585

{"event":{"date":"Wed May 18 05:20:59 UTC 2016","name":"switch","displayName":"Downstairs Hallway","device":"Downstairs Hallway","deviceId":"92e89916-1834-47cc-ab74-fafc9603e8a1","value":"off","isStateChange":"true","id":"4e3adee0-1cb8-11e6-a452-d052a87298f7","description":"zw device: 05, command: 2503, payload: 00","descriptionText":"Downstairs Hallway switch is off","installedSmartAppId":"null","isoDate":"2016-05-18T05:20:59.292Z","isDigital":"true","isPhysical":"false","location":"myHome","locationId":"0c6ee2e0-a119-135f-9b8e-27702b80e47d","unit":"null","source":"DEVICE",}}HTTP/1.1 200 OK
Date: Wed, 18 May 2016 05:20:59 GMT
Content-Type: application/json; charset=UTF-8
X-Content-Type-Options: nosniff
Content-Length: 27
Connection: Keep-Alive
X-Frame-Options: SAMEORIGIN
Server: Splunkd

{"text":"Success","code":0}

Unsuccessful post:

POST /services/collector/event HTTP/1.1
Accept: */*
User-Agent: Linux UPnP/1.0 SmartThings
HOST: internal.ip.addr:8088
Authorization: Splunk my-splunk-key-removed-for-privacy
Content-Type: application/json
Content-Length: 621

{"event":{"date":"Wed May 18 05:20:53 UTC 2016","name":"temperature","displayName":"MultiSensor UnderStairs","device":"MultiSensor UnderStairs","deviceId":"88c6e1d4-9573-49cd-a45f-93a937a87a93","value":"74.5","isStateChange":"true","id":"4ae153a0-1cb8-11e6-87a9-062254430c79","description":"zw device: 08, command: 3105, payload: 01 22 00 EC","descriptionText":"MultiSensor UnderStairs temperature is 74.5°F","installedSmartAppId":"null","isoDate":"2016-05-18T05:20:53.690Z","isDigital":"false","isPhysical":"false","location":"myHome","locationId":"0c6ee2e0-a119-135f-9b8e-27702b80e47d","unit":"F","source":"DEVICE",}}HTTP/1.1 400 Bad Request
Date: Wed, 18 May 2016 05:20:53 GMT
Content-Type: application/json; charset=UTF-8
X-Content-Type-Options: nosniff
Content-Length: 27
Connection: Keep-Alive
X-Frame-Options: SAMEORIGIN
Server: Splunkd

{"text":"No data","code":5}

… Few Hours Later …

After more debugging, I think I was correct that the length was the problem. The string that gets bounced is actually 622 bytes long, but the HTTP request was putting in a Content-Length of 621 (the character count).
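That off-by-one is the classic character-count vs byte-count mismatch: the degree sign in the MultiSensor’s descriptionText is one character but two bytes in UTF-8, so any non-ASCII character makes the two counts diverge. A quick illustration (in Python, just for convenience; the payload string is a made-up fragment):

```python
# Content-Length must count encoded bytes, not characters.
# The degree sign (U+00B0) is 1 character but 2 bytes in UTF-8.
payload = '{"descriptionText":"temperature is 74.5\u00b0F"}'

chars = len(payload)                    # character count (what String.length() gives)
octets = len(payload.encode("utf-8"))   # bytes actually sent on the wire

print(chars, octets)  # the byte count is one larger than the char count
```

This is why the Groovy fix below uses `json.getBytes().size()` rather than the string length.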

I ended up modifying the code and reordering some of the POST request to line up with a successful request, and added an explicit Content-Length in the header based on the json character length.

Here’s a snip:

def length = json.getBytes().size().toString()

sendHubCommand(new physicalgraph.device.HubAction([
    method: "POST",
    path: "/services/collector/event",
    headers: [
        'Authorization': "Splunk ${splunk_token}",
        "Content-Length": "${length}",
        HOST: "${splunk_server}",
        "Content-Type": "application/json",
        "Accept-Encoding": "gzip,deflate"
    ],
    body: json
]))

Will continue to monitor to see if I get all my events.

Need to figure out how to capture errors with the sendHubCommand.
