Thoughts on Industry Standards for Vulnerability Discovery Disclosures


(Jason) #1

This is a more specific discussion about community thoughts on the subject. Those thoughts were becoming numerous enough to clog up some other threads.

The article that sparked this conversation is:
Researchers say there are serious security problems in Samsung’s SmartThings

The other relevant thread is over here:

Continuing the discussion from SmartThings Platform Security - Response from Alex:

(Jason) #2

I guess I don’t really see how it’s a grey area. They did all of the work, it’s their data, they should be able to do with it what they please.

In the referenced incident it’s really hard to guess what happened and what was said. For all we know ST approved of this going public, but even if they didn’t, they clearly had plenty of time to do something about it if they thought it was a real issue.

It would be a very slippery slope to say that companies should have the right to control the stories about their security.

Don’t get me wrong, I’m not saying that all security vulnerabilities should be automatically published to the web either. I think common courtesy would be to give the vulnerable company a chance to fix their issues, but they are certainly not “owed” this courtesy.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #3

Thanks for spinning off this topic, Jason!

Let’s try to find some more up-to-date disclosure protocols that seem to be prevalent.

I don’t know offhand whether Michigan (@earlenceferns) referenced a particular protocol in their arrangements with SmartThings for this particular research and disclosure.

(Jason) #4

4 months is much longer than I would have waited to publish my findings.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #5

Yah… “the right” is not the correct wording. Perhaps … the “courtesy”? I’m not sure how slippery the slope is. Perhaps the researchers and the company would have to agree on the pre-press information content.

Obviously the primary purpose of advance disclosure to the vendor is to allow the vendor time to secure or remediate the problem before it is published for bad hackers to exploit.

At the same time, the vendor is possibly in the best position to help its customers self-mitigate: i.e., vulnerability discoverers/researchers don’t have the means to contact all the customers, nor the relationship with them, nor the expertise in the affected product(s). While the vendor may be more inclined toward PR damage control, that doesn’t mean it lacks ways to assist customers in protecting their own accounts, uninstalling untrustworthy apps, etc.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #6

Agreed… CERT’s policy is 45 days (though they will sometimes extend it if they believe the vendor is actively working on the problem and it remains beneficial to the public to hold off publication and limit exploitation risk).


Here’s a reference from about five years ago. Around that time a lot of these conversations went in a different direction from the earlier ones, because the people discovering exploits had divided into two very distinct groups: white hat hackers, who expected to be paid for what they found, and academics, who were generally publishing journal papers. Very different interests.

The white hat hackers were usually just fine with the company handling all the press about it. In fact sometimes they required that their name not be disclosed. What they wanted was to get paid.

Academic researchers follow the conventions from that community, which is that if there is an expected publication date, no one else who is consulted about the topic is supposed to publish anything about it until after that publication date.

The earlier group were mostly white hat hackers who were not yet expecting to be paid, who were mostly building reputations and, they felt, doing a public service, and who would notify the companies just as a courtesy, often only a few days before they released the information.

Between 2005 and 2010 the ethical shift was to give the companies more time to repair the glitch before anyone went public. But the serious academics wanted the right of first publication.

So, as happens with many things, the culture evolved first, and that changed the evaluation of the ethics.

(Jason) #8

Yes, I would agree that if it were an industry ‘courtesy’ it isn’t really a slippery slope. But then again, our legal system has been known to turn ‘courtesies’ into ‘rights’ before.

If one person can do it, so can others…

(James) #9

I would consider this to be a grey hat hacker.


Generally in the security research community, we give notice to the vendor immediately after we complete writing up our paper (a paper is the research output – it embodies all the new ideas, and it is what we value the most, besides of course trying to improve security). Therefore, we notified SmartThings in December 2015. We were happy that SmartThings responded positively, saying they were looking at the vulns and would seek to improve the security of the system. The general idea is that the vendor wants time to fix the vulnerabilities, and therefore would seek to delay announcing any details, since any hacker on the internet could potentially try to take advantage of the current state of affairs. However, we selected a date for our release (and co-ordinated with SmartThings) since the work was going to be public anyway within a couple of weeks (IEEE S&P 2016 is happening May 23rd–25th; I’m presenting on the 24th). By the way, if you are interested, S&P is one of the most high-profile security research conferences in the world (and also one of the most selective in what they decide to publish). Media attention is generally assumed at such venues anyway.

I’m also seeing a lot of defensive posts here. Let me re-iterate that we view this not as a hack to break security purely for its own sake, but rather as an opportunity to study these new technologies, determine how they fail in practice, and ultimately improve them. Before you can defend effectively, you need to understand the attacker and the attack models.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #11

Definitely! It is considered unethical to request payment or bounty because that is in conflict with the altruistic purposes of deferred disclosure.

A white hat realizes there is genuine benefit to deferring disclosure for a “reasonable” period of time so that the vendor has an opportunity to acknowledge and mitigate the vulnerability before it is subject to wider exposure (and greater opportunity for black hat hackers to exploit).

Any conflict of interest (business partnerships, bounties that are not standard offerings of the vendor, etc.) throws the process into questionable territory.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #12

What do you mean by “defensive”? (I’m not being facetious, honestly; I’m trying to understand how you are classifying the various types of responses).

My personal feeling is that it is challenging for the gadget/tech media (including WIRED and others that have covered the story) to summarize your detailed research and conclusions in a way that is educational to consumers. They need not downplay the risks, but, arguably, they tend to hyperbolize for the sake of attention (and clicks / ad revenue).

I am also not certain that SmartThings has provided a sufficiently educational response to this report.


I also wanted to mention that in the last few years many large companies have subscribed to HackerOne’s “bug bounty” program, where people who find an exploit are encouraged to report it through that service and will indeed be paid if they are the first to report it. It seems to work out well for everybody, and I don’t consider it unethical. To me, this is more like when a county government puts out a bounty on a particular invasive species and everybody who wants to participate in the collection gets paid for what they collect.

Zendesk is one of the companies that participates:

Reporting Security Vulnerabilities to Zendesk

Zendesk aims to keep its Service safe for everyone, and data security is of utmost priority. If you are a security researcher and have discovered a security vulnerability in the Service, we appreciate your help in disclosing it to us in a responsible manner.

Our responsible disclosure process is hosted by HackerOne’s bug bounty program. Please visit our HackerOne portal located at to report any security vulnerabilities. Only vulnerabilities submitted there will be eligible for a reward.

If you previously responsibly disclosed a vulnerability to us, thank you. Our list of contributors continues to live on at HackerOne and can be found here:

Android itself has a bounty program that can pay a couple thousand dollars.

We’re launching Android Security Rewards to help reward the contributions of security researchers who invest their time and effort in helping us make Android more secure. Through this program we provide monetary rewards and public recognition for vulnerabilities disclosed to the Android Security Team. The reward level is based on the bug severity and increases for higher quality reports that include reproduction code, test cases, and patches.

Chrome, Google, Dropbox, Facebook, Pebble, AT&T, Deutsche Telekom, GitHub, Instagram, PayPal, Spotify and many other major companies have similar published programs, each with their own rules and reward levels.

So I do think that, at this point, as a general industry practice, receiving a bounty for being the first person to report and adequately document a critical bug is considered both ethical and reasonable.

Note that this is all a big change from 10 or 15 years ago. But most companies welcome these reports and do find them valuable. :sunglasses:

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #14

If the vendor/company offers bounties before vulnerabilities are discovered and reported, then this is an ethical, business-savvy practice.

However, if vulnerability discoverers demand a “reward” for responsibly deferring public disclosure when no bounty was/is offered, then that is blackmail, and unethical. If they offer extra deferral for extra rewards… even worse.


It’s interesting that Samsung’s smart TV has a very generous bug bounty program. I wonder if the SmartThings features will be covered under that? I would think the ones associated directly with the TV feature would have to be, but maybe they will count SmartThings as a “third-party” product/service, the way they treat the Netflix app.

I don’t know for sure if this would count as a “common industry practice,” but I have noticed that products sold primarily to less tech-savvy consumers tend to have more generous bounty programs, probably because the ones marketed to power users likely get a lot of reports turned in through their regular support channel.

A generous bug bounty program encourages white hat hackers to test and document products that they might not ordinarily spend that much time with.

( co-founder Terry @ActionTiles; GitHub: @cosmicpuppy) #16


I firmly believe we have a number of potential “white hat hackers” here in the SmartThings Community whose efforts would be incentivized by generous reward policies.

(Jason) #17

I don’t know that demand is the right word. Again, it’s their data; they are the company here. No one but the owner of the data should be able to decide what they can do with it.

Look at these scenarios.

  • If my neighbor knows where to find buried treasure on my property, it’s not unethical for him not to tell me where it is. If he offers the information for sale, the terms would be his choice.

  • If I am selling a service to bury/hide others’ treasure in my yard, and this neighbor still knows the location of said treasure, the terms of him giving up this information still lie in his court.

I suppose it really depends on how they came up with this data.

  • If I advertise that it’s OK for people to come digging around on my property, or if my neighbor simply interviewed everyone who has ever been on my property to come up with the answer to where it’s hidden, he owes me nothing, but I would appreciate a heads-up, and would even pay for information on how he found out so that I wouldn’t make the same mistakes.

The equivalent would be to say that I uploaded my tax documents to Google Drive with the permission set to ‘anyone with this link can open’ or ‘public’, and then got mad if someone found that link. I am the one who put it out there. Yes, I would certainly prefer that if someone came across such a link to a personal document they would notify me so that I could remedy the situation; certainly it would be a courtesy. They don’t owe it to me, though; I am the one who made it available.

If they offered to sell the information to the company for a price, with time to fix the vulnerability, there is nothing unethical about that to me. If the company declines the offer, it would also not be unethical to then post the story. It’s also not blackmail or unethical to go right ahead with the story.

  • If it turns out my neighbor had to do something nefarious, like coming onto my posted private property and physically digging around to find out where this treasure is buried, well, the whole situation is different.

The equivalent to this would be if I had the same document’s permission set to ‘private’ and someone found a public URL to it.

To me it’s obvious that if laws were broken to acquire this data, it would be blackmail.

(Never Trust @bamarayne) #18

Thanks for the research, the engagement with ST and the follow ups here.

24 years in the security industry here. I did some research and vulnerability disclosure in the distant past, before there were well-thought-out responsible disclosure protocols.

My question for you, as the researcher here: where do you see the smart home vulnerabilities eventually taking us? I.e., there will always be vulnerabilities, so where do you see the market ending up in that regard? What will be done with the vulnerabilities, etc.? Of course this is speculative; I’m just interested in your perspective on that. If you plan on covering that as part of your presentation, I would love to get a copy after it’s public.

I won’t be at S&P this year or I would come see you in person. I hit Black Hat, RSA, and SecTor annually, and try to hit a few others depending on schedules, etc.

(Never Trust @bamarayne) #19

There are many different types of defensive posturing I see here in the community in that regard.

The journalists are hacks. The vulnerabilities don’t matter, and if you think they do and you bank online, you are a hypocritical joke. Duh, only a dumbass uses a third-party app and expects their system to be secure. Security is an illusion. This type of stuff only serves to distract SmartThings and confuse the public. I hope that all of this research doesn’t force SmartThings to make the user experience cumbersome. Et cetera.

Folks will literally accept a vulnerable smart home so that SmartThings doesn’t institute proper security protocols that users feel will impact their experience. That is, until real-world examples of vulns affecting the lives of those using smart home technology hit 60 Minutes; then all the Pollyannas will scream bloody murder. It’s a constant cycle; this is nothing new. However, it got old for me long ago.


I think defense in depth is really the answer here, and where, in my opinion, SmartThings, and the smart home industry in general, should go. From what SmartThings told me, they have started tightening their vetting processes, which is a good step forward. SmartThings also arranged a video call with my team to discuss potential ways to address overprivilege, and we have offered our co-operation going forward.

Vetting processes form the first line of defense; proper privilege forms the second. The hope in this research is that we can get started on thinking about additional lines of defense. As you rightly say, there will always be vulns in systems (any kind of computer system), and the idea is to put up enough layers, enough hoops for the attacker to jump through, that the value of the information gained from a successful attack is much less than the effort the attacker put into getting that information in the first place. Of course this is a balancing act, in terms of performance and in terms of usability.

I’m seeing several arguments here that we missed the point in our work because users simply say yes to all permissions. But I would like to point out that the SmartThings permission flow/installation UI is already an improvement over Android-style permission prompts, because the install UI walks the user through the purpose of the app, explaining why it needs access to devices. In comparison to an Android-style (install-time) system, this is much better because abstract permissions are not simply thrown in the user’s face. Furthermore, there is some research in the academic community with guidelines on how to better design such permission prompts (see our paper for pointers). Therefore, it’s not a binary argument. There are several levels of issues, and ways to tackle those issues.
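To make the overprivilege point concrete, here is a minimal sketch in Python (entirely hypothetical — real SmartApps are written in Groovy, and every name below is invented for illustration) of what a capability-scoped "second line of defense" could look like: the framework records only the capabilities the user approved at install time, and any call outside that grant is denied, regardless of what the device itself supports.

```python
# Hypothetical sketch of least-privilege capability granting.
# Device, AppContext, and invoke are invented names, not SmartThings APIs.

class Device:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)  # everything the device exposes

class AppContext:
    """Holds the capabilities the user approved for one app at install time."""
    def __init__(self, device, granted):
        unknown = set(granted) - device.capabilities
        if unknown:
            raise ValueError(f"device lacks capabilities: {sorted(unknown)}")
        self.device = device
        self.granted = set(granted)

    def invoke(self, capability, command):
        # Deny anything outside the explicit grant: even a compromised app
        # cannot reach capabilities the user never approved.
        if capability not in self.granted:
            raise PermissionError(
                f"{capability}.{command}: app was only granted "
                f"{sorted(self.granted)}")
        return f"{self.device.name}: {capability}.{command} executed"

front_door = Device("front door", ["lock", "battery"])

# A battery-monitor app is granted only "battery", not "lock".
app = AppContext(front_door, granted=["battery"])
print(app.invoke("battery", "read"))   # allowed
try:
    app.invoke("lock", "unlock")       # overprivileged request
except PermissionError as err:
    print("denied:", err)
```

An overprivileged design would instead hand the app every capability the device exposes; narrowing the grant to what the app actually needs is what shrinks the attack surface described in the paper.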

Therefore, the vulns take us to a system with several lines of defense so that attackers must jump through several hoops to get anything useful.