Thoughts on Industry Standards for Vulnerability Discovery Disclosures

I said and meant “demand” very specifically. If I find a vulnerability in a vendor’s product via any means (including research) and demand that the vendor pay me to withhold or defer publication of my finding, that accurately fits the definition of blackmail. By demanding payment (“or else!”), I am threatening injury to the vendor (giving the information to hackers, or simply hurting the vendor’s reputation) in exchange for financial or other gain.

(This does not apply if the vendor already has a policy of paying for vulnerability discovery and private disclosure.)

Blackmail:
The crime involving a threat for purposes of compelling a person to do an act against his or her will, or for purposes of taking the person’s money or property.
The term blackmail originally denoted a payment made by English persons residing along the border of Scotland to influential Scottish chieftains in exchange for protection from thieves and marauders.
In blackmail the threat might consist of physical injury to the threatened person or to someone loved by that person, or injury to a person’s reputation. In some cases the victim is told that an illegal act he or she had previously committed will be exposed if the victim fails to comply with the demand.
Although blackmail is generally synonymous with Extortion, some states distinguish the offenses by requiring that the former be in writing.
Blackmail is punishable by a fine, imprisonment, or both.

NB: It doesn’t matter how the threatening information is obtained (or what the information is); it is the act of using it as a threat to demand hush-money that makes it “blackmail”. If I see you fooling around in public with a woman who isn’t your wife, and threaten to tell your wife unless you pay me $1000, that is the same as discovering a security vulnerability and threatening to take it to the press unless you pay me $1000.

The only ethical choice is to disclose the vulnerability with no requirement for a reward (unless the reward is generally offered by the vendor, with no coercion), and it is also considered ethical to freely give the vendor reasonable, unbiased, and firm advance notice.

2 Likes

No doubt.

Do you have any thoughts on what real-world exploits of smart homes may eventually look like? Who the attackers are, what their motivations may be, for what purposes physical-layer actors would purchase vulnerabilities and owned systems? Etc.

Monetization has drastically changed the security and threat landscape over the last few years, and I can’t help but wonder how dynamics like that may end up shaping this corner of the world.

1 Like

It seems that more than a few online sites agree with you. Personally, I still believe that it’s up to the holder what to do with that information. If it were me, knowing what I know now, I would let the company know when I plan to go public with the vuln, specifically not bringing up any terms, and let them come back to me if they’re interested.

I was never on the “or else!” side; I was on the “I’m going to do it either way; if you want to fix the problem first, here’s what I require” side.

This information not coming to light does the consumer no good.

Given that I would give them a chance to fix it, if they declined the opportunity it would be them hurting their own reputation. Why does a company’s reputation come before the public’s good?

The difference is that the vuln could affect many innocent people. If one person can find a vuln, then others can. Disclosing it to the public gives the public the option not to use a product that could be compromised. This may damage a company’s reputation, but it could save a person’s identity.

As an aside, if you see someone cheating on their spouse, tell the spouse - I know I would want to know…

We are on the same page with respect to the concept of “Responsible Disclosure” (it is a pretty solid protocol, actually).

I never argued against eventual disclosure being the ethical obligation of the researcher / discoverer.

The only thing that would make it (vulnerability or mistress disclosure…) unethical, and possibly even illegal blackmail, is if the discoverer makes deferral / delay of disclosure contingent on payment from the company.

That specific self-serving behavior (even in the context of the greater good) makes the whole concept of altruistic Responsible Disclosure fall apart.

The “grey area” is when the researchers / media / discoverers try to negotiate intangible rewards: the right of first publication? A job or a security consulting contract? A deal on a business partnership? Etc. Personally, I think any of these cross the ethical line, though I’m no angel; hypothetically, I might leverage such information in this scenario.

I’m not really sure. In our work, we showed some possible attacker techniques. I suspect that as these platforms become more “appified”, we might start to see the kind of malware-style apps that plague Android and iOS, and then we might start thinking about things like botnets of smart homes, which would give attackers a financial incentive beyond the obvious “steal property, do damage, kidnap, and blackmail”. Going further, I think lessons learned from the smartphone security wars would be very applicable to the smart home. Furthermore, since all of this technology is still new (generally speaking; SmartThings itself is reasonably mature since it was developed about 4 years ago, but newer systems are coming up, like Google’s stuff and Amazon’s stuff), we are in a great position to design these things with security as a core principle.

1 Like

One thing to note about responsible disclosure is that not all companies have consumers’ interests in mind. When researchers decide to make their findings public first, that media attention sometimes forces the companies to do the right thing, or sink. Chances are, they do not want to sink :wink:

SmartThings, of course, was very cooperative and engaged with us, and they gave us the impression that they genuinely care about their customers. So, for the record, there were no talks of “who goes first” to the media in this specific instance. The only topic of discussion was understanding the vulns and how to fix them.

3 Likes

Correct … that’s why “Responsible Disclosure” policies always include one or both of the following conditions:

  1. The company acknowledges the vulnerability and is generally cooperative with the researchers / discoverers.

  2. The researchers / discoverers set a firm deadline for resolution, one that is not influenced by any personal gain. This does introduce a grey area, though, because protocols vary widely on how much time it is reasonable to give the company before that deadline. “Rain Forest Puppy Policy” says 5 days; “CERT” says 45 days; you chose a generous four-plus months (December to May); a rough comparison is sketched below. Realistically, the right amount of time depends a lot on the circumstances: a really dangerous vulnerability compels a shorter timeline and closer monitoring, and if the company is genuinely moving forward with mitigation, then deferring publication may still be in the public interest.
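(For anyone who wants to make those windows concrete, here is a minimal sketch in Python. The 5-day and 45-day figures come straight from the policies named above; the December 1 notification date, the ~150-day stand-in for “December to May”, and the helper name are purely illustrative assumptions.)

```python
from datetime import date, timedelta

# Disclosure windows, in calendar days. The first two come from the policies
# named above; the last is a rough stand-in for "December to May".
POLICY_WINDOWS = {
    "Rain Forest Puppy Policy": 5,
    "CERT": 45,
    "This research (approx.)": 150,
}

def disclosure_deadline(notified_on: date, window_days: int) -> date:
    """Earliest publication date if the vendor has not resolved the issue."""
    return notified_on + timedelta(days=window_days)

# Hypothetical private-notification date, chosen only for illustration.
notified = date(2015, 12, 1)

for policy, days in POLICY_WINDOWS.items():
    deadline = disclosure_deadline(notified, days)
    print(f"{policy:28s} -> publish no earlier than {deadline.isoformat()}")
```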