I've written before about how important responsible disclosure is for security researchers, and that responsibility for an effective notification and remediation process falls on both the researcher and the vendor. When researchers find a vulnerability, they should work with the vendor to disclose it properly and to make sure it's fixed securely. I believe this should be done in good faith, at no cost, and without extortion. I think most professional security researchers are on the same page, and while we may debate whether it's ever prudent to publicly disclose an issue, most will try responsible disclosure first.
Software vendors have obligations as well. Creating a stress-free mechanism for disclosing vulnerabilities is critical if vendors want to avoid regular appearances on Bugtraq. Security researchers are donating significant time and expertise that would otherwise cost vendors thousands of dollars. While their findings may not arrive as detailed, formatted problem reports, the information is incredibly useful and worthy of attention and timely remediation.
The recent increase in the number of bug bounties and disclosure programs is encouraging. A disclosure program gives security researchers a defined channel for reporting the security issues they find. Good disclosure programs have: Respect, Optional Anonymity, Legal Impunity, Security, Responsiveness, and Openness.
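One concrete way for a vendor to advertise such a program is a security.txt file (RFC 9116), served at /.well-known/security.txt. It points researchers at the right contact, an encryption key for sensitive reports, and the disclosure policy. A minimal sketch follows; the example.com URLs and addresses are placeholders, not real endpoints:

```text
# /.well-known/security.txt (RFC 9116)
# Contact and Expires are the two required fields.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:00:00.000Z

# Optional fields that map to the qualities above:
Encryption: https://example.com/pgp-key.txt        # Security: key for encrypting reports
Policy: https://example.com/disclosure-policy      # Legal Impunity: safe-harbor terms
Acknowledgments: https://example.com/hall-of-fame  # Respect: public credit for researchers
Preferred-Languages: en
```

Researchers know to look for this file at a well-known path, which removes the guesswork of hunting for a security contact.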
Respect is very important. Again, in many cases, these are professional researchers who found a product or software service interesting or critical enough to want to examine its security. They may use tools that are costly to build or purchase, and spend a lot of their free or professional development time performing various assessment techniques. When a researcher approaches a vendor with a security vulnerability, it shouldn't be ignored.
Anonymity (at the request of the researcher)
It may be important or desirable for some researchers to disclose their vulnerability anonymously. They may have stumbled across a security issue in a slightly less than legal way, but that doesn't make the vulnerability any less important. At the other end of the spectrum, researchers may want their name to appear in disclosure notes, bug fixes, or other messaging. These researchers may be independent contractors, and this can be a great marketing opportunity for them.
Threatening to sue a researcher sends a negative message to the security community and ensures that no other researcher will inform the vendor of a vulnerability again. Researchers are experts in their field and able to conduct sophisticated attacks that are likely more advanced than in-house testing efforts.
Security is important because of the sensitivity of the data being transferred to the vendor. It's important that the discovered security issue isn't intercepted by a malicious third party and used against end users or customers.
Security Researchers want to know that vendors have received the issue, understand the risk, and have taken or will take steps to mitigate the risk. This is often the "payment" they're looking for. It is important to improve the security of the software, and they have the expertise to help. Knowing that the issue is being handled is important to them.
Sunlight is the best disinfectant.
No software is 100% secure, as there is a constant need to balance risk with the software's utility. It's impossible to understand the risk and make an informed decision about software without this security information. Most users and security professionals understand this concept and automatically assume the worst, especially in today's climate of weekly massive data breaches. It's important to meet these concerns head-on and help customers understand the vulnerability, how it happened, what was learned, and how to ensure it never happens again.
A few personal interests of mine are incentives and motivation. As a manager of dozens of expert security engineers, and a security engineer myself, I love to think about what drives people to excel, build their skills, and conduct research. I've found that while money isn't necessarily the primary driver, it can help show that the incentives of the company are aligned with the things that each person is excited about.
At Security Innovation, we have a research program that allows each engineer to take up to 10% of their time to research anything security related, whether it be connected hardware locks, consumer devices, cloud infrastructure, etc. While the engineer would likely have been doing this research on their own, getting paid for it reinforces that the company values ongoing research and employees who are continually looking to hone their skills.
Bug bounties are similar to this. Some security researchers rely on bug bounties to make a living, but many see them as a great bonus to research they would already be doing. They also realize that if a company is progressive enough to create a bug bounty program, it is likely to follow the principles of a high-quality disclosure program like the one described above. This means the company takes security seriously and welcomes feedback to improve the security of its product.
If you are a software vendor, I hope you'll start a security disclosure program at your company. It's a great way to get security feedback on your product and to know that people care enough to provide it. Creating a bug bounty program shows that you take this seriously and have a process for responding to security researchers.