Recently Mary-Ann Davidson of Oracle wrote a well-considered blog post regarding the PCI Council's contractual requirement that developers of PCI-certified applications disclose all reported or internally found vulnerabilities to the Council, which may then pass that information on to various levels of PCI members and payment processors. As Mary-Ann points out, this breaks responsible disclosure because the number of people receiving these reports is large enough to ensure they will not stay secret. I will not rehash the disclosure religious debate except to state that I am firmly in the responsible disclosure camp, so I fundamentally agree with Mary-Ann's point.

That said, despite the length and extent of her post, she oversimplified the situation with a hard no: a vendor should never disclose until there is a patch, and should be able to fix vulnerabilities as it sees fit. I think there are several scenarios where this is not the case, including:

  • A known exploit is circulating. In this case, the cat is out of the bag and disclosing information does help. Mary-Ann points out that only the vendor can fix the vulnerability and that most companies do not have the technical sophistication to build work-around defenses, but her comment on this is highly generalized and the reality depends heavily on the specifics. In addition, even if an enterprise cannot implement defenses, it may be able to tune its detection engines (IDS, DLP, SIEM) to know if it gets hit so it can respond.
  • The vendor knows a good work-around defense, which is sometimes the case.
  • The vulnerability is in an updatable third-party or open source component. For example, OpenSSL is used in many applications; if an application bundles a vulnerable version that the user can update independently before the vendor packages and tests a patch for all cases, this information should be made available.

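On the third scenario above, the first step for an operator is simply knowing which build of a bundled component (say, OpenSSL) their software actually links against, so it can be compared to an advisory. As a minimal, hedged sketch: Python exposes this for its own runtime through the standard `ssl` module, and the helper names below (`linked_openssl_version`, `is_at_least`) are purely illustrative.

```python
# Sketch: report the OpenSSL build linked into this runtime, so an
# operator can compare it against a vendor advisory before a packaged
# patch ships. Helper names here are illustrative, not a standard API.
import ssl


def linked_openssl_version() -> str:
    """Return the OpenSSL version string compiled into this runtime."""
    return ssl.OPENSSL_VERSION


def is_at_least(required: tuple) -> bool:
    """Check the linked OpenSSL against a minimum (major, minor, fix)."""
    # OPENSSL_VERSION_INFO is a tuple; the first three fields are
    # (major, minor, fix), which tuple comparison handles directly.
    return ssl.OPENSSL_VERSION_INFO[:3] >= required


if __name__ == "__main__":
    print(linked_openssl_version())
    print("at least 1.0.2:", is_at_least((1, 0, 2)))
```

The same idea applies to any embedded component: a version check an end user can run is far cheaper than waiting for a fully tested vendor release, which is exactly why this class of vulnerability is a reasonable exception to a hard no-disclosure rule.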
Even with these exceptions, software vendors juggle many competing trade-offs. As a product manager of security software products for the last couple of decades, I have faced them myself. There is enormous pressure that works against fixing vulnerabilities, especially in large organizations like Oracle (although I do not have any insight about Oracle specifically), and the pressure of disclosure does make a difference. That is why the responsible disclosure process is so important. Hiding vulnerabilities is a bad idea. Depending on your religion on this, you either think all vulnerabilities are known to some bad hacker, or that only some are. Nobody believes that by just ignoring and hiding vulnerabilities we are better off. And yet, for many, many years, the largest, most influential software vendors did not step up and address the problem, and many software vendors both small and large still do not. Oracle was slow to own up and is by no means perfect now. I know: my last job was at a company that did security research and reported vulnerabilities to Oracle – responsibly, of course.

The reason I pointed out some of the exceptions above is to show that the situation is not simple and does not always break down to a hard policy yes or no. Those of us actually tasked with deciding what to do in these cases have many factors to consider and balance, including exploitability, real-world applicability, competing quality priorities (should we fix that crash or that vulnerability? Which would customers care more about? I know, both – but in which order, and in what release?), competition for resources from competitive and functional needs, and so on. There is never enough engineering and testing capacity to get everything done. Under a responsible disclosure process, vendors have some time to make these trade-offs but are not let off the hook, whereas disclosing vulnerabilities without patches can, on balance, cause more harm across the market than good – depending on the specifics of the vulnerability, of course.