Responsible Disclosure

There is a wide range of ways to disclose vulnerabilities discovered in software. Some people believe it is best to immediately alert the public of a vulnerability as soon as it is found, while others feel it is best to quietly work with the software vendor to fix the vulnerability before public notification. There are opinions that range between these two extremes, but responsible disclosure leans toward giving the software vendor a reasonable amount of time to make a fix before telling the world about the problem.

Public disclosure starts the race for attackers, victims and developers at the same time. As soon as attackers catch wind of the problem, they will start weaponizing the vulnerability by creating malicious exploits. In response, software teams will scramble to fix the problem and deploy a patch to their users. Users of the system have to stay on their toes to install the patch, and perhaps even modify firewalls and intrusion detection systems to mitigate their risk.

This is, quite often, the popular mechanism for vulnerability disclosure in the hacker community. It is exciting, gathers lots of publicity and puts a ton of pressure on software vendors to improve their security. Nothing like a trial by fire to get people’s attention! While most users are put into an uncomfortable position by this type of disclosure, there is a small subset of users who deeply understand their operating systems, have built their kernels from scratch and understand every line of code they run. These super-users want to be alerted of any potential vulnerability as soon as it is discovered so they can take quick action to fix their own systems.

Unfortunately, most users do not closely follow the latest vulnerabilities and do not know how to configure their firewalls, if they even have one. Additionally, if they use open source software they might not know how to best patch their own software. The average user is running in a race, but didn't receive the invitation. Public disclosure puts the average user at undue risk and under intense pressure.

After a public disclosure, the race is on between the software development community and the attackers. The big question is, can the developers accomplish each of the following before their users are exploited?

  • Find the issue in their software
  • Fix the issue
  • Develop a patch
  • Code review the patch
  • Test the patch
  • Deploy the patch

Even if the development teams and security teams work in unison and accomplish all of this before the next Slammer or Nimda, it is still a major challenge to get all of their users to patch before they open that e-mail, click the link or receive that packet which will result in an exploit.

In most cases, an exploit is ready before a patch is deployed. And even after a patch is deployed, a large percentage of users will not pick it up for months or even years.

One thing Open Disclosure does do, and does well, is put the fear of attack at the forefront of development organizations' minds. This fear makes them more likely to take security seriously early in their SDLC and take the right precautions before an attack. This is why I refer to the security community as an ecosystem. Every actor has a role to play.

I am not talking about a breach like the recent Sony breaches, where, once they realized their networks were compromised, they notified customers. It is incredibly important for companies to alert their customers and all other involved parties after a breach has occurred so they can take appropriate action.

I’m talking about when researchers discover vulnerabilities during their use of software, the same way an experienced car mechanic will recognize that sound you've been pretending not to hear for the last six months. What that researcher does with that vulnerability next changes the game that we're all playing. Release it to the world (Open Disclosure), get noticed, make press, but put customers at risk; or release it to the software vendor only (Responsible Disclosure) and give them a head start at fixing it before notifying the world.

At Security Innovation we take the security of our current and future clients very, very seriously. This is why we practice "Responsible Disclosure." Actually, we take it a step further, as we will not publicly disclose any security vulnerability.

The formal definition of Responsible Disclosure is to give the software vendor access to the vulnerability details before releasing them publicly, typically with a disclosure deadline attached. The deadline pressures the vendor to respond directly to the security researcher and push out a patch for that vulnerability before anything else. This assumes the security researcher understands the vendor's business and their customers better than the vendor itself. It still gives the researcher the limelight when they go to the press 10-15 days later.

We feel it is our primary concern to make sure the vendor gets the issue completely fixed. It's not our place to set arbitrary deadlines.

We deliver the vulnerability, along with guidance to remediate it, to the software vendor for free and will not release it for any period of time. We work with those companies to make sure the issue is remediated properly and that their development and testing teams understand the risk and impact of the issue.

We do this even if we have never worked with this company before, and may never work with them in the future.

We feel this is the best way to help the end user. At the end of the day our goal is to help software companies ship secure software that their users can trust.

When we find issues in other people's software, it's our job to alert the company that wrote the software. I send out multiple e-mails per month to this effect. These e-mails range from simple XSS issues to SQL injection to remotely exploitable buffer overflows.

In the midst of LulzSec, Anonymous, and countless viruses, worms and targeted attacks, I think it's important to be playing for the customer - even if that means missing out on the opportunity to be famous.