Developers need to be trained in secure coding techniques

The national nightmare created by the troubles with healthcare.gov has been bouncing around the airwaves and creating a thunderstorm of political outrage. But, in reality, what our federal government has is a poorly functioning website that has consumed an enormous amount of money and manpower. In some IT departments, this is also known as “standard operating procedure.” Due to the website’s highly visible nature, it may seem as if healthcare.gov is the first website to struggle once it has gone live. It isn’t the first, and it won’t be the last. It is, however, a great opportunity for the software development industry to absorb some security lessons and avoid repeating the mistakes of the past.

Lesson #1: Complexity is the Enemy of Security

It has been reported that the Healthcare.gov website contains over 500 million lines of code. In comparison, the Windows 8 operating system is rumored to contain somewhere between 30 and 80 million lines of code. Five hundred million lines is probably a wildly exaggerated estimate (I hope!), but even if the true amount of developer-written code is only 10% of that figure, Healthcare.gov is a very complicated website. No matter how loudly management screams “Failure is not an option,” when a project of this complexity is not given the time needed to design, develop, and test, failure is almost assured. And if the functionality of the website fails to work correctly, it is safe to assume that the website is also incredibly insecure. Reports that the website uses a non-industry-standard database are also worrisome, since the developers may not have been familiar with how to write secure code that interacts with that database.
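
To illustrate the database risk, consider the classic SQL injection mistake. The sketch below is hypothetical (the table, column names, and SQLite backend are mine, not healthcare.gov’s), but it shows why unfamiliarity with a database layer so often produces insecure code:

import sqlite3

# Hypothetical enrollment table -- the schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applicants (id INTEGER PRIMARY KEY, ssn TEXT, name TEXT)")
conn.execute("INSERT INTO applicants (ssn, name) VALUES ('123-45-6789', 'Jane Doe')")

def find_applicant_unsafe(ssn: str):
    # VULNERABLE: user input is concatenated directly into the SQL string,
    # so input like "' OR '1'='1" returns every row in the table.
    query = "SELECT name FROM applicants WHERE ssn = '" + ssn + "'"
    return conn.execute(query).fetchall()

def find_applicant_safe(ssn: str):
    # SAFER: a parameterized query treats the input strictly as data,
    # no matter which database engine sits behind the driver.
    return conn.execute("SELECT name FROM applicants WHERE ssn = ?", (ssn,)).fetchall()

print(find_applicant_unsafe("' OR '1'='1"))  # leaks all rows
print(find_applicant_safe("' OR '1'='1"))    # returns nothing

The safe version works the same way against virtually any relational database, standard or not, which is exactly the kind of habit developers need time to build before a deadline, not during one.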

Lesson #2: If you don’t train your developers on how to write secure code, you can’t act surprised when they write insecure code

Over and over again, we see the same mistakes being made. In the rush to complete project after project, developers are not given time to learn secure coding techniques because management believes they are “too busy.” Since most developers graduate from software engineering programs without even a rudimentary understanding of secure coding, it is folly to assume they are somehow learning it on the job while under the pressure of an unrealistic project schedule. Yet when security vulnerabilities are found, executives and managers act surprised by their presence. It will be interesting to observe how long this expensive and wasteful business practice continues.
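
One example of the kind of mistake training prevents, again purely hypothetical and not drawn from any real codebase, is cross-site scripting. An untrained developer echoes user input straight into a page; a trained one escapes it first:

import html

def render_greeting_unsafe(username: str) -> str:
    # MISTAKE an untrained developer often makes: echoing user input
    # straight into HTML, which allows script injection (XSS).
    return "<p>Welcome back, " + username + "!</p>"

def render_greeting_safe(username: str) -> str:
    # Escaping the input before it reaches the page neutralizes the payload.
    return "<p>Welcome back, " + html.escape(username) + "!</p>"

payload = "<script>stealSession()</script>"
print(render_greeting_unsafe(payload))  # a browser would execute the script
print(render_greeting_safe(payload))    # renders as harmless text

The fix is one function call. The hard part is knowing the attack exists, and that knowledge only comes from training.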

Lesson #3: Functional Testing is not the same as Security Testing

Testing is often the phase of the software development process given the least amount of time. In the rush to push applications into a production environment that is routinely attacked by skilled adversaries, the testing process that could expose the flaws in our applications is given little value. Yet even when thorough testing is performed, it usually focuses on whether the application’s functionality works correctly. This is important, but it will rarely find the security vulnerabilities within the application. Functional testing attempts to ensure that the application successfully executes the actions described in its requirements document. Security testing, on the other hand, attempts to find the actions an application performs that it is NOT supposed to perform and that appear nowhere in the requirements document: extra functionality within the application, or side effects of poorly written code. If an application is pushed into production and then found to work poorly due to a lack of testing time, we can often assume that it is also very insecure.
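
A minimal sketch makes the distinction concrete. The apply_payment function and its business rule below are hypothetical; the point is the difference between the two test classes:

import unittest

def apply_payment(balance: float, amount: float) -> float:
    # Hypothetical business rule: a payment reduces the balance.
    # The requirements document only ever describes positive payments.
    if amount <= 0:
        raise ValueError("payment amount must be positive")
    return balance - amount

class FunctionalTests(unittest.TestCase):
    def test_payment_reduces_balance(self):
        # Functional testing: confirm the behavior the requirements describe.
        self.assertEqual(apply_payment(100.0, 40.0), 60.0)

class SecurityTests(unittest.TestCase):
    def test_negative_payment_is_rejected(self):
        # Security testing: probe for behavior the requirements never mention.
        # Without the guard above, a "payment" of -40 would inflate the balance.
        with self.assertRaises(ValueError):
            apply_payment(100.0, -40.0)

if __name__ == "__main__":
    unittest.main()

The functional test asks “does it do what we wanted?” The security test asks “does it refuse to do what nobody asked for?” A test plan that only contains the first kind will pass with flying colors right up until an attacker writes the second kind for you.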