Constraining vs. Training Developers – Not an Either/Or Decision

Posted by Ed Adams on June 20, 2013 at 2:34 PM

Beyond the obvious mistake of simply NOT educating developers about secure coding, many organizations continue to employ unproductive methods to get their development staff to code securely.

Some organizations want to make security invisible to developers, so they leverage an array of frameworks and pre-written libraries/routines for things like input sanitization, authentication, cryptography, etc. While this is a good way to ensure developers get some things right, frameworks themselves have known vulnerabilities, and developers still have to write glue (integration) code to tie in the business logic (not to mention the rest of the functionality) - and it is VERY easy to write insecure glue code if you aren't trained properly. The implementation of security during development can “feel” invisible; however, the implications and importance of security should be quite visible to developers.
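To make the glue-code point concrete, here is a hypothetical sketch in Python (the function names and schema are invented for illustration): the library in use, `sqlite3`, fully supports safe parameterized queries, yet nothing stops an untrained developer from bypassing them in the integration code.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Insecure glue code: user input concatenated straight into SQL,
    # even though the library offers parameter binding.
    sql = "SELECT name FROM users WHERE name = '" + username + "'"
    return conn.execute(sql).fetchall()

def find_user_safe(conn, username):
    # The same lookup using the library's parameterized-query support.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection succeeds: every row leaks
print(find_user_safe(conn, payload))    # injection fails: no rows match
```

The safe framework was available the whole time; the vulnerability lives entirely in the glue code, which is why the library alone can't make security invisible.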

Constraining developers is a good thing, but doing so without providing any context or training doesn’t really reduce your risk of bad things getting into code. It’s arguably the least productive way to “train” developers because you aren’t providing any enlightenment. In fact, you're likely perpetuating the find-and-fix cycle of insanity that exists in most organizations today: run a scan; report the vulnerabilities to developers who often don't know how to fix them, or who “try” to fix them and end up creating new vulnerabilities; run the next scan; watch the same vulnerabilities recur (plus, likely, some new ones); send the report back to the developers; get the same result; descend into the pit of despair... the AppSec cycle of insanity. Ahhhh, shoot me in the brain!

Another ineffective way to “train” the development staff is doing it internally through the security team, whose members often have Network/IT security backgrounds and lack development proficiency. Very few organizations have both the secure development domain expertise and the effective teaching methodologies needed to get developers to absorb what they need. And if it isn’t a developer doing the training, you’re bound to get questions that can’t be answered, which will frustrate the developers even further.

On the flip side, one of the best methods to get teams to code securely is to give developers context! Most people don't want to believe this, but developers want to write high-quality code. Make security another (albeit the most critical) aspect of software quality, just like functionality, performance, and reliability - and speak in terms of bugs, which they can relate to. Keep in mind, too, that most software developers are engineers: if you're asking them to do something, give them a reason why and then provide the method(s) for them to execute. It doesn't take any longer to write a line of secure code than a line of insecure code; developers just need to know which one to write. Developers went through similar training when learning to code for performance, reliability, etc. Security may feel new to them, but it is analogous to things they have gone through in the past as quality becomes ever more paramount throughout every aspect of software development.
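A minimal sketch of that "same effort, different line" claim, using Python's standard library (the password-reset-token scenario is an invented example): the insecure and secure versions are one line each, and the only difference is knowing which module to reach for.

```python
import random
import secrets

# Hypothetical scenario: generating a token for a password-reset link.
token_weak   = "%032x" % random.getrandbits(128)  # predictable PRNG - guessable
token_strong = secrets.token_hex(16)              # cryptographic RNG - safe

# Both are 32 hex characters; neither took longer to write.
print(len(token_weak), len(token_strong))
```

The developer who knows *why* (attackers can predict `random`'s output) will reach for `secrets` without being forced to.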

Security professionals often think it's sufficient to have a policy statement as simple as this: "Write all web applications so they're not vulnerable to common threats, such as the OWASP Top 10." That “sounds” powerful, but it means NOTHING to a developer. The developer first has to figure out: What is the OWASP Top 10? Then, for each of the 10 threats: "What is SQL Injection?" "How does one defend against SQL Injection?" "Which of the numerous defenses against SQL Injection should I choose -- input sanitization? Parameterized SQL statements?" "If I choose input sanitization, how do I sanitize input?" "How do I sanitize input in ASP.NET 3.5?"
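To illustrate just the last few links in that chain of questions, here is one hedged answer to "how do I sanitize input?" - an allow-list validator sketched in Python (the post's ASP.NET context would use its own validation APIs instead; the username rule here is an invented example):

```python
import re

# Allow-list sanitization: define exactly what IS allowed and reject
# everything else, rather than trying to enumerate dangerous characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def sanitize_username(value):
    """Accept only strings matching the strict allow-list; reject the rest."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

print(sanitize_username("alice_42"))    # legitimate input passes through
# sanitize_username("x' OR '1'='1")     # injection attempt raises ValueError
```

A policy that pointed developers at a vetted pattern like this - instead of just naming the OWASP Top 10 - would answer the questions the policy statement leaves open.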

A recent Ponemon Institute research report found significant disparity between development and security teams when they were asked about AppSec policy, remediation requirements, standards for building security into applications, etc. If you are interested in a copy, you can download it here.

Sequencing is the key to success. Do not try to adopt a complete secure development lifecycle (SDLC) all at once, as you are likely to fail. The answer to "Where do we start?" is contextual to each organization. Some will find it more beneficial to start with security in the test phase (i.e., stop the immediate bleeding by ensuring you’ve uncovered the most critical vulnerabilities in your application(s), then start to educate developers about these security bugs in the defect management system). This is often the case where organizations know they have lots of old code, internet-facing/high-risk applications, etc. Others find it easier or more efficient to begin in the requirements phase (i.e., define security requirements in addition to the functional requirements, because developers are driven by the application spec and will implement what is covered therein verbatim). I've seen both approaches be effective. Generally speaking, though, organizations just embarking on a secure SDLC program should focus on testing (end of lifecycle) for security. There is good automation support and training for this (and the training should happen BEFORE you purchase a tool!). The discovery and documentation of vulnerabilities will serve as a forcing function for developers, architects, and those who set the requirements to build more competency at creating secure applications.
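One lightweight way to make a test-phase start stick is to record each discovered vulnerability as a failing test alongside the defect ticket, so it doubles as a regression guard once fixed. A minimal sketch, assuming a hypothetical `render_comment` function standing in for real application code:

```python
import html

def render_comment(text):
    # Fixed version: escapes user input before embedding it in HTML.
    # (The original bug - filed in the defect tracker - embedded it raw.)
    return "<p>" + html.escape(text) + "</p>"

def test_comment_is_escaped():
    # Documents the XSS finding from the test phase and guards
    # against the same bug recurring in a later scan.
    rendered = render_comment("<script>alert(1)</script>")
    assert "<script>" not in rendered

test_comment_is_escaped()
print("security regression test passed")
```

Tests like this turn the scan report into something developers already know how to act on: a bug with a reproducer.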

Lastly, a personal gripe of mine is that threat modeling is under-utilized, yet it is a highly leverageable activity. We practice it every day in our lives without realizing it (why do we lock the door when we leave home, or put an alarm system on our first-floor windows but not the second-floor ones?), but most of us don't practice it when writing software. It's easy, too! Remember the 1980s movie War Games with Matthew Broderick? There is a Blue Team that defends and a Red Team that attacks. Take two or three of your most creative QA/test team members and make them a Red Team. Have them come up with creative ways to attack a given application, access sensitive data, take the system down/offline, etc. You'd be amazed at what people come up with! The best part is, you don't need any fancy tools or consultants to do this... all you need to do is think! :)

I also wrote a whitepaper on Threat Modeling for IT Risk Management if you are interested. 

Topics: application security

Written by Ed Adams

Ed Adams is a software quality and security expert with over 20 years of experience in the field. He has served as a member of the Security Innovation Board of Directors since its inception in 2002 and took over as CEO in 2003. Ed is a Research Fellow at the Ponemon Institute, serves on the board of several IT security organizations, and was named a Privacy by Design Ambassador by the Information and Privacy Commissioner of Canada.