Our upcoming Ed Talk is titled "Steal the Attacker's Playbook with Purple Teams." View this talk to learn how to embed an exploit mentality into technical teams, resulting in a reduced attack surface, fewer security vulnerabilities, and accelerated feature releases.

You’ll be hard-pressed to find anyone in AppSec who would say things haven’t changed tremendously in the last decade: from programming languages to deployment environments to the skills required of the teams that bring it all together. And with release cycles continuing to accelerate, that change isn’t going to slow down anytime soon.

One trend growing in popularity is the concept of Purple Teams and shifting security both left and right across the complete software development life cycle, including operations. Software must be defined, assembled, and deployed securely. Shifting left moves security activities and responsibility as early in the software build process as possible, where proactive controls can be strategically implemented during the defining, designing, and building phases. Shifting security right is often a defense-in-depth strategy that relies on detective and compensating security controls: properly implementing web application firewalls (WAFs) and content security policies (CSPs), securing APIs in production, and configuring the microservices and cloud features that are so popular.
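To make the "shift right" side concrete, here is a minimal sketch of one detective/compensating control mentioned above: a restrictive Content-Security-Policy header. The directive values chosen here are illustrative defaults for this example, not a policy that fits every application.

```python
# Sketch: build a Content-Security-Policy header value from a directive map.
# The specific directives below are illustrative, not a universal policy.

def build_csp(policy: dict[str, list[str]]) -> str:
    """Serialize a CSP directive map into a single header value."""
    return "; ".join(f"{name} {' '.join(vals)}" for name, vals in policy.items())

policy = {
    "default-src": ["'self'"],      # only load resources from our own origin
    "script-src": ["'self'"],       # no inline or third-party scripts
    "object-src": ["'none'"],       # block plugin content entirely
    "frame-ancestors": ["'none'"],  # disallow framing (clickjacking defense)
}

header = build_csp(policy)
print(header)
```

A web server or framework would then emit this as `Content-Security-Policy: <header>` on every response, giving the browser a second line of defense even if an injection flaw slips through the left side of the SDLC.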

Integrating security not only into the broader SDLC, but also into the skill sets of the different teams involved will be increasingly critical as security risks grow and deadlines to move new features live become shorter. This is a key strategy for teams that want to be truly agile and successful. Waiting until features are deployed and live to conduct security assessments can put customers at unnecessary risk, increase remediation time, and create friction between teams. I recently sat down with two AppSec experts who share the same approach to security and the value of Purple Teams, despite their varied backgrounds.

As a former developer turned security professional, Trupti Shiralkar has successfully led “shift-to-left” transformations of security programs to solve challenges in open source software security, application security, cloud security and applied cryptography domains. She is a frequent AppSec speaker and holds a patent on secure anonymous e-voting.

Ed Adams began his career as a mechanical engineer before transitioning into software quality and security. Despite over twenty years of experience, including management positions at Rational Software, Lionbridge, Ipswitch, and MathSoft, Ed still holds true to some of the key principles of mechanical engineering. He has served as the CEO of Security Innovation since 2003.

Security Innovation: In the mechanical engineering world, threat modeling and abuse case scenarios aren’t optional exercises – nor are they done by a completely separate team. Why is the mentality different in the software engineering world?

Ed Adams: When I was a mechanical engineer, we followed very rigorous processes of taking requirements and including a “factor of safety” into our designs from inception. We then tested the design and iterated (often multiple times). Only then did we even consider building a prototype or production system. I came from that world into the software engineering world and I thought everyone had lost their minds!

There was no concept of safety/security as part of software quality. There was no consideration of abuse cases and disaster planning as part of the design phase. It was build-ship-fix. Things have improved somewhat in the twenty years since I made the transition but not everywhere and concepts like threat modeling are still mostly foreign, even in many large enterprise development/product teams in my experience.

Trupti Shiralkar: In the mechanical industry, the stakes are high, and safety and security are fundamental building blocks of development. The mentality in software engineering is different because of the end goal and the skill sets of the teams. The software industry follows rapid prototyping and an agile SDLC to incorporate feedback quickly and deliver a minimum viable product (MVP). In the standard structure for software engineering, security considerations are not included by default and are generally an afterthought.

It’s not until the product becomes mature or the customers start asking for specific security features that the need even becomes evident. The general mentality is that everyone wants a product or feature that works, rather than a secure product or feature - even though the two are not mutually exclusive.

Security Innovation: In the old software days, developers wrote a ton of soup to nuts code, flipped it over to the security team to test, then shipped to customers or tossed over the proverbial wall to IT. Today’s software is more assembled (with 3rd party “stuff”) and released immediately in the cloud. What new skills is this requiring of development teams?

Trupti Shiralkar: In today’s continuous process of development and delivery, shipping products faster is of paramount importance. Separate security testing only slows that process. Therefore, from day one, developers should be skilled in secure design best practices, secure coding guidelines, and common attacks. This enables them to build and ship products without separate security team intervention.

Ed Adams: Today, between 75% and 90% of enterprise applications consist of third-party and open-source software, i.e., code your teams didn’t write. Plus, consider deployment environments. Take the cloud as just one example: most of that infrastructure is software as well. Your application must be assembled securely and deployed securely.

Software teams need to be able to conduct software composition analysis (SCA) to understand the “ingredients” that went into the application they assembled, and then fix any problems SCA flagged, e.g., outdated open-source software, unpatched 3rd-party libraries, etc.
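The core of the SCA idea Ed describes can be sketched in a few lines: inventory your "ingredients," then check each pinned version against known advisories. The package names and advisory data below are invented for illustration; real SCA tools draw on curated vulnerability databases and far richer version-range matching.

```python
# Toy SCA sketch: flag dependencies pinned to versions with known advisories.
# KNOWN_VULNERABLE is hypothetical data, standing in for a real advisory feed.

KNOWN_VULNERABLE = {
    "leftpadx": {"1.0.0", "1.0.1"},   # invented package and versions
    "oldcryptox": {"2.3.0"},
}

def parse_requirements(text: str) -> dict[str, str]:
    """Parse 'name==version' pins into a {name: version} inventory."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps[name] = version
    return deps

def audit(deps: dict[str, str]) -> list[str]:
    """Return packages pinned to a version with a known advisory."""
    return [f"{name}=={ver}" for name, ver in deps.items()
            if ver in KNOWN_VULNERABLE.get(name, set())]

inventory = parse_requirements("leftpadx==1.0.1\nsafepkgx==3.1.0\n")
flagged = audit(inventory)
print(flagged)
```

Running a check like this in the build pipeline, rather than after release, is exactly the "push it left" point made below: a flagged ingredient is far cheaper to swap out before the application is assembled around it.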

And when teams don’t have access to the source code (e.g., a binary from GitHub, a third-party authentication library, or an AWS WAF), they need to assess the security of that component and whether it’s configured properly. This requires “black box” security testing skills.
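A small example of that black-box mindset: without any source code, you can still probe a deployed component's HTTP responses for missing security controls. The required-header set and the simulated response below are illustrative; a real assessment would fetch live responses and check configuration far more deeply.

```python
# Black-box sketch: check observed response headers for expected controls.
# The REQUIRED_HEADERS set is a minimal illustrative baseline, not exhaustive.

REQUIRED_HEADERS = {
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
}

def missing_security_headers(headers: dict[str, str]) -> set[str]:
    """Return which expected security headers the response lacks."""
    # HTTP header names are case-insensitive; normalize before comparing.
    present = {name.title() for name in headers}
    return REQUIRED_HEADERS - present

# Simulated response from a misconfigured deployment (no live request made).
observed = {
    "content-type": "text/html",
    "strict-transport-security": "max-age=31536000",
}

gaps = missing_security_headers(observed)
print(sorted(gaps))
```

The point is that a tester, or better, the development team itself, can assess a component's security posture purely from its observable behavior, no source access required.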

And the further left one can push this activity, the faster the whole SDLC can go. Imagine a break/fix on an assembly line for a car that finds a flaw in the engine cylinder head after the car is fully assembled. What if the flaw were caught at the time the cylinders were bought from MZW Motor? It would be much less costly to fix and get back to the business of building the car. Development teams need to do security component testing as early as possible. This is software security in 2021. It’s very different from even 5 to 7 years ago.

Security Innovation: Many organizations still are of the mindset that development teams don’t need to know how to hack. What advice would you have for those organizations?

Trupti Shiralkar: My advice to those organizations is: do not wait for a breach to appreciate the importance of security. In the world of security, anything can be the weakest link: a system, a network, an application, or a human. And a few AppSec engineers are not sufficient to protect all of the products and newly developed features.

Scale AppSec by training your developers. Empower your workforce with knowledge about security and attacks. The more trained developers you have, the fewer vulnerabilities will occur and the lower the cost of remediation. Security tools help integrate security assessment into the build pipeline; however, security-focused developer training, especially on cyber ranges, helps multiply developers’ security IQ.

Ed Adams: My advice is to think about the unique opportunity organizations have to leverage their development teams as assets rather than relegate them to liabilities. Development teams can contribute to robust security outcomes in a powerful and effective way if it’s done right. They’re in the best position to find and remediate vulnerabilities, which reduces the burden on already overwhelmed security teams and lets organizations get features out the door faster and with confidence.

Security Innovation: Why do you think the Purple Team security approach is increasing in popularity?

Trupti Shiralkar: Red (attack) Teams and Blue (defend) Teams each have their own advantages and limitations. What Purple Teams do is create a feedback loop between the two and fill the gap. As a result, attacks are mitigated proactively in the early phases of the SDLC. A developer ensures the design as well as the code can protect the application against not only common attacks but also advanced exploitation techniques.

“Shifting left” does not just mean making developers think about security. It means weaving various security processes into the agile SDLC: security education and training not just for developers but also for QA engineers, program managers, and product managers; rolling out security tools as part of the build pipeline; and integrating security throughout the SDLC.

Ed Adams: Software development teams have historically been color-blind by design. They don’t think in terms of Red Team or Blue Team. They just think in terms of building, not whether what they’re building is secure, what attacker techniques will be used on their application, or if their code can withstand those attacks.

For organizations seeking a more effective security model for software development, Purple Teaming puts more responsibility on the builders to consider abuse cases, just like the mechanical engineers who design physical systems do.