The proliferation of online/accessible devices has seemingly touched every sector of society. As organizations move from a controlled, closed environment to the always-on 24x7 digital world of today, the products they make and use have also evolved. Of course, along with these new platforms come new threats. Earlier this year, I had the pleasure of having Rajan Gupta on one of my Ed TALKS shows, and I wanted to share some of his words of wisdom.
Rajan is the Chief Product Security Officer (CPSO) for Honeywell Connected Enterprise (HCE). When I thought about Honeywell in the past, I thought about "big iron" machines and industrial control systems (they still do all that, by the way). Today, so much is connected that Honeywell started HCE specifically to drive digital transformation across the entire company, and HCE has become an industrial analytics SaaS company. I often say that pretty much everything these days runs on software, even most hardware… considering that more than 50% of HCE's 3,600+ employees are software engineers, I'd say that is certainly true here.
Watch Now: Ed TALKS: Scaling AppSec – Getting Tools to Perform
Modern application design and the continued adoption of DevOps expand the scope of automated security testing and push tools to the limit. At the same time, complex platforms like IoT and Blockchain require more specialized tools and skills. Hear how product and appsec pros plan to scale software securely in 2022.
Rajan has a deep engineering background. He spent many years running dev teams and building products; however, in the last decade, he moved into product security and cloud security. Before HCE, he was at Equifax and was part of that massive software security transformation program, effectively an organizational merger between two historically disparate groups, at least operationally: software engineering and cybersecurity. Rajan believes the keys to scaling AppSec are taking an evolutionary vs. revolutionary approach, managing the messy world of SBOMs & COTS, and tracking only essential metrics. His approach is a conservative, measured one in which he meticulously works to avoid the typical "try to boil the ocean" pitfall.
Slow and Steady Wins the Race
"You need a plan" were the first words Rajan said when discussing how to transform and scale software security across an organization. After the infamous breach in 2017, Equifax began a $2 billion cloud transformation journey. They adopted the Spotify model and formed themselves into alliances, tribes, and guilds, which allowed focus but also created the communication channels for a large, distributed company to align on strategic priorities. A dedicated product security organization was created, which focused on shifting security left into design – more so in the hearts and minds of engineers who wrote code by coaching and training them on solid design skills for security. Design drives code (ideally), so getting the architecture and requirements right was critical for Equifax.
They started by bringing in more education to create awareness. They exposed teams to OWASP ASVS (Application Security Verification Standard), a framework of security requirements focused on defining software security controls and standardizing security testing. In Rajan's world, testing equates to validation of requirements and controls. How many of us can say that? I often see testing treated as the beginning and end of all security vulnerability discovery. And all too often, that is far too late.
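To make "testing as validation of requirements" concrete, here is a minimal, purely hypothetical Python sketch (not anything from Equifax or HCE): a documented control is written down as code and then verified directly by tests, in the spirit of an ASVS password requirement, rather than waiting for a scanner to flag its absence. The policy value and function names are assumptions for illustration only.

```python
# Minimal sketch: a security requirement expressed as a control plus tests
# that validate it, loosely modeled on an ASVS-style password-length rule.
import re

MIN_PASSWORD_LENGTH = 12  # assumed policy value mapped to a documented requirement


def password_meets_policy(password: str) -> bool:
    """Return True if the password satisfies the documented control."""
    return (
        len(password) >= MIN_PASSWORD_LENGTH
        and re.search(r"[A-Za-z]", password) is not None
        and re.search(r"\d", password) is not None
    )


# pytest-style tests: each one validates a stated requirement,
# rather than hunting for unknown vulnerabilities after the fact.
def test_rejects_short_passwords():
    assert not password_meets_policy("Ab1")


def test_accepts_policy_compliant_password():
    assert password_meets_policy("correct-horse-42-battery")
```

The point is less the specific check and more the habit: every requirement in the handbook gets a test that proves the control exists.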
Rajan's software security team picked a couple of initiatives that built upon stated corporate goals and aligned with what the engineering teams were already doing. "It is much easier to stand up an elephant that is rising than to stand up an elephant which is sitting down." The initiatives were chosen for quick wins rather than transformative, totally new activities.
One of those carefully selected projects was an Engineering Handbook. They took the opportunity to essentially integrate security right into the SDLC process and, importantly, write it down (document it) as a team working with partners from different engineering organizations. This was not done in a silo, and Rajan's team was purposefully vocal about it not being a security initiative; instead, it was a technology initiative that needed to be built together. Involvement was the key. This was a catalyst for his product security leaders to embed themselves within the engineering teams rather than sitting outside and issuing commands (which never seems to work well). The biggest change this wrought? Security is just good engineering… that was the mantra. Suddenly, security was nothing special, no "dark magic"… just sound engineering. So everybody started to talk about it.
While Rajan considers tools necessary to do specific tasks faster or better, he feels teams need the knowledge to use a tool properly, interpret its findings, and act on them correctly. He also acknowledges that tools need to be integrated into the development processes, which is more than just adding them to the pipeline. Hence, his team worked very closely with the DevOps and engineering teams to get specific tools adopted.
"Many times in security, we just bring in a tool and expect it to work without investing in the people behind it - that's a recipe for failure. Tools should only be adopted when/if they directly support the OKRs [objectives & key results] and accepted performance indicators of the organization."
Read Part 1 of Our 3-Part Scaling AppSec Blog Series
In the same episode of Ed TALKS, Dustin Lehr, Director, Application Security, Fivetran, also provided some of his best tips for scaling application security.
Read Now: Keys to Scaling AppSec: Building Relationships & Security Champions
Software Bills of Materials (SBOMs) & COTS
Since software is more assembled than "coded" these days, Software Composition Analysis (SCA) and open-source/commercial-off-the-shelf (COTS) scanning tools are getting a lot of attention. The value of these tools is less about finding specific vulnerabilities and more about revealing which vulnerable components a piece of software may unknowingly contain. An analogy to think about is a can of soup. You pick a can of soup up, turn it around, and on the back – there are the ingredients. Well, can you do that for a software application?
Rajan has a risk-based and practical take on this:
"It's kind of difficult, especially in the DevOps/CICD world, where you're constantly changing those ingredients. The goal is to track dependencies between COTS, third-party, open-source, and home-grown software components and associated licensing issues. Ideally, you can also report more easily on known security vulnerabilities. Open source is hard because, the way software is built today, 85% of software is open source, and software composition analysis tools end up finding lots of vulnerabilities in those components, but they don't really tell you whether you're actually susceptible. Often, we have the urge to integrate SCA and other scanning tools in the pipeline and break the build if there's a critical or high flaw. But things get complicated with cloud development and deployment. Using a grilled cheese sandwich as an example, with containers, it's very hard to replace the layers until you rebuild the entire sandwich, versus a virtual machine where you can patch any of the layers independently. With COTS, it's hard to decipher what is actually inside; plus, you probably don't have a license to fix it, even if you know about it. That leaves you with trying to get that accepted in contract/license agreements. At HCE, we try to put a right to scan into every COTS contract and require the vendor to fix those vulnerabilities if/when we find something. That is a tricky path to navigate, especially for smaller businesses that don't have much leverage."
Metrics
The key to valuable metrics is determining your starting point and acceptance criteria. It's tempting to define goals in the 90%-100% realm, but that is rarely realistic and won't serve the purpose of driving improvement. Rajan's philosophy is to hit 60% and then move on to another KPI, as the effort to advance a KPI from 60% to 100% is usually not worth it. That doesn't mean you're lowering your quality bar per se; it means you're aiming for quick wins to build momentum.
He also advised keeping people out of the metrics collection process. "Don't rely on humans because if you do, you're invariably going to get inaccurate metrics, and nobody will believe them." Take what's available from the tool and use that. It may not be perfect, but it is at least a consistent frame of reference against which you can measure change. Some metrics he finds valuable are below; a rough sketch of pulling them from tool output follows the list.
The Number of Scans
- Let's say you're starting a new program and want to adopt some tooling. Start with the total number of scans you're doing every month. Almost every tool will tell you how many scans have been performed that month, be it static code analysis, open-source or container scanning, etc. Don't worry about criticals and highs at first. Get the tool adopted and track the number of scans going up, indicating usage. Often, a development team will stop using a tool because it is not working for them. Even if they collaborated with you and picked the tool, you could not have thought of all the different programming languages and development methodologies they use. If a team's usage drops or hits zero, dig in to find out the problem. Solve that small problem and iterate.
Manual vs. Automated Scans
- If you're using a CI/CD pipeline, start measuring how many scans were run with automation versus manually. This gives you a gradual maturity curve, and you can keep getting the teams to use more and more automated scanning over time. Work with the development teams to gain agreement on what "good" looks like. Be on their side and understand those gates; it is critical.
Coverage
- Other things to consider come from a runtime or operational security standpoint. People with that background will care about what percentage of endpoints are covered in blocking mode by a Web Application Firewall (WAF). From a virtual machine standpoint, you can ask what percentage of virtual machines have your vulnerability management agent on them, like a Rapid7 or Qualys.
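Putting those three together, here is a small, purely illustrative Python sketch of how such metrics could be computed from machine-readable exports instead of human reporting. Every field name and record below is invented; in practice they would come from your scanners' and agents' APIs or export files.

```python
# Rough sketch of the three metrics above, computed from tool exports rather
# than human-reported numbers. The record layouts are invented for illustration.
from collections import Counter

# Hypothetical scan log: one entry per scan, as exported from a tool's API.
scans = [
    {"team": "payments", "month": "2022-03", "automated": True},
    {"team": "payments", "month": "2022-03", "automated": False},
    {"team": "mobile", "month": "2022-03", "automated": True},
]

# Hypothetical asset inventory for coverage-style metrics.
vms = [{"name": "vm-01", "agent_installed": True}, {"name": "vm-02", "agent_installed": False}]

# 1. Number of scans per team per month (is adoption trending up or down?).
scan_counts = Counter((s["team"], s["month"]) for s in scans)
print("scans per team/month:", dict(scan_counts))

# 2. Automated vs. manual share (maturity of the pipeline integration).
automated = sum(1 for s in scans if s["automated"])
print(f"automated share: {automated / len(scans):.0%}")

# 3. Coverage: percentage of VMs with the vulnerability management agent installed.
covered = sum(1 for vm in vms if vm["agent_installed"])
print(f"agent coverage: {covered / len(vms):.0%}")
```

Because the numbers come straight from the tools, they stay consistent month over month, which is exactly the trustworthiness Rajan is after.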
Getting Back to the Basics
Rajan emphasized that security's most significant issues are not the presence of unintentional vulnerabilities or tools not finding stuff. Instead, it's the persistent absence of basic security controls and the lack of awareness within engineering. I couldn't be in more violent agreement. In fact, my very first Ed TALKS focused on security principles and how there's a waning emphasis on learning and implementing them. Principles are the security glue that holds various tech stacks, processes, and job functions together; they all have a hand in securing software. But without awareness of basic principles, there is no common language or mentality – that's where things break down massively… systemic design flaws are deadly.
About Ed Adams, CEO
Ed Adams is a software quality and security expert with over 20 years of experience in the field. He has served as a member of the Security Innovation Board of Directors since 2002 and as its CEO since 2003. Ed has held senior management positions at Rational Software, Lionbridge, Ipswitch, and MathSoft. Earlier in his career, he was an engineer for the US Army and Foster-Miller.
Ed is a Ponemon Institute Research Fellow, a Privacy by Design Ambassador recognized by the Information & Privacy Commissioner of Canada, a Forbes Technology Council member, and a recipient of multiple SC Magazine Reboot Leadership Awards. He sits on the board of Cyversity, a non-profit committed to advancing minorities in the field of cybersecurity, and is a BoSTEM Advisory Committee member.