For the past 18 months, I've had the pleasure of hosting dozens of technology and cybersecurity experts on Ed TALKS, a moderated discussion about today's security strategies. This Ed TALK featured executives from the three principal stakeholders of product security - product management, engineering, and security. While they all worked in different industries and took varying security approaches, each had the same corporate charter: get feature-rich solutions to the market quickly with the right level of security built-in.
First, the experts:
- Claudia Dent - SVP Product Management, Everbridge
- Mark Nesline - Chief Engineering Officer, Imprivata
- Trupti Shiralkar - Product Security Engineering Manager, Datadog
The guests continually reinforced three themes: the importance of supply chain risk, the need for security knowledge and skills across all groups, and how security can and should be an enabler of today's rapid-release enterprise. Here's how each arrived at those conclusions from a distinct path and perspective.
Kids in the candy store
DevOps is all about continuous feedback and speed, which is difficult to execute effectively without automation. There are also far more cloud services available than one could ever need for a single project. Most engineers are seduced by shiny tech toys and love to tinker (as do I, a former mechanical engineer myself). Consequently, folks like Mark, Claudia, and Trupti have to keep them constrained.
Mark (engineering):
I'll double down on the necessity of automation. The complexity of microservices architecture and the need to understand exactly what's going on at any point require a bottom-up selection of tools. So developers, DevOps, and product management have to be in the loop, because product management often has to buy into this being part of the overall infrastructure and critical to accelerated releases. However, it also gets back to requirements, operational architecture, and design. If you don't do that work up front, your automation will be wasted.
Mark's team uses tools like Lacework for visibility into cloud and container security and WhiteSource for software composition analysis and source inventory. This visibility is critical because you need to know where all of your software components came from. Today's applications aren't coded anymore – they are assembled from 3rd-party GitHub libraries, open-source software, commercial off-the-shelf products, and the like. Understanding the composition and licensing implications of what your team is using is critical to security and compliance.
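Even a rough inventory beats none. As a minimal illustration (not Mark's actual setup; a dedicated SCA tool like WhiteSource goes much deeper into transitive dependencies and vulnerability data), the Python sketch below enumerates the third-party packages installed in a service's environment along with their declared licenses:

```python
# Minimal dependency inventory for a Python service: list each installed
# package, its version, and the license declared in its metadata.
# Illustrative only; a real SCA tool adds vulnerability and policy data.
from importlib.metadata import distributions

def dependency_inventory():
    inventory = []
    for dist in distributions():
        meta = dist.metadata
        inventory.append({
            "name": meta["Name"],
            "version": dist.version,
            "license": meta.get("License", "UNKNOWN") or "UNKNOWN",
        })
    return sorted(inventory, key=lambda d: (d["name"] or "").lower())

if __name__ == "__main__":
    for dep in dependency_inventory():
        print(f'{dep["name"]}=={dep["version"]}  license={dep["license"]}')
```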
Microservices & APIs
APIs have been around for a long time. However, if you're not consuming those APIs securely, you could get into trouble. And if you're building APIs, you've got to treat them as you would any other software – understanding threats, choosing secure design elements, and implementing secure code and controls. Trupti offered some insight into her approach to working with development teams as a product security pro.
Trupti (product security):
Microservices and cloud APIs provide developers the flexibility and speed to quickly design, build, and deploy in the cloud. But some of the challenges I have personally seen are: How do we trust these APIs? And I don't mean zero trust, the marketing buzzword, but the real concern. I am handing over sensitive data to these cloud APIs, and what kind of assurance mechanisms or security controls do they have to handle this data? Integrating APIs securely into your codebase is critical. That's a clear 'shift left' message, but this isn't the biggest problem. The big problem occurs to the right.

Before the Capital One breach, the AWS metadata service was available without authentication. After the breach, AWS ended up developing a switch that gives customers the flexibility to achieve a higher level of security. But if they do not turn on the switch, they're still in a quite insecure state. The number one challenge is around trust and not having secure-by-default options available to developers. They have to really pay attention! In my personal experience, over 40% of vulnerabilities in the cloud security area are around misconfiguration. If we can educate developers by providing them with security knowledge-based guidance around these vulnerabilities, if we can tell them how to consume these services securely and what abuse and misuse cases can happen when they're in a hurry to deploy something, that would be hugely valuable.
Trupti's data point here should not be missed: 40% of the vulnerabilities she has encountered in DevOps/cloud deployment work have nothing to do with writing secure code; they stem from misconfiguration. This ties into a Gartner estimate that by 2025, 99% of cloud security issues will be the customer's fault, not the cloud provider's.
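The "switch" Trupti references maps to AWS's IMDSv2, which requires a session token for instance metadata requests but is not enforced everywhere by default. As a minimal sketch (assuming boto3 with standard AWS credentials; the audit logic is illustrative, not any panelist's actual tooling), here is how a team might find and fix instances that still accept tokenless metadata requests:

```python
# Sketch: audit EC2 instances for the IMDSv2 session-token requirement
# and enforce it where it's missing. Illustrative only.
import boto3

ec2 = boto3.client("ec2")

def find_instances_allowing_imdsv1():
    """Return IDs of instances where tokenless (IMDSv1) requests are still accepted."""
    lax = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                options = instance.get("MetadataOptions", {})
                if options.get("HttpTokens") != "required":
                    lax.append(instance["InstanceId"])
    return lax

def require_imdsv2(instance_id):
    """Flip the 'switch': require session tokens for metadata requests."""
    ec2.modify_instance_metadata_options(
        InstanceId=instance_id,
        HttpTokens="required",   # reject tokenless (IMDSv1) requests
        HttpEndpoint="enabled",
    )

if __name__ == "__main__":
    for instance_id in find_instances_allowing_imdsv1():
        print(f"IMDSv1 still allowed on {instance_id}; enforcing IMDSv2")
        require_imdsv2(instance_id)
```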
Managing Product Risk
Engineering and Product Management leaders like Mark and Claudia manage product risk with tactics including solid security architecture and well-documented compliance requirements.
Claudia notes that if you have a rigorous security accreditation like FedRAMP, you can't increase your surface area very easily. Much of that rogue development goes away because you have processes in place that simply don't allow it - and those processes have to be baked in. They're monitored by your follow-on accreditation, which happens every year. But software is never 100% secure, and Claudia offered insight into how to determine how bad is "bad" when vulnerabilities surface.
Claudia (product):
The worst nightmare a product manager can imagine is a public data breach. So the security team approaches product management and says, 'We have a SQL injection.' Well, we know that's a bad word, right? But you can't just go with that and say, "Okay, we immediately have to do a hotfix. We have to interrupt our whole development cycle." You have to be able to ask some intelligent questions, and you have to get an understanding of how your security team operates. Some people are very conservative and want to fix everything right away. Others are more lenient. And those can have positive and negative impacts on your software. So you have to understand not only the technical side of things but the people side of things as well. The questions you want to ask are:
- What data is exposed? Is it a complete set of personal information, or is it a fragment? How is it exposed? Is it exposed through the user interface or is it exposed deeper in the chain of microservices?
- Are there other mitigating things that you can do, a configuration option as an example, to close that up so you're not having to write code?
- What's the use case where that data is exposed? Is it a use case that's used all the time in the product? Or is it something that's used less frequently?
- How many customers are impacted? Maybe it's something that's used all the time by a lot of customers, and it's exposed through the user interface. Well, that's the extreme case, and it's time for quick action. Maybe it's something that's buried a little deeper, the data isn't as sensitive, it's used by few customers, and it's not a frequently used use case. Well, then there are other things you can do.
- Can you turn that function off behind the scenes? Can you let that handful of customers know, "Don't use this for a month"?
You want to be able to explore all of these different options so that you don't disrupt your development cycle unnecessarily for something that could be mitigated in other ways. It's always around managing risk and doing the right thing for customers and the data involved. It's a lot of real-time threat modeling and collaboration with engineering and security.
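One way to make Claudia's questions repeatable is to encode them in a lightweight triage helper that product, engineering, and security can all read. The sketch below is purely illustrative; the fields, weights, and thresholds are hypothetical, not Everbridge's process:

```python
# Hypothetical triage helper that turns Claudia's questions into a rough score.
# Weights and thresholds are illustrative only; tune them to your risk appetite.
from dataclasses import dataclass

@dataclass
class VulnContext:
    data_sensitivity: int      # 0 = fragment, 3 = full personal record
    exposed_in_ui: bool        # surfaced in the UI vs. deep in a microservice chain
    config_mitigation: bool    # can a configuration change close it without code?
    use_case_frequency: int    # 0 = rarely used path, 3 = used constantly
    customers_impacted: int    # count of affected customers

def triage_score(v: VulnContext) -> str:
    score = v.data_sensitivity + v.use_case_frequency
    score += 2 if v.exposed_in_ui else 0
    score += min(v.customers_impacted // 10, 3)   # cap the customer-count contribution
    if v.config_mitigation:
        score -= 2                                # a config workaround buys breathing room
    if score >= 7:
        return "hotfix"        # interrupt the development cycle
    if score >= 4:
        return "next-sprint"
    return "backlog + compensating control"

print(triage_score(VulnContext(3, True, False, 3, 120)))   # -> "hotfix"
print(triage_score(VulnContext(1, False, True, 0, 4)))     # -> "backlog + compensating control"
```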
Managing product risk has gotten easier over the past few years as security becomes just another aspect of software quality. This was certainly not true when I started in this industry nearly 20 years ago; in fact, it still isn't true for many organizations today! I was glad to hear that for Everbridge, Datadog, and Imprivata at least, security bugs are now triaged daily alongside functional, performance, and other quality issues in their DevOps pipelines. For those who aren't there yet, I strongly encourage you to map your security/vulnerability severity ratings to your engineering defect/bug severity ratings. That will bridge the gap and normalize language across teams.
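As a minimal sketch of what that mapping can look like (the CVSS bands and defect labels here are assumptions, not any panelist's actual scheme):

```python
# Hypothetical mapping from CVSS base scores to the defect severities an
# engineering backlog already understands. Bands and labels are illustrative.
def defect_severity(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "S1 / blocker: fix in the current sprint, consider a hotfix"
    if cvss_score >= 7.0:
        return "S2 / critical: schedule within the next release"
    if cvss_score >= 4.0:
        return "S3 / major: prioritize alongside other quality debt"
    return "S4 / minor: track and batch"

for score in (9.8, 7.5, 5.3, 2.1):
    print(score, "->", defect_severity(score))
```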
Supply Chain Risk
Much was discussed around cloud microservices and today's software being more assembled from third-party "stuff" versus coded from scratch.
The implication? Supply chain risk.
The SolarWinds hack in 2020 freaked out a lot of people because attackers were able to infiltrate the SolarWinds build process. For all three experts, it tipped over an already overflowing bucket of customer requirements concerning supply chain risk. Those requirements cascade from the purchase of a solution down through that vendor's supply chain. Software vendors are getting a lot of incoming requirements for supply chain attestation and documentation that they wouldn't have gotten two or three years ago.
Mark emphasized that his team had good visibility into their software and the security properties around it. But they went back and did a "re-look" to test their assumptions. They re-created old threat models for each of their products and changed the assumptions. Previously, they believed their developers' environments were inherently secure because they were all running on the corporate desktop. They reset some of the requirements on internal developer desktops: they not only need to be secure, but they also need to be efficient, because you can't have a secured developer desktop that's too slow to work on.
For Everbridge, the team had to re-examine not just the code they wrote but all the microservices, 3rd-party libraries, and APIs they were using. Then they took their analysis a few steps further. No longer satisfied with examining only the security of their software and software platforms, they examined the physical security of their environment as well.
Who can get in the door? What kind of measures do we have from a physical security perspective? What about other operational pieces? What about human resources data? Or customer data that's held in systems like Salesforce? How well is it all segmented, secured independently, and integrated into the whole system (communication flows between components)? They now look at security as a broad spectrum spanning the cyber and safety worlds, not just the software they sell.
Tools and Teams in Minimizing Security Risks
As a former game developer turned product and security professional, Trupti offered some keen insight into how best to leverage tools.
Trupti (product security):
I would recommend building homegrown, customized scanners suitable to your tech stack. Your tech stack is getting integrated with cloud services, and no commercial tool is intelligent enough to understand the context.

As a product manager, I spent a lot of time trying to come up to speed on technical topics so I could relate better to my engineering teams. Product owners and managers can sometimes struggle with the technical depth of security scans and assessment results.
Claudia has much more experience as a product leader than I do, and her take was interesting. She recommends staying out of initial security defect triage conversations – letting the dev and security teams do their job and bring vetted product bugs to the conversation. She pointed out the noise that all static analysis tools produce. That can't be a morass that product leaders get pulled into.
Once that list is narrowed down considerably, one can further discuss: What is the risk? What data is exposed? Et cetera. She says having that layered approach is important to ensure that you're focusing on the right things. Product managers are already deluged with RFIs and all other kinds of things, so her motto for product management is: less is more. Be focused.
For the development team to do that triage, though, you need a capable security liaison to help them through it. Development teams sometimes don't understand the language used in source code scanning tools, and the tools are all different. There's a lot to unpack in their results. That's where security pros can help.
Trupti brought up another fascinating data point. She cited a 2019 study which found that static code analysis tools provide only 17% of application security coverage, and mostly uncover low-hanging fruit. If there's a compensating control already in place for one vulnerability class or another, the effective coverage drops even further. She encourages her dev teams to build their own tools, which yield a false-positive rate of less than 5%.
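To make the "homegrown scanner" idea concrete, here is a toy example of the kind of narrowly scoped, stack-aware check Trupti describes. The two patterns (hard-coded AWS access keys and wildcard IAM principals) are illustrative choices, not Datadog's actual tooling; the point is that each check encodes context your team already knows, which is what keeps the false-positive rate low:

```python
# Toy "homegrown scanner": a narrowly scoped check tuned to one tech stack.
# Patterns are illustrative examples, not a real product.
import re
import sys
from pathlib import Path

CHECKS = {
    "hard-coded AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "wildcard IAM principal": re.compile(r'"Principal"\s*:\s*"\*"'),
}

def scan(root: str):
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".json", ".yaml", ".yml", ".tf"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in CHECKS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                findings.append((str(path), line_no, name))
    return findings

if __name__ == "__main__":
    for file_path, line_no, issue in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"{file_path}:{line_no}: {issue}")
```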
Developing Security-Minded Software Teams
Mark pointed out that moving from software engineering into security would be a prudent and fruitful career move for some. He said, "You need somebody who's going to sort of focus on that area of the whole process – and people are excited about it." Not to mention, it pays pretty well.
What Mark and the team did at Imprivata, deliberately or not, was create a security champions program. Some call them embedded security engineers; Gartner calls them security coaches. Imprivata recruited some developers, QA staff, and others to take a security leadership role for their respective product teams. It's a great strategy compared to hiring security-only people, who are hard to find these days and very expensive when you do find them.
Imprivata, Datadog, and Everbridge all use partners to augment their staff. Some do it for activities they don't have "the everyday talent" for, like security architecture reviews or incident response planning. The CSP (cloud service provider) and the DevOps platform itself can be good partners in some cases; for Mark and Imprivata, this includes AWS and GitHub. Both provide plenty of tools to help, as well as paid "white glove" support and services.
Trupti emphasized building security into one's development process and staff. She looks at the different job roles on her dev, PM, and security teams – and based on each job family, creates customized training to help them do their job better. The critical point, she calls out, is that each group has distinct training needs (in her case, for seven to ten different job families).
She said that product managers don't need to know how to interpret secure coding guidelines or vulnerabilities, but it would be nice if they could learn how to translate business requirements into security requirements. Then technical project managers and engineering managers could be trained on risk scoring and on how to translate security risk into an engineering severity or priority. She says these are nuances, but we have to pay attention and make sure everyone is included, because at the end of the day, security is a shared responsibility.
Going a little deeper, she suggests teaching your developers not just secure coding guidelines but also attacks and hacking techniques. This reduces the psychological distance between defenders and attackers, and teams build situational awareness. As a result, they start paying more attention to what they're building and designing.