Last month, we launched the first of what I hope will be many Ed Talks panels. The idea for this panel started with the juxtaposition of classic engineering vs. software engineering. I was schooled as a mechanical engineer, a discipline that relies heavily on principles that have been well understood for decades across every field involved in the construction of whatever we were building. This was necessary because, for most engineering projects, it’s critical to maximize safety, and to do that, we must also normalize language. As I entered the software “engineering” world (yes, I will keep that in quotes for now), I was surprised at how often those principles are ignored when it comes to security.

Today’s modern tech stacks are complex, hostile, and carry unique threats that need to be mitigated; therefore, contextual skills around specific syntax, configuration, language, etc. are important. BUT many technology professionals lack an understanding of how software fundamentally functions and fails with respect to security. More problematic, approaches to encryption, access control, authorization, and other basic principles are not applied universally. This made me wonder about security principles in software – are they just not valued? Were they learned and forgotten? One doesn’t operate in mechanical engineering without the concept of a “factor of safety” (akin to security in software) – it is calculated directly into the equations used when designing system components – so why is this concept so foreign in the software world?

I took the question to three of my more knowledgeable friends: Uma Chandrasheker, Mark Merkow, and Josh Corman. These are battle-tested industry veterans who have built software security programs and teams from the ground up. They are sought-after authors, conference speakers, and experts. Plus, they are a lot of fun – though they take their jobs very seriously, they are accessible, easy to talk with, and willing to be honest about where they’ve failed in the past. I’m lucky to call them friends.

The trio offered a variety of lenses on how principles drive software development and risk management. Here are some highlights of the 55-minute session:

  1. Principles Shminchiples – People view principles in many ways
  2. Software Security is Different – Traditional engineering principles aren’t as portable
  3. Principles help the people problem – They dumb down technical concepts
  4. We think we are shifting left – But it’s all relative to your starting point
  5. Defense in Depth is waning – The cloud is a big culprit
  6. Zero trust elicits mixed emotions – Don’t chain any element of trust further
  7. DevOps may resuscitate principles – Necessary for cross-functional teams

Principles Shminchiples

Call them best practices, a set of assumptions, checklists, guidance, buzzwords, theories, whatever. Josh stated they all boil down to a mental manifesto for why and how we do what we do. Specifically, principles need to:

  • Have a purpose

    Principles are values that guide us globally, subordinate to a north star or some higher goal. It’s important for the industry to align on what should be considered throughout software development life cycles, whether it’s design, development, or deployment of complex products or networks. This would have us all working towards a specific objective.
  • Be easily understood

    Principles need to cross people and technologies and be relatable to all stakeholders. They need to feel achievable and be simple to implement, regardless of your function. More importantly, how does it help down the line – what does my contribution “buy”? If we can get people to understand basic things, they can put them into the context of their own lives and say, “ah, maybe I shouldn’t do it this way.” Once it’s a habit, it’s there for good.
  • Not be forgotten

    This ties into having a purpose and making them easily understood – make them part of the DNA of everyone who puts fingers to keyboard. Integrate the principles into everything we do so that anyone who touches any computer system, at any level of privilege, takes these ideas and brings them to life.

Software Security is Different

But is this cause or effect? Software is not nearly as dependable as steel and concrete. So, while we try to apply traditional engineering techniques to digital infrastructure, they aren’t directly transferable. We have to merge our principles and values, such as “complexity is the enemy of security.” But complexity is also the enemy of stability. It’s also the enemy of debugging. It’s also the enemy of quality. Can we find a path to that common ground for each of the stakeholders in our ecosystem?

To further complicate things, there is no single authority on how to build software right. In other engineering disciplines, there are industry-wide accepted standards as well as clear malpractice liability for not following those standards. There really aren’t any universal recipes for software security. Each company and industry is unique. Everyone has their own customized stack. Everyone has their own risk posture and risk appetite. As a result, we have to rely on principles that we know are durable and work – things like least privilege and separation of duties – and adapt them for each situation.
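To make those two principles concrete, here is a minimal sketch in Python; the User class, the permission names, and the release workflow are hypothetical, invented purely for illustration:

```python
# A minimal sketch of least privilege and separation of duties.
# All names here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    permissions: set = field(default_factory=set)

def require_permission(user: User, permission: str) -> None:
    # Least privilege: each caller is granted only the narrow
    # permission an operation needs, and it is checked explicitly.
    if permission not in user.permissions:
        raise PermissionError(f"{user.name} lacks '{permission}'")

def approve_release(author: User, approver: User) -> None:
    # Separation of duties: the person who wrote the change cannot
    # also be the one who approves shipping it.
    require_permission(approver, "release:approve")
    if approver.name == author.name:
        raise PermissionError("authors cannot approve their own releases")
    print(f"Release approved by {approver.name}")

dev = User("sam")
lead = User("alex", {"release:approve"})
approve_release(author=dev, approver=lead)   # OK
# approve_release(author=lead, approver=lead)  # would raise PermissionError
```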

Principles help reduce the people risk problem

As technologists, we suffer from what some call the curse of knowledge. The more we know about a particular topic, the worse we are at communicating it to those who have no idea about it, because we overcomplicate things. Mark pointed out that technology does whatever it’s told to do, but people don’t. Software applications don’t ask questions, nor do they misbehave unless they’re programmed or configured incorrectly. He said, “If we didn’t have people involved in software development, we wouldn’t have 98% of the problems we do.”

This was a recurring theme of this Ed Talk – security is, first and foremost, a people problem. To Mark, a focus on principles, reinforced with hands-on practice to build situational awareness, is paramount.

We think we are shifting left…

Uma offered an interesting perspective. From a design standpoint, she said, threat modeling drives your requirements and surfaces everything you need to know analytically before your product is actually in the field. It forces you to consider threats to a specific piece of technology in isolation, as well as how, where, when, and by whom that technology will be deployed. All of those things matter when it comes to threat modeling, she reinforced.

From a practitioner’s standpoint, she said, regardless of whether you’re a security professional or in another field, it’s really important to understand the industry, the business, the stakeholders, and the use cases we are ultimately deploying to. Once you do that, the thinking goes beyond the confidentiality/integrity/availability triad from a security perspective.

If we want to truly shift left, we need to require security as acceptance criteria for user/feature stories. In other words, use the acceptance criteria as guardrails for how those features will be implemented and force thought about them from the very beginning – instead of waiting until the very end and saying, “oh crap, we’ve got these kinds of problems that we should have solved through threat modeling, effective reuse of known secure architectures, etc.”
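As one way to picture this, here is a minimal sketch of security acceptance criteria made executable with pytest; the app module and its create_user() function are hypothetical stand-ins for whatever the user story touches:

```python
# A minimal sketch of security as executable acceptance criteria.
# The app module and create_user() are hypothetical examples.
import pytest
from app import create_user  # hypothetical

def test_markup_in_display_name_is_rejected():
    # Acceptance criterion: display names containing markup must be
    # rejected, not stored verbatim.
    with pytest.raises(ValueError):
        create_user(display_name="<script>alert(1)</script>", password="x")

def test_password_is_never_stored_in_plaintext():
    # Acceptance criterion: the stored credential must never equal
    # the raw password the user typed.
    user = create_user(display_name="sam", password="hunter2")
    assert user.password_hash != "hunter2"
```

Written this way, the security requirement fails the build from day one instead of surfacing in a post-release audit.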

Josh brought up a non-technology example to help illustrate – imagine if you went for a blood test, and the results you received were not yours. How could that happen? Why were there no controls in place to recognize this earlier? What if you got treated for an ailment you didn’t have (false positive) or didn’t get treatment when you were actually ill (false negative)? According to Josh, this is exactly why we need principles.

I believe that when people understand what we’re trying to do – across all functions, specifically as it relates to their job and spoken in their own language – it helps tremendously. Principles, for me, cover process, people, and technology – the standard things we would expect – but are applied primarily from a stakeholder perspective. In other words, we look at all these things together and develop holistic, secure, usable systems.

Defense in Depth is Cloudy

Defense in Depth is a long-standing principle in security. However, as Mark pointed out, the depth is often lacking because there’s an assumption that somebody else is “taking care of it.” In the mass migration of software applications to cloud infrastructure we’ve seen in the past 5+ years, this flawed thinking is dramatically prevalent, often leading to massive data breaches or successful denial-of-service attacks. As organizations embraced the cloud and moved their applications there, they assumed that security infrastructure, controls, and protections came along with the move – and they couldn’t have been more wrong.

Effective Defense in Depth isn’t just within a server or across a network – it’s in every layer of every piece of the stack. Defense in Depth in programming is vital to ensure that all of the other Defense in Depth mechanisms operate correctly. Mark commented, “I think that’s where we do a poor job of getting developers to think in those terms.”
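Here is a minimal sketch of what that looks like inside application code, with all names made up for illustration: the same invariant is enforced at the API boundary and again at the data layer, so a bug or bypass in one layer doesn’t silently defeat the control.

```python
# Defense in Depth in code: two independent layers enforce the same
# invariant (a positive transfer amount). Names are hypothetical.
def api_transfer(amount_raw: str) -> None:
    # Layer 1: validate untrusted input at the boundary.
    try:
        amount = int(amount_raw)
    except ValueError:
        raise ValueError("amount must be an integer")
    if amount <= 0:
        raise ValueError("amount must be positive")
    record_transfer(amount)

def record_transfer(amount: int) -> None:
    # Layer 2: re-check the invariant where the data is written,
    # even though callers "should" have validated it already.
    if amount <= 0:
        raise RuntimeError("invariant violated: non-positive transfer")
    print(f"Transferred {amount}")

api_transfer("100")  # passes both layers
```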

Zero Trust elicits mixed emotions

Zero Trust is built on classic principles like least privilege and “trust no input”; however, it has an expanded focus due to the expansion of connected systems. I asked the panel if Zero Trust was something new or just the latest marketing buzzword. Josh had a visceral reaction to it (as he does with all marketing buzzwords 😀), but I have to give him credit – he nailed it in his answer:

As an umbrella term for some discrete, actionable principles, I think it could be fine. But if it obviates or obscures those discrete, actionable principles or positive patterns, I think it’s a mistake. I do think we have to use concepts of trust, trust chaining, and limiting access, but I think more often than not, this is a reason to say don’t buy that product over there, buy my product over here that helps support Zero Trust. That is marketing hype and not real security.

Mark and Uma both agreed that, from a layperson’s perspective, the concept is in line with the idea that we shouldn’t trust anything anybody gives us. Shifting to that mentality is one of the most difficult parts of implementing a cybersecurity program. Why? Mark says it’s because developers are a bit naive about the attack landscape and can become defensive when somebody says, “Hey, your application has got some problems.”

Initially, they say there’s no way that could be true. “I write perfect code every time. And why would a user do THAT anyway?” But when they see it in action, and those lights start going off, then we begin to see behavioral change. What can seem like a harmless feature, e.g., parroting back what the user entered in a form on the following confirmation page, can expose the entire system to a nasty cross-site scripting attack.
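A minimal sketch of that exact risk, using only the Python standard library (the page-building functions are hypothetical):

```python
# The "parroting input" cross-site scripting risk described above.
import html

def confirmation_page_unsafe(user_input: str) -> str:
    # Vulnerable: echoes input verbatim, so a <script> payload would
    # execute in the victim's browser.
    return f"<p>You entered: {user_input}</p>"

def confirmation_page_safe(user_input: str) -> str:
    # Defense: escape untrusted input before it touches markup.
    return f"<p>You entered: {html.escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
print(confirmation_page_unsafe(payload))  # script tag survives intact
print(confirmation_page_safe(payload))    # rendered harmlessly as text
```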

Josh’s opinion is that the whole notion of Zero Trust is to not chain any element of trust further. In other words, make sure that trust does not propagate through the system unwarranted. That’s not easy to do, especially in today’s modern enterprise applications, where most of the code isn’t written from scratch. Rather, open-source software, codebases taken from GitHub, and 3rd-party libraries form the majority of software functionality. We assemble applications today more than we code them, and that assembly is itself an explicit statement of trust (in open-source libraries or compiled binaries).
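One small way to avoid chaining that trust blindly is to verify a third-party artifact against a digest obtained from a source you already trust before using it. A minimal sketch, where the path and pinned digest are hypothetical placeholders:

```python
# Verify a downloaded artifact against a pinned SHA-256 digest.
import hashlib

PINNED_SHA256 = "0" * 64  # placeholder: pin the digest you actually trust

def verify_artifact(path: str) -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != PINNED_SHA256:
        raise RuntimeError(f"{path} does not match the pinned hash")
```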

DevOps is Driving the Need for Security Principles

You can read debate after debate about DevOps and its impact on security. One thing that is undisputed is that the rate of change in software development over the past few years has been incredible. According to Josh, the potential for improving our digital infrastructure’s overall defensibility, resilience, and security instrumentation has never been greater. He thinks a lot of security people had their attitudes wrong when DevOps first emerged. Critics said it was terrible, irresponsible, way too fast – but they didn’t see the opportunities it represented.

Josh further pontificated on this notion by summoning principles first written about by W. Edwards Deming: use fewer suppliers of parts, demand the highest quality of parts from them, and track which parts go where so you can do a prompt and agile recall if necessary. He claims that if we applied these traditional supply chain principles to modern software development, we would have less unplanned, unscheduled work, fewer break-fixes for IT, and faster mean time to identify and repair problems. He took KPIs that teams were already measured on and showed that those using fewer and better open-source projects – and the least-vulnerable versions of them – had demonstrably better on-time and on-budget performance more often than not.
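To picture “track which parts go where,” here is a minimal sketch of a tiny component inventory; the component and service names are invented for illustration:

```python
# Deming's recall principle applied to software: map each third-party
# component version to the services that ship it, so a vulnerable
# version can be recalled quickly. All names are made up.
INVENTORY = {
    ("logging-lib", "1.4.2"): ["billing-api", "auth-service"],
    ("json-parser", "3.0.1"): ["billing-api"],
}

def recall(component: str, bad_version: str) -> list:
    # Every service that must be rebuilt when this component version
    # is found to be vulnerable.
    return INVENTORY.get((component, bad_version), [])

print(recall("logging-lib", "1.4.2"))  # ['billing-api', 'auth-service']
```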

Josh was mashing a few things together there – the principles of complexity reduction, elective attack surface reduction, faster cycle times to fix, and root cause analysis in seconds versus weeks. Those things have real business value to multiple stakeholders. He believes they can get even better with an appsec or product security team that thinks about design objectives rather than historical controls against unstated objectives.

There’s a remarkable number of injection points throughout the SDLC, and old practices need reimagining. Activities like a two-week penetration test or a week-long threat modeling exercise don’t fit well in DevOps, but the principles of threat consideration and input sanitization are still important. Instead of doing a “scan and scold” in the last week of a multi-month dev cycle, you can run 3rd-party vulnerability checks in the nightly build every day. Penetration tests and threat models can be done on a micro-scale, e.g., on a feature or use case during rapid DevOps cycles. And maybe the longer tests can be done in parallel on production systems to get both focused and broad coverage.
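A minimal sketch of what a nightly third-party vulnerability check could look like, assuming the pip-audit tool (a PyPA project) is installed in the build environment; exact output and exit codes may vary by version:

```python
# A nightly CI gate that fails the build when known-vulnerable
# dependencies are found, instead of waiting for an end-of-cycle scan.
import subprocess
import sys

def nightly_dependency_check() -> None:
    # pip-audit exits non-zero when vulnerable dependencies are found,
    # which is how a CI gate should behave.
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        sys.exit("Build failed: vulnerable dependencies detected.")

if __name__ == "__main__":
    nightly_dependency_check()
```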


Want to check out more Ed Talks? Visit our site to watch the recordings or join us for the next Ed Talk.
