On Thursday, President Trump signed an executive order (EO) that instructs federal agencies to use cybersecurity best practices to further secure their IT systems. I applaud the acknowledgement that the US government “has for too long accepted antiquated and difficult-to-defend IT.” However, while the EO mentions that known (yet unmitigated) vulnerabilities represent significant cybersecurity risk, and that vendor patching is important, it misses a critical element by not addressing software flaws, specifically the ones that put the most people, systems, and critical infrastructure at risk.
Researchers estimate that software-based attacks account for between 85% and 95% of all cybersecurity attacks. While a defense-in-depth strategy is always well advised, I cannot understand why cybersecurity mandates don’t focus on software as a primary defense layer. Malware takes advantage of software vulnerabilities; software manages and updates most hardware; and mobile and IoT devices are useless without software (and could never be connected to each other effectively without it). Software runs the world, so why does it continue to be a cybersecurity afterthought?
Risk is a combination of how strong your defenses are and how motivated your attackers are. We know there are many attackers eager to take down sectors of the US government (or all of it), and we also know that software runs the IT systems and infrastructure on which the government depends. The result is a massive risk profile. That said, I’d like to highlight some areas of interest within the executive order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure:

“Agency heads shall show preference in their procurement for shared IT services, to the extent permitted by law, including email, cloud, and cybersecurity services.”
“We’ve got to move to the cloud and try to protect ourselves instead of fracturing our security posture,” Homeland Security Adviser Tom Bossert told reporters during a White House briefing. Commercial entities have for years leveraged built-in security features of the cloud to secure their operations and data better than they could internally. But for the US federal government, which has historically operated in a more insular manner, is this a good move? Amazon already offers AWS GovCloud, an isolated AWS region designed to host sensitive data and regulated workloads in the cloud. This feels to me like a step in the right direction from a security perspective, but it really depends on how agencies use “the cloud.” If the US government used cloud services for simple redundancy to protect against DDoS attacks, that could augment security by increasing availability and avoiding single points of failure. Alternatively, it could use cloud services for non-core processes, centralized authentication, or similar functions, which compromises little. But again, I come back to a common mistake organizations make: assuming the cloud mitigates software security risk for the applications you move there.
“The executive branch has for too long accepted antiquated and difficult-to-defend IT.”
While legacy systems weren’t necessarily built with security in mind and have had glue code bolted onto them for years, it still doesn’t feel like this should be the critical concern. Much new(er) software also isn’t built with security in mind, and most of it is connected to the Internet in some manner. Couple this with the fact that a lot of today’s code is copy-and-pasted from public sources, and disaster could be looming. I’m taking a guess here, but it seems that legacy applications aren’t the reason we’ve had major breaches over the last few years. It’s these new(er) applications, with attack surfaces exponentially larger than those of legacy systems, that seem to be the bigger problem. We should still secure older applications with an effort level commensurate with their risk, but let’s focus on the future and ensure that new(er) applications meet strict software security standards.
“Known but unmitigated vulnerabilities are among the highest cybersecurity risks faced by executive departments and agencies.”
Known vulnerabilities exist in many forms: operating systems and software that go unpatched, SQL injection due to the lack of proper input sanitization by developers, and so on. Why aren’t patching software and writing secure code federal mandates? Software security is no longer just a financial or privacy issue; it is now a safety-of-life concern. If car windows have to be shatterproof to avoid decapitation in an accident, then why aren’t software systems, like those that manage ambulance operations, required to be secure so that malware can’t shut them down while they’re carrying a patient to the hospital?
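To make the SQL injection point concrete, here is a minimal sketch in Python (using the standard-library sqlite3 module; the users table and function names are hypothetical, purely for illustration) contrasting string concatenation with a parameterized query:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is concatenated directly into the SQL text,
    # so input like "' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFER: a parameterized query keeps user input as data, never as SQL,
    # so the query's structure can't be altered by what the caller passes in.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

The fix costs a developer almost nothing once they know to look for it, which is exactly why this belongs in a mandate rather than being left to chance.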
“Jointly assess the scope and sufficiency of efforts to educate and train the American cybersecurity workforce of the future.”
This is a great start, but why not call out the most needed security skill of them all: secure software development? Theoretically, if all software were secure, malware, ransomware, viruses, and other destructive attacks wouldn’t be successful. I realize this is not realistic; however, the point remains that we need to focus on the core problem: hackers target software applications, yet many organizations don’t even train their developers on basic defensive measures like avoiding the OWASP Top Ten vulnerabilities. This EO is a perfect opportunity to highlight that fact so organizations aren’t left to interpret it as “get your network staff skilled on how to properly configure a firewall.” Firewalls are great at keeping less sophisticated hackers and script kiddies from penetrating your infrastructure, but even moderately skilled hackers can get through them fairly easily.
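In the same spirit, here is a minimal sketch of another basic defensive measure developers can be trained on (again in Python, with a hypothetical comment-rendering function): escaping untrusted input before it is placed into an HTML page, the standard mitigation for cross-site scripting, another OWASP Top Ten staple.

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # VULNERABLE: untrusted input is dropped into the page as-is, so a
    # comment like "<script>...</script>" executes in the visitor's browser.
    return "<div class='comment'>" + comment + "</div>"

def render_comment_safe(comment: str) -> str:
    # SAFER: escaping special characters means the input is displayed as
    # text rather than interpreted as markup.
    return "<div class='comment'>" + html.escape(comment) + "</div>"
```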
By now, the point of this blog should be obvious, but I’ll be explicit: effective cybersecurity starts (and ends) on the developer’s desktop.
Visit Security Innovation to learn more about educational strategies to reduce risk in software development.