I’m a CEO drawn to software, but a mechanical engineer by trade. I’ve spent twenty years in the software quality space and the last fifteen focused on security. That experience has made one thing clear: most software projects do not apply the same design and security rigor as other engineering disciplines.
Traditional perimeter defenses are unable to stop most software attacks; meanwhile, hackers are focusing more on the software layer. To make matters worse, many IoT devices have no built-in security and are thus easily compromised.
Developing successful software in the 21st century requires a paradigm shift.
Just as buildings are designed to withstand hurricanes, earthquakes, and other harsh conditions, developers need to build applications that can withstand hostile environments.
Writing code, however, is never a straightforward process, even when the requirements and design have been well conceived. Handling errors, avoiding dangerous code constructs, implementing input validation, and ensuring secure communications are common tasks that can fail terribly even in the face of a good design.
In addition, developers often have preconceived notions about how to write code that do not mesh with security goals. Many tend to do things by habit, which often leads to unintended side effects, e.g., using the wrong string manipulation APIs because “it worked the last time.”
Coding Standards
All commercial development, from giant server applications written in C to Java applets and HTML, should adhere to rigid coding standards. Every developer should commit as much of the standard to memory as possible and have a copy readily available for reference whenever there is a question about a specific practice. Such diligence builds a common goal and ensures that as new developers arrive and experienced developers depart, the collective knowledge of best practices is not lost.
Coding standards for security should cover safe handling of strings and integer arithmetic, methods of input validation, handling of temporary files, approved authentication libraries, and so on. Management must enforce these standards.
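To make this concrete, here is a minimal sketch, in Java, of the kind of rules such a standard might codify. The class and method names are illustrative only, not part of any published standard.

```java
// Illustrative examples of rules a security coding standard might contain.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CodingStandardExamples {

    // Rule: perform integer arithmetic with overflow checks instead of
    // silently wrapping; Math.addExact throws ArithmeticException on overflow.
    static int addQuantities(int a, int b) {
        return Math.addExact(a, b);
    }

    // Rule: create temporary files through the platform API rather than
    // building predictable names by hand; Files.createTempFile picks an
    // unpredictable name and, on POSIX systems, restrictive permissions.
    static Path createScratchFile() throws IOException {
        return Files.createTempFile("app-", ".tmp");
    }

    // Rule: validate untrusted strings against an allowlist before use.
    static String requireAlphanumeric(String input) {
        if (input == null || !input.matches("[A-Za-z0-9_]{1,64}")) {
            throw new IllegalArgumentException("invalid input");
        }
        return input;
    }
}
```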
Code Reviews
A code review performed for functional verification is different from a code review performed for security purposes. In fact, each type of review is so important that the two should not be combined: a functional review should look at functional issues, and a separate security code review should look only for security issues.
The key objectives of the code review are to verify that:
- The design goals are being met
- The security objectives are being met
- The implementation is robust
Careful attention should be paid to the following “hot” areas:
- Hard-coded secrets - Scrutinize code for embedded text “secrets” associated with variable names such as password or credit card.
- SQL Injection - Ensure that any input used by an embedded SQL query is validated and that the SQL query is parameterized (see the sketch following this list).
- Information Disclosure - Look for potential exposure of sensitive information through error dialogs or log files. Failing to clear secrets from memory, transmitting cleartext over the network, and storing cleartext on disk are all sources of information disclosure.
- Cross-Site Scripting (XSS) - Look for this web vulnerability, which occurs when an attacker injects script into the application that is then executed in the application’s security context, allowing the attacker to collect user information.
- Input/Data Validation - Look for input validation that is incomplete or that could be tampered with by an attacker (such as client-side validation in a web application). Also watch for authentication based only on file names, IP addresses, or other insecure validation mechanisms.
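As a rough illustration of what a reviewer hopes to see in several of these hot areas, here is a minimal Java sketch. The class name, the JDBC URL, and the DB_PASSWORD environment variable are hypothetical stand-ins, not a prescribed implementation.

```java
// Minimal sketch: no hard-coded secret, allowlist input validation,
// and a parameterized SQL query.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // Hard-coded secrets: the database password comes from the environment
    // (or a secrets store), never from a string literal in the source.
    private static final String DB_PASSWORD = System.getenv("DB_PASSWORD");

    // Input/data validation: server-side allowlist check on the untrusted value.
    static String validateUsername(String username) {
        if (username == null || !username.matches("[a-zA-Z0-9_]{1,32}")) {
            throw new IllegalArgumentException("invalid username");
        }
        return username;
    }

    // SQL injection: the untrusted value is bound as a parameter,
    // never concatenated into the query text.
    static String findEmail(String untrustedUsername) throws SQLException {
        String username = validateUsername(untrustedUsername);
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/appdb", "app_user", DB_PASSWORD);
             PreparedStatement stmt = conn.prepareStatement(
                "SELECT email FROM accounts WHERE username = ?")) {
            stmt.setString(1, username);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }
}
```

The essential points are that the secret never appears as a literal in the source and that the untrusted value is validated and then bound as a query parameter rather than concatenated into the SQL text.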
The use of a checklist - like the one from OWASP - and a good tool that lets you scan the code for common problems will contribute immensely toward a successful code review.
Automated Static Analysis
Think of static analysis as an automated code review: a tool processes the source or binary and lists potential problems that a human developer then manually investigates. Static analysis for security purposes is crucial because history has shown that it detects many problems that would be difficult to find any other way. Security bugs also tend to be smaller in scope than functional bugs, so the analysis has to look for fewer classes of problems, which reduces the number of false positives. And because security bugs tend to be more costly than functional bugs (or at least more visible), an investment in preventing them is more easily justified.
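For illustration, the following Java fragment shows the kind of tainted-data-flow defect a typical security-focused static analyzer would flag, together with a safer variant. The exact rules and report format depend entirely on the tool, and the method names here are hypothetical.

```java
// Illustrative example of a finding a taint-tracking analyzer typically reports:
// untrusted input flowing into a sensitive sink.
import java.io.IOException;

public class ReportRunner {

    // Likely flagged: the "name" parameter (source) reaches Runtime.exec (sink)
    // without sanitization -- a command injection path.
    static void runReportUnsafe(String name) throws IOException {
        Runtime.getRuntime().exec("report.sh " + name);
    }

    // Safer variant: the argument is validated against an allowlist and passed
    // as a separate argument rather than through a shell command line.
    static void runReportSafer(String name) throws IOException {
        if (!name.matches("[a-zA-Z0-9_-]{1,40}")) {
            throw new IllegalArgumentException("invalid report name");
        }
        new ProcessBuilder("report.sh", name).start();
    }
}
```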
Unit Testing for Security
Unlike more formal system tests, which are conducted by a separate test organization, unit tests are carried out by the development team – often by the same developer who wrote the code in the first place.
Many organizations have found that this is not sufficient for security-relevant modules and that significant benefit is realized by formalizing the unit testing process. Practices such as developers cross-checking each other’s code by writing unit tests for one another, and maintaining a unit test library, are considered best practices.
At a minimum, all components that enforce security (processing untrusted inputs, parsing file formats, encrypting network communications, authenticating and authorizing users) should be diligently unit tested for robustness, reliability, and security. Other places to insist on a unit testing process are hard-to-reach error-handling code paths, code that processes sensitive information, and code that is network-enabled.
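A minimal sketch of such security-focused unit tests, written with JUnit 5 against the hypothetical validateUsername method from the earlier code review example, might look like this; the test data and class names are illustrative only.

```java
// Security-focused unit tests: exercise hostile and malformed inputs,
// not just the "happy path".
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class UsernameValidatorTest {

    @Test
    void acceptsWellFormedUsername() {
        assertEquals("alice_01", AccountLookup.validateUsername("alice_01"));
    }

    @Test
    void rejectsSqlMetaCharacters() {
        assertThrows(IllegalArgumentException.class,
                () -> AccountLookup.validateUsername("alice'; DROP TABLE accounts;--"));
    }

    @Test
    void rejectsNullAndOversizedInput() {
        assertThrows(IllegalArgumentException.class,
                () -> AccountLookup.validateUsername(null));
        assertThrows(IllegalArgumentException.class,
                () -> AccountLookup.validateUsername("a".repeat(1000)));
    }
}
```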
Defect Management
The key goal of defect management is to make sure that all identified defects are prioritized, sized, and assigned. Security defects should be retested both from a regression point of view and with new test cases, to ensure that fixes were properly made and did not break existing functionality.
For security vulnerabilities, the following should be performed:
- Ensure that every bug is fixed in its entirety. Security bugs tend to get an immediate “knee-jerk” reaction from testers who scramble to write a bug report as soon as the defect is found. More care is required. Often, rushed bug reports contain only the tip of the iceberg when it comes to what insecure behaviors might occur. Developers should be careful to reproduce the insecure behavior and investigate all such behaviors to ensure that a reliable fix is made that mitigates all aspects of the defect.
- Investigate whether the defect could exist in similar functions elsewhere in the product. Look for the same bug in similar functionality throughout your application. Bugs tend to travel in groups; make sure you have eradicated them all.
- Retest all security fixes by re-reviewing the code, re-running scanning tools, reapplying unit and system tests, and then testing with new test cases to ensure that the fix is reliable and that no new security bugs were introduced in the process (a sketch of such a regression test follows this list).
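As a hedged example of that last point, a regression test added after fixing a hypothetical path traversal defect might capture both the original report and nearby variants, so the fix stays in place; FileStore and readUserFile are illustrative names, and the assumption that they reject bad paths with SecurityException is part of the sketch.

```java
// Regression tests locking in a hypothetical path traversal fix.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

class PathTraversalRegressionTest {

    // The payload from the original bug report.
    @Test
    void rejectsOriginalTraversalPayload() {
        assertThrows(SecurityException.class,
                () -> FileStore.readUserFile("../../etc/passwd"));
    }

    // Variants of the same defect, since bugs tend to travel in groups.
    @Test
    void rejectsEncodedAndAbsoluteVariants() {
        assertThrows(SecurityException.class,
                () -> FileStore.readUserFile("..%2f..%2fetc%2fpasswd"));
        assertThrows(SecurityException.class,
                () -> FileStore.readUserFile("/etc/passwd"));
    }
}
```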
Software Runs the Connected World – We Secure it
Software doesn’t exist in isolation; it operates in a complex and hostile ecosystem of technologies that makes its behavior unpredictable – an ecosystem we understand well. For over a decade, organizations have relied on our solutions to make the use of software systems safer in the most challenging environments – whether in web applications, automobiles, IoT devices, or the cloud.
The attacks companies face and the problems they create are real — so are our solutions:
- Our Pen testing service uses the same techniques and specialized tools hackers do – with the same level of determination
- Our CMD+CTRL Hackathon offers actual websites with real web servers, traffic, browsers, databases, and operating systems, creating an authentic environment to hone offensive and defensive skills
- Our Computer-Based Training (CBT) courses offer real-world examples with actual code samples