Manual or Automated Application Security Testing: What's More Effective?

Posted by Aditya Kakrania on February 13, 2017 at 8:02 AM

The answer: Both … if you want optimized coverage.

Most organizations have countless software applications they need to secure, but limited budgets and resources to do so. To ensure the right breadth and depth of test coverage, teams must align assessment efforts with application risk. Doing so effectively depends on your ability to leverage core competencies – letting humans and robots (automation) each do what they do best.

Using Automated Scanning Tools

Automated security scanning tools are good at finding common vulnerabilities quickly and systematically, so they are great at finding the low-hanging fruit you can address immediately. However, these tools are prone to false positives and can't detect certain vulnerability classes. Here's why: automation quickly finds defects that can be uncovered via pattern matching, or by supplying a large set of malicious data and monitoring the system's response. These include most of the common vulnerabilities, like Cross-Site Scripting (XSS) and SQL Injection (SQLi). Since patterns can vary, scanners can only flag what they've been programmed to find – potential and known vulnerabilities. If a pattern is new or slightly different and not in the scanner's "database" of known signatures, the tool is unable to detect it. Additionally, automated scanning tools cannot detect business logic defects. For example, a finance application that allows users to trade stocks uses different business logic than an e-commerce shopping application. A human tester, on the other hand, can understand the business need and alter test cases accordingly – logic that is difficult to program into a general-purpose, linearly driven automated tool.
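To make the pattern-matching idea concrete, here is a minimal sketch of how a scanner probes a parameter and matches responses against known signatures. The application, payloads, and function names below are all hypothetical stand-ins (a real scanner would send HTTP requests); the point is that detection only works when the response matches a signature already in the database.

```python
import re

# Signature "database": a scanner can only flag patterns it has been
# programmed to recognize. Anything outside this list goes undetected.
SQLI_ERROR_PATTERNS = [
    re.compile(r"you have an error in your sql syntax", re.I),
    re.compile(r"unclosed quotation mark", re.I),
]
XSS_PROBE = "<script>alert(1)</script>"

def simulated_app(params):
    """Hypothetical stand-in for an HTTP request to the app under test."""
    q = params.get("q", "")
    if "'" in q:
        # Leaks a database error message when the input breaks the query
        return "You have an error in your SQL syntax near '%s'" % q
    return "Results for: " + q  # reflects input without encoding it

def scan(param_name):
    findings = []
    # SQLi probe: inject a quote, then pattern-match the error response
    resp = simulated_app({param_name: "test'"})
    if any(p.search(resp) for p in SQLI_ERROR_PATTERNS):
        findings.append("possible SQLi in '%s'" % param_name)
    # XSS probe: check whether the payload comes back unencoded
    resp = simulated_app({param_name: XSS_PROBE})
    if XSS_PROBE in resp:
        findings.append("possible reflected XSS in '%s'" % param_name)
    return findings

print(scan("q"))
```

Note that both checks are purely mechanical: if the application returned a custom error page instead of the stock database error, the SQLi signature would not match and the scanner would stay silent.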

What About Manual Security Testing Techniques?

Conversely, manual security testing can focus on "hot spots" – areas identified during threat analysis – and find business logic errors. However, it is time-consuming, not easily scalable, and sometimes overkill for the application under test. Most manual security testing uses a combination of handpicked tools best suited to the application being tested: automated scanners, customized scripts, and manually crafted data designed to find defects in the application. Experts might look for patterns or other clues, but they almost always leverage specialized tools to uncover more information about the system; rather than having a tool try to figure out what that information means, a human can make sense of it and take a potential exploit one step further.

Regardless of whether you use manual or automated security testing techniques, it's important to analyze software behavior to determine whether any Confidentiality, Integrity, or Availability (CIA) principles were actually violated. Most false positives reported by automated scanners are ones where the scanner misjudged the importance of a finding within the context of that particular application. Applying generic rules to applications that deal with business-specific domains often results in a large number of false positives or, even worse, false negatives. An understanding of the specific business functionality where an issue has been identified is the key to determining whether it is a valid defect.
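The triage step can be illustrated with a toy example. The rule name and field list below are hypothetical, but the pattern is common: a generic scanner rule fires on every input field, while only fields that actually hold confidential data represent a real CIA violation.

```python
# Fields that, in this hypothetical application, hold confidential data.
SENSITIVE_FIELDS = {"password", "ssn", "card_number"}

def triage(finding):
    """Decide whether a raw scanner finding is a defect in context."""
    if finding["rule"] == "autocomplete-enabled":
        # The generic rule flags every form field; it only matters
        # (a confidentiality issue) when the field is sensitive.
        if finding["field"] in SENSITIVE_FIELDS:
            return "valid"
        return "false positive"
    return "needs manual review"

print(triage({"rule": "autocomplete-enabled", "field": "search"}))
print(triage({"rule": "autocomplete-enabled", "field": "card_number"}))
```

The same raw finding is benign on a search box and a real defect on a payment field – the context, not the signature, makes the call.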

Bottom line: don't get caught up in the debate of manual vs. automated testing.  Both serve their individual purposes wonderfully. Find the optimal balance for each of your applications and you'll experience the highest mitigation on investment (MOI) possible.


Topics: developer guidance, application security, sdlc


Written by Aditya Kakrania

Aditya Kakrania has been working in the software security field for almost 20 years. He is responsible for delivering training, software security assessments, and vulnerability remediation guidance to Security Innovation clients. Aditya was responsible for the vision and development of the company's Holodeck environment-simulation product, which was the recipient of a Gartner Cool Vendor Award.