A couple of weeks ago I presented a webcast covering techniques for testing mobile applications. As usual I was long-winded with stories and analogies and went over time. I tried to answer as many questions as possible, but we had to cut the webcast off at ten minutes after the hour. As I closed out I mentioned that there were dozens of great questions I wanted to answer but didn’t have time for. In writing up the answers I’ve been thorough, if a bit long-winded, just as I was in the webcast. On the plus side, I plan to answer every question that was asked, and we’ll release the answers as a multi-part blog series.
I’ve copied the questions here and answered them as thoroughly as possible. I appreciate your questions and strive to be the fountain of knowledge that you expect, so I hope you’ll keep them coming for my next webcasts!
Zak Dehlawi, Security Innovation’s resident Android guru, answered most of the Android questions. So when you see an answer about Android, it’s his expertise we’re tapping into. Thanks Zak!
One small note: I’ve taken each of these questions verbatim, but dropped the names. The questions are yours, the answers are mine :)
Q: What do you think about SDLC? Do tools exist to do this?
A: I think integrating security into each phase of the SDLC (Software Development Lifecycle) is a huge step toward creating secure applications. Getting the security conversation started early, and continuing it often, is very important.
The Microsoft SDL (Secure Development Lifecycle, also called the SSDLC or Secure Software Development Lifecycle) and many other methodologies define security “gates” between the phases of development, which are also very helpful. You can think of these gates as checks and balances between each phase. They help the business people talk to their end users about security, help get those requests captured as requirements, and then have the architect translate those requirements into secure specifications. At each transition there is a process in place to make sure things are done properly and effectively. A gate for developers might be something like a security code review before check-in; for testers it might be a security test pass before sprint sign-off.
The SDLC is more of a process, so it doesn’t lend itself particularly well to a single tool. However, Microsoft has released SDL process templates for both Agile and waterfall development that snap into VSTS and TFS. They’ve also released the Microsoft Threat Modeling Tool, which can help kick your SDL off to a great start.
You can use the MS SDL even if you’re not a Microsoft shop, but if you’re looking for other standards I’d suggest checking out OpenSAMM (the Open Software Assurance Maturity Model), an OWASP project. OpenSAMM does a great job of defining gates inside and outside of the actual coding phases, touching on Governance, Construction, Verification and Deployment. It’s nice and lightweight and covers a great number of touch points for development shops.
If you’re still looking for something else, check out CLASP (the Comprehensive, Lightweight Application Security Process).
Security Innovation offers SDLC gap analysis as a service, which is why I know so much about each of these frameworks. When we perform a gap analysis for a customer we first look at their current process, then discuss their goals for security. Once we understand where they are and where they want to be, we help create a roadmap using many of the principles from the MS SDL, CLASP and OpenSAMM, tailored to fit the client better than any single standard could. We then help define security gates through internal and external process, education, tooling and standards to make sure the process is followed properly. A gap analysis can take quite some time as a client’s process matures, so we build in touch points over months or years as our customers implement their secure process throughout.
If you have any further questions please contact me!
Q: Is buffer overflow possible for Android apps?
A: Buffer overflow vulnerabilities in the classic sense are not possible in managed code. Android apps are written in Java and executed by the Dalvik virtual machine, which performs array bounds checking; any attempt to exceed an array’s bounds results in an ArrayIndexOutOfBoundsException. However, because the Android OS is based on Linux, a number of low-level programs written in C/C++, including the VM itself, could be vulnerable to buffer overflow attacks.
These classic vulnerabilities can be found in a number of places, and they aren’t always patched once wireless carriers drop support for their phones. In fact, rooting an Android phone usually involves either a buffer overflow attack or a resource exhaustion attack, depending on the phone and OS version. Newer versions of the Android OS have taken steps to mitigate these threats, using Address Space Layout Randomization (ASLR) and hardware-based No Execute (NX) to prevent code execution on the stack and heap.
Q: You mentioned that applications should check for certificates component. How can we modify certificate parameter for testing?
A: During the talk I mentioned that every certificate should be validated on the device as much as possible. Depending on your needs for flexibility and security, you may choose to validate the certificate using its attributes and its chain of trust up to the CA root. If you are certain the certificate will not change, you can also hardcode the fingerprint of the certificate into the application. This reduces flexibility, because if the certificate ever has to change for any reason you’ll have to push an update to your application. On the other hand, it can help if you’re concerned about Certificate Authorities getting hacked and an attacker generating fraudulent certificates.
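If you go the pinning route you’ll need the fingerprint to hardcode. Here’s a minimal Python sketch (just my illustration, with a placeholder host name) that fetches a server’s certificate and computes the SHA-256 fingerprint an app could pin against:

```python
import ssl
import hashlib

# Hypothetical endpoint; substitute the host your app actually talks to.
HOST, PORT = "api.example.com", 443

# Fetch the server's certificate in PEM form, convert it to DER,
# and hash it to get the fingerprint a client could hardcode.
pem_cert = ssl.get_server_certificate((HOST, PORT))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
fingerprint = hashlib.sha256(der_cert).hexdigest()

print("SHA-256 fingerprint to pin:", fingerprint)
```

On the device the app would compute the same hash over the certificate it receives during the handshake and refuse the connection on a mismatch.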
When we test the certificate checking on the device, we first attempt to redirect traffic through Burp without generating our own certificate. This tests the classic man-in-the-middle attack, since the PortSwigger CA is not trusted on the device by default. This should fail every time. If it doesn’t, the application isn’t checking the certificate’s CA chain, or may only be checking that it is communicating over SSL without validating the certificate at all.
If it passes that test, we install the PortSwigger CA on the device and try again. If this works, we know the application does not have the fingerprint of the certificate hardcoded and is trusting the CA store on the device. If it doesn’t work, we have a bit more work to do and will investigate why the certificate is failing.
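To make those first two checks concrete, here’s a rough Python sketch of the same logic using the requests library pointed at Burp’s default listener on 127.0.0.1:8080. The URL and the exported CA file name are placeholders, and on a real engagement the checks are of course performed with the app itself on the device:

```python
import requests

# Hypothetical app endpoint and Burp's default proxy listener.
URL = "https://api.example.com/login"
BURP = {"https": "http://127.0.0.1:8080"}

# Check 1: proxy through Burp without trusting the PortSwigger CA.
# A client that validates the chain should refuse the connection.
try:
    requests.get(URL, proxies=BURP, timeout=10)
    print("Accepted an untrusted CA -- chain validation is missing")
except requests.exceptions.SSLError:
    print("Rejected the PortSwigger CA -- chain validation works")

# Check 2: trust the exported PortSwigger CA and try again.
# Success means the client trusts the CA store rather than a pinned
# fingerprint; continued failure suggests pinning (or something else
# worth investigating).
try:
    requests.get(URL, proxies=BURP, verify="burp_ca.pem", timeout=10)
    print("Connection succeeded -- no certificate pinning")
except requests.exceptions.SSLError:
    print("Still rejected -- the client may be pinning the certificate")
```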
Next we modify the certificate’s attributes one by one. Will the app accept a certificate that is expired? One that is not yet valid? One whose common name doesn’t match the host? We invalidate each attribute in turn and check how the app responds.
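If you need certificates with deliberately broken attributes for this step, something along the following lines using the pyca/cryptography library will do the job. The common name, dates and file names here are placeholders of mine; serve the resulting key and certificate from whatever test server sits in front of the app:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Deliberately wrong common name plus a validity window that has
# already expired, so we can watch how the app reacts to each.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "wrong.example.com")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                       # self-signed, for testing only
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now - datetime.timedelta(days=365))
    .not_valid_after(now - datetime.timedelta(days=30))    # already expired
    .sign(key, hashes.SHA256())
)

# Write the key and certificate out so a test server can present them.
with open("bad_key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("bad_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```

Swap the dates or the common name around to produce a not-yet-valid certificate or a name mismatch, and repeat the test for each case.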
Q: How many Security Departments are using Burp to analyze interceptions
A: Many! There are other HTTP proxies out there (WebScarab, Paros, Fiddler, etc.) that are all good, for different values of good, but Burp seems to be the industry standard. Burp is relatively inexpensive, easy to use and full featured. We’ve found it falls short for some high-volume or very complex tests, but it does a good enough job that I use it whenever I use a proxy. For those highly complex or high-volume tests we usually develop our own tools in Python. For the high-volume tests we love using Black Mamba, the asynchronous Python library written by one of our Principal Security Engineers; thanks Marcus!
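I can’t share Black Mamba itself here, but to illustrate the asynchronous, high-volume pattern in generic terms, here’s a small sketch using Python’s asyncio with the aiohttp library. The endpoint and request count are placeholders, and this isn’t Black Mamba’s actual API:

```python
import asyncio
import aiohttp

URL = "https://target.example.com/api/item"   # hypothetical endpoint
REQUEST_COUNT = 1000                          # volume for the test run

async def fetch(session, i):
    # Fire one request and record the status code; a real test would
    # typically vary parameters or payloads from request to request.
    async with session.get(URL, params={"id": i}) as resp:
        return resp.status

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, i) for i in range(REQUEST_COUNT)]
        results = await asyncio.gather(*tasks, return_exceptions=True)
    print("Completed", len(results), "requests")

asyncio.run(main())
```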
Q: great presentation!
A: Thanks! Tune in next time!