But which metrics should we use? You might expect an “it depends” answer, but there are some metrics that are important for any application security program, regardless of audience or goals. We’ll take a look at a few of them in this post.
Metric 1: Policy Compliance
One of the problems with application security is the challenge of combining data from multiple assessment technologies and approaches—static analysis, software composition analysis, dynamic analysis, manual penetration testing—into a single picture of the application’s quality. All software has vulnerabilities, and fixing them contends with other business priorities, so setting “no vulnerabilities” as a business goal is a doomed strategy. But how can we help development teams and security personnel make tradeoff decisions between fixing vulnerabilities and accepting risk in a way that is consistent with the goals of the business?
One way is by defining an application security policy, a yardstick of acceptable and unacceptable risk. You can define a policy according to industry regulations and standards like PCI, or simply according to the security goals of your organization (e.g. no SQL Injection). And a policy helps to boil all those assessment findings down to a simple pass/fail.
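To make that concrete, here’s a minimal sketch in Python of what a pass/fail policy check might look like, assuming findings from your various scanners have already been normalized into a common shape. The category names, severity scale, and data structures are illustrative, not any particular vendor’s API.

```python
from dataclasses import dataclass

# Illustrative only: finding categories and the 1-5 severity scale are
# hypothetical stand-ins for whatever your scanners actually report.

@dataclass
class Finding:
    category: str   # e.g. "SQL Injection", "XSS"
    severity: int   # 1 (informational) .. 5 (critical)
    source: str     # "static", "dynamic", "sca", "pentest"

@dataclass
class Policy:
    banned_categories: set[str]  # zero-tolerance categories
    max_severity: int            # highest severity allowed to remain

def passes_policy(findings: list[Finding], policy: Policy) -> bool:
    """Boil findings from every assessment source down to pass/fail."""
    for f in findings:
        if f.category in policy.banned_categories:
            return False
        if f.severity > policy.max_severity:
            return False
    return True

# Example policy: "no SQL Injection, nothing above medium severity"
policy = Policy(banned_categories={"SQL Injection"}, max_severity=3)
findings = [
    Finding("XSS", 3, "dynamic"),
    Finding("SQL Injection", 5, "static"),
]
print(passes_policy(findings, policy))  # False: SQL Injection is banned
```

The useful property here is that one function, applied uniformly, turns results from every testing approach into the same simple yes/no answer.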
An important thing to bear in mind is to set the bar for the policy properly. We know, from studies like the Veracode State of Software Security, that all software has vulnerabilities. For instance, over 65% of the applications assessed by Veracode over an 18-month period failed the OWASP Top 10 on their first assessment. Building a policy that’s realistic for your organization is an important first step toward making sure that development teams don’t get disheartened and the program doesn’t fail.
Metric 2: Flaw Prevalence
One way to think about how to set your policy is to think about the vulnerability landscape. Which vulnerabilities pose a particular risk to your applications and industry? Examining the data from the State of Software Security can again help.
Veracode’s data shows that not every vulnerability category is equally prevalent. For instance, SQL Injection, a very serious vulnerability that could allow an attacker to steal or destroy data through your application, is only present in about 30% of all applications on average (40% if you’re in Government). So it may make sense to start by focusing your program on eradicating a vulnerability that isn’t completely pervasive but can cause a lot of damage.
On the other hand, you may want to focus on other vulnerability categories based on industry requirements. For instance, 80% of all healthcare applications tested had cryptographic weaknesses. That’s a problem if the cryptography in the application is protecting sensitive personal health information, which is covered under HIPAA. So you may want to focus on eradicating those vulnerabilities that put you at risk of compliance violations.
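If you want to track prevalence across your own portfolio, the calculation is straightforward: for each vulnerability category, count the share of applications with at least one finding in it. Here’s a sketch; the input shape (an application name mapped to the categories found in it) is made up for illustration:

```python
from collections import defaultdict

def flaw_prevalence(apps: dict[str, list[str]]) -> dict[str, float]:
    """For each category, the fraction of apps with at least one finding."""
    apps_with_category = defaultdict(set)
    for app, categories in apps.items():
        for category in set(categories):  # count each app once per category
            apps_with_category[category].add(app)
    total = len(apps)
    return {cat: len(hit) / total for cat, hit in apps_with_category.items()}

# Hypothetical sample portfolio
apps = {
    "billing": ["XSS", "SQL Injection", "XSS"],
    "portal": ["Cryptographic Issues"],
    "api": ["XSS"],
}
for category, share in sorted(flaw_prevalence(apps).items()):
    print(f"{category}: {share:.0%} of applications")
```

A report like this makes it easy to spot which categories deserve an eradication campaign in your portfolio, the same way the industry-level data does.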
Metric 3: Fix Rate
If we know what our policy is, how likely applications are to pass it, and the risks in the application landscape, it’s easy to despair. One of the problems with application security programs is that they focus on finding flaws. Application security companies even brag about how fast they’ll help you find that first vulnerability. But that rather misses the point. The real measure of an application security program is how effectively you fix the flaws you find. And with the right help, your developers can make real progress here.
Again, looking at State of Software Security data, we see some industries achieve a very high fix rate. Manufacturing organizations, on average, are fixing almost four out of every five flaws found. So it’s definitely possible to make a real difference, and tracking and reporting the flaw fix rate (ratio of fixed flaws to discovered flaws) is a great way to report your progress. It’s even a good metric to use for encouraging a little friendly competition among development teams or business units.
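The metric itself is simple arithmetic, which makes it easy to compute per team or business unit. A quick illustrative sketch, with made-up sample counts, that ranks teams for that friendly competition:

```python
def fix_rate(fixed: int, discovered: int) -> float:
    """Ratio of fixed flaws to discovered flaws."""
    return fixed / discovered if discovered else 0.0

# Hypothetical per-team counts: (fixed, discovered)
teams = {
    "payments": (78, 100),
    "mobile": (45, 90),
    "platform": (120, 150),
}

# Rank teams by fix rate, best first, for a leaderboard-style report.
for team, (fixed, discovered) in sorted(
    teams.items(), key=lambda kv: fix_rate(*kv[1]), reverse=True
):
    print(f"{team}: {fix_rate(fixed, discovered):.0%} ({fixed}/{discovered})")
```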
Metric 4: Up to You
So if policy compliance, flaw prevalence, and fix rate are all key metrics, what else should we report? Here the answer is really up to you—and the goals of your program.
Trying to educate your developers about secure coding principles? Consider tracking the percentage of developers enrolled in eLearning courses and the percentage of required courses completed. What if your goal is to shift application security “leftward,” moving away from single point-in-time assessments to integrating application security testing into your SDLC? The percentage of applications being tested via automation might be a good way to measure your progress. Ultimately, your metrics dashboard should reflect the goals of your business.
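As one example of a goal-specific metric, here’s a tiny sketch of that “shift-left” coverage measurement: the share of applications whose security testing runs automatically in the pipeline rather than as point-in-time assessments. The field names and data are hypothetical:

```python
# Hypothetical inventory: which applications have security testing
# wired into their CI pipeline?
apps = [
    {"name": "billing", "tested_in_ci": True},
    {"name": "portal", "tested_in_ci": False},
    {"name": "api", "tested_in_ci": True},
]

automated = sum(1 for app in apps if app["tested_in_ci"])
print(f"Automated testing coverage: {automated / len(apps):.0%}")  # 67%
```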
By the way, you should make sure that your application security testing approach supports the metrics you need to report! Nothing kills enthusiasm faster than designing a perfect metrics regime only to discover that you’ll have to export data to Excel to report it.
The bottom line is that you can quantify risk reduction and choose metrics that both report your success and help you manage your program. Who said numbers were boring?