This is the fourth and final entry in a blog series that looks at each stage of an application security program’s maturity and outlines your next steps as you move toward an advanced program.
We typically see organizations fall within one of four stages of application security maturity, ranging from ad hoc to advanced.
So, what does it look like when you reach the advanced stage? Based on the experiences of our customers, as well as those of our senior product innovation manager, Colin Domoney, who previously managed AppSec at a global investment bank, we have pulled together the following attributes of a mature and effective AppSec program.
What the roles look like
In the advanced programs we see among our customer base, the security team governs the overall application security program, but the actual testing and fixing of security-related defects in code is owned by development. The security team in these high-performing programs is responsible for things like setting policies, tracking KPIs and providing security coaching to developers.
It makes sense that programs set up this way are seeing the most success in the age of DevOps; it’s no longer feasible for dev teams to be slowed down by long lists of flaws from the security team at the end of their development process. To move at DevOps speed, security testing needs to be fast, frequent and baked into dev processes.
Enabling developers to assess security independently requires both the right technology and developer coaching. And, in fact, our research reveals that programs that include remediation coaching and/or developer eLearning see vast improvements in fix rates over those that do not.
Data from our platform reveals that organizations that employ remediation coaching services, which work with developers to prioritize and fix security-related defects, see a 1.45x improvement in flaw density reduction from their efforts. And those who arm their developers with eLearning opportunities are logging a whopping 6x improvement in flaw density reduction through their remediation practices.
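To put those multiples in concrete terms, flaw density is essentially open flaws per unit of code, and the improvements above describe how much faster that number falls. The sketch below works through the arithmetic with made-up numbers and an assumed per-MB unit; it is an illustration, not Veracode's exact methodology.

```python
# Illustrative arithmetic only: the per-MB unit and all numbers are made up,
# not Veracode's published methodology.

def flaw_density(open_flaws: int, app_size_mb: float) -> float:
    """Open flaws per MB of scanned application code (assumed unit)."""
    return open_flaws / app_size_mb

APP_SIZE_MB = 50.0

# A hypothetical team reduces open flaws from 120 to 100 over a quarter.
baseline_drop = flaw_density(120, APP_SIZE_MB) - flaw_density(100, APP_SIZE_MB)

# Applying the report's multiples to that baseline reduction:
with_coaching = 1.45 * baseline_drop   # remediation coaching
with_elearning = 6.0 * baseline_drop   # developer eLearning

print(f"baseline reduction:    {baseline_drop:.2f} flaws/MB")
print(f"with coaching (1.45x): {with_coaching:.2f} flaws/MB")
print(f"with eLearning (6x):   {with_elearning:.2f} flaws/MB")
```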
What the technology looks like
These advanced programs have implemented multiple testing techniques that cover each and every application throughout its lifecycle – from inception to production.
Our most recent State of Software Security (SoSS) report provides data supporting the idea that multiple testing techniques are more effective than any single technology. SoSS version 7 shows statistically significant differences in the types of vulnerabilities discovered by testing applications dynamically at runtime versus testing them statically in a non-runtime environment.
When we looked at the top five vulnerability categories unearthed by dynamic scanning, one of them (deployment configuration) did not appear at all in the list of vulnerabilities uncovered by static analysis.
In addition, 25 percent of applications tested dynamically had Cross-Site Scripting vulnerabilities, versus 52 percent of applications that were tested statically.
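To make the contrast concrete, here is a minimal sketch (not one of our scanners, just an illustration that uses Flask and requests as stand-ins): the route contains a reflected XSS pattern that a static analyzer can flag in source, while the header check only makes sense against a running deployment, which is where configuration issues surface. The URL in the usage comment is a placeholder.

```python
# Why static and dynamic testing surface different flaws. Flask and requests
# are used only as familiar stand-ins; this is not a scanner.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/greet")
def greet():
    # Reflected XSS: user input is echoed into HTML without escaping.
    # A static analyzer can flag this data flow directly in the source code.
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"

def missing_security_headers(base_url: str) -> list[str]:
    """A runtime-only check: deployment configuration issues such as missing
    security headers are invisible to static analysis of the source."""
    response = requests.get(base_url, timeout=5)
    expected = ["Content-Security-Policy", "Strict-Transport-Security"]
    return [h for h in expected if h not in response.headers]

# Example: missing_security_headers("https://staging.example.com")
```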
The major takeaway here is that neither type of test is necessarily better than the other; they're just different. As such, it is important to remember that no single testing mechanism is going to solve all of your application security problems. It takes a balanced approach to properly evaluate and mitigate risks.
The most successful AppSec programs not only use a mix of static, dynamic and manual testing, but also assess every application: not just those developed internally, but also those purchased from third parties and those assembled from open source components. In addition, they assess each of these applications throughout its lifecycle – from development to QA and production.
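As a toy illustration of the open source piece, the sketch below audits a declared dependency list against a hypothetical, hard-coded advisory table; real software composition analysis relies on curated vulnerability feeds rather than anything this simplistic.

```python
# Illustrative software-composition check: compare declared dependencies to a
# hypothetical, hard-coded advisory list. Real SCA tools use curated feeds.
KNOWN_ADVISORIES = {
    ("example-lib", "1.2.0"): "known vulnerability (hypothetical advisory)",
}

def audit_dependencies(declared: dict[str, str]) -> list[str]:
    """Return a note for any dependency pinned to a version with an advisory."""
    return [
        f"{name}=={version}: {KNOWN_ADVISORIES[(name, version)]}"
        for name, version in declared.items()
        if (name, version) in KNOWN_ADVISORIES
    ]

print(audit_dependencies({"example-lib": "1.2.0", "other-lib": "3.4.5"}))
```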
One particular technology our best-performing customers’ programs feature (and that is a key part of the “roles” discussion above) is a solution that lets developers test early and often, in private, without affecting security policies – such as Veracode Static Analysis IDE Scan and Veracode Developer Sandbox. Our most recent SoSS report compared the fix rates of organizations that use Developer Sandbox with those that don’t. Overall, developers who test unofficially using Developer Sandbox scanning improve policy-based vulnerability fix rates by about 2x.
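The workflow those tools enable boils down to: scan privately, fix, and only then run the scan that counts against policy. The sketch below expresses that split with a hypothetical run_scan() placeholder; it is not the Veracode API, just the shape of the workflow.

```python
# Conceptual sketch of "scan privately first, then run the scan that counts."
# run_scan() is a hypothetical placeholder, not the Veracode API.
from dataclasses import dataclass

@dataclass
class ScanResult:
    flaws: int
    affects_policy: bool

def run_scan(build_path: str, sandbox: bool) -> ScanResult:
    """Placeholder for submitting a build to a scanner. Sandbox results stay
    private to the developer; only non-sandbox scans count against policy."""
    # Canned result so the sketch runs; a real integration would call a scanner.
    return ScanResult(flaws=0, affects_policy=not sandbox)

def developer_workflow(build_path: str) -> ScanResult:
    # Iterate privately: sandbox scans never change the app's policy status.
    result = run_scan(build_path, sandbox=True)
    while result.flaws > 0:
        # ...developer fixes flaws, rebuilds and rescans here...
        result = run_scan(build_path, sandbox=True)
    # Only a clean build is promoted to the official, policy-affecting scan.
    return run_scan(build_path, sandbox=False)

print(developer_workflow("app/build/release.zip"))
```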
What the measurement looks like
We find time and again that AppSec success hinges on metrics – both to prove the value of the program and get buy-in from others, and to home in on areas where the program needs to be tweaked and improved. For instance, your program will benefit greatly from the answers to questions like: Is your fix rate improving over time? How does your rate compare to your peers’? Has your flaw density improved, and how do you rank against others on this metric? What types of vulnerabilities are you seeing, and is that in line with others in your industry?
Advanced application security programs measure results through a set of metrics and key performance indicators (KPIs), such as compliance, flaw prevalence, fix rates, industry standards and business- and goal-specific performance.
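As a simple illustration of turning scan data into a couple of those KPIs, the sketch below computes a fix rate trend and a policy compliance flag from hypothetical quarterly numbers; the field names are assumptions, not a prescribed schema.

```python
# Hypothetical KPI calculations for an AppSec dashboard; the data model is an
# illustrative assumption, not a standard schema.
from dataclasses import dataclass

@dataclass
class QuarterSummary:
    flaws_found: int
    flaws_fixed: int
    open_policy_violations: int

def fix_rate(q: QuarterSummary) -> float:
    """Share of discovered flaws that were remediated in the period."""
    return q.flaws_fixed / q.flaws_found if q.flaws_found else 1.0

def policy_compliant(q: QuarterSummary) -> bool:
    return q.open_policy_violations == 0

q1 = QuarterSummary(flaws_found=200, flaws_fixed=120, open_policy_violations=3)
q2 = QuarterSummary(flaws_found=180, flaws_fixed=153, open_policy_violations=0)

print(f"fix rate trend: {fix_rate(q1):.0%} -> {fix_rate(q2):.0%}")
print(f"compliant now:  {policy_compliant(q2)}")
```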
Take your program to the next step
Where are you in your AppSec journey? For more details on effective and advanced application security programs, and how to get there, check out our new guide, Ad Hoc to Advanced Application Security: Your Path to a Mature AppSec Program.