If you’re developing software, you need quality assurance (QA) and you need a way to manage vulnerability discovery. These are two distinct streams. QA focuses on bugs: problems that break things or prevent features from behaving as intended. The QA process tests the software to ensure all features work properly. You’re following a defined requirements list, making it a bounded exercise. This isn’t to say the QA process is easy. The culprit could lie in one tiny line of code among millions, but you’re looking for something specific.
A vulnerability, on the other hand, is a weakness or flaw that could be exploited for nefarious purposes. It’s not a bug. A vulnerability can exist in code that works flawlessly from a functionality perspective. If security is explicitly baked into requirements, it may be actively tested in QA. But even with strong, embedded, and tested security controls, you still need a separate application vulnerability discovery program. Cyber criminals will seek any weakness to exploit, and they’re actively looking for areas of exposure you haven’t (yet) considered and addressed. That’s why identifying and remediating vulnerabilities is not about software quality; it’s about managing risk.
Why is this so hard?
Software’s complexity makes it hard to identify and prioritize vulnerabilities under the best of circumstances. But the pressure to accelerate release time frames creates a moving target of potential exposure in continually evolving applications and infrastructures. Most companies have a range of testing tools that provide information about specific vulnerabilities, but this visibility is fragmented and doesn’t allow for a comprehensive, let alone real-time, view of overall risk. And while security technology is critical, it’s not enough. You also need the right processes, knowledge, and culture to create and execute an effective program.
Here are the critical building blocks you need.
Organizations looking to build and scale an effective application security program must create a foundation that rests on five key practices.
- Prioritization. Not every issue flagged by vulnerability scanning tools needs to be addressed with the same rigor. The key is the ability to determine which issues will impact your organization most, weighing factors such as severity, exploitability, and the criticality of the affected asset.
- Normalization. Disparate tools generate different information about vulnerabilities. Some use industry-standard frameworks, which helps, but this isn’t always the case. To make sense of your vulnerability data from an overall risk perspective, you need to use the same scorecard for all your tools.
- Consolidation of Views. Most security testing tools are used within a particular stage of the SDLC. This is important for discovering vulnerabilities in the development pipeline, but it creates stage-specific silos. To get a holistic view of risk, you must consolidate testing results into a single view across the software value stream.
- Consolidation of Effort. Once you have a consolidated view, you need to translate the information into an action plan. A particular vulnerability will often surface in multiple instances. You need to report “units of work,” consolidating issues within a defined scope, such as the same repository or server type.
- Accountability and Ownership. The teams that created the pipelines and codebases with identified vulnerabilities should do the remediation work, which means you must be able to map each known vulnerability to a responsible team. Making remediation the owning team’s job builds a security-minded, risk-aware culture throughout the enterprise.
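To make the normalization, consolidation-of-effort, and prioritization practices concrete, here is a minimal sketch in Python. All tool names, severity labels, and score mappings below are invented for illustration; a real program would map each scanner's output onto an industry-standard scale such as CVSS. The sketch normalizes findings from two hypothetical scanners onto a common 0–10 score, groups high-risk findings into one unit of work per repository and rule, and orders the units by risk.

```python
from collections import defaultdict

# Hypothetical severity vocabularies for two scanners, mapped onto a
# common 0-10 scale (CVSS-like). These mappings are assumptions.
TOOL_A_SCALE = {"info": 0.0, "low": 3.0, "medium": 5.5, "high": 8.0, "critical": 9.5}
TOOL_B_SCALE = {"note": 0.0, "warning": 5.0, "error": 9.0}

def normalize(finding):
    """Normalization: attach a unified 'score' regardless of source tool."""
    scale = TOOL_A_SCALE if finding["tool"] == "tool_a" else TOOL_B_SCALE
    return {**finding, "score": scale[finding["severity"]]}

def units_of_work(findings, min_score=7.0):
    """Consolidation of effort: one unit of work per (repo, rule) pair,
    keeping only findings above a risk threshold, riskiest units first."""
    units = defaultdict(list)
    for f in map(normalize, findings):
        if f["score"] >= min_score:
            units[(f["repo"], f["rule"])].append(f)
    # Prioritization: sort units by the highest score they contain.
    return sorted(units.items(),
                  key=lambda kv: max(f["score"] for f in kv[1]),
                  reverse=True)

findings = [
    {"tool": "tool_a", "repo": "payments", "rule": "sql-injection", "severity": "critical"},
    {"tool": "tool_b", "repo": "payments", "rule": "sql-injection", "severity": "error"},
    {"tool": "tool_a", "repo": "frontend", "rule": "xss", "severity": "high"},
    {"tool": "tool_a", "repo": "frontend", "rule": "weak-hash", "severity": "low"},
]

for (repo, rule), items in units_of_work(findings):
    print(f"{repo}/{rule}: {len(items)} finding(s)")
# prints:
# payments/sql-injection: 2 finding(s)
# frontend/xss: 1 finding(s)
```

Note how the two reports of the same SQL injection in the payments repository, raised by different tools with different labels, collapse into a single unit of work, while the low-severity finding drops below the action threshold entirely.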
When you apply these practices across your application portfolio and infrastructure, you strengthen your vulnerability discovery program. And you vastly improve your ability to address the real risks your business is facing.
For more information on these critical questions and how risk-based vulnerability orchestration can help, check out our white paper, “Making Application & Infrastructure Vulnerability Management More Effective.”