Product security incident response teams (PSIRTs) are teams of security professionals who work diligently behind the scenes to protect a company's software products and services. A PSIRT is a different breed from a computer security incident response team (CSIRT), the kind of team typically associated with Information Security. The difference is simple but stark: a CSIRT focuses on responding to incidents that affect a company's infrastructure, data, or users. A PSIRT focuses on responding to incidents that affect the products a company builds, the most common being the discovery of a vulnerability or security defect and the subsequent actions to manage or remediate it.
Tools for PSIRT
I've been a part of a PSIRT for over 20 years, first as the leader of Mandriva's PSIRT (although we didn't call it that then) and currently for Red Hat. While it's changing somewhat today, there were never that many tools for a PSIRT to use compared to the plethora of tools available to CSIRTs. Sure, we have software composition analysis (SCA), static application security testing (SAST), and dynamic application security testing (DAST) tools to identify known and unknown vulnerabilities in our products. But there was never a great way to manage the data around those vulnerabilities, so most PSIRTs rely on homegrown tooling or piggyback things onto existing tools that weren't meant for that use.
For example, when I started at Red Hat nearly 14 years ago, I used the Bugzilla instance directly to track and file bugs and vulnerability information. Back then it was quite simple: there was a CVE bug that contained the details of a vulnerability, and child bugs were then created for product teams to track to remediation. This worked when we only had to worry about Red Hat Enterprise Linux and the JBoss Enterprise Application Platform. As we began to develop and support more products, we found that doing this manually didn't scale with such a small team. We wrote a series of scripts in Python, fondly referred to as the security flaw manager (SFM), that manipulated Bugzilla through API calls to create bugs, add comments, and manage other metadata: when a flaw was reported, when it was made public, impact and scoring metrics, which products were affected, and other useful data. None of this was properly supported by the bug tracking system; it was instead stuffed into other fields in custom formats, prone to human meddling. While rudimentary, these scripts did what we needed them to do, for a time. But as we wanted to collect more metadata, and had increasingly more products to support, SFM felt a little long in the tooth. After all, who wants to do all of this work on the command line?
A number of years ago we endeavored to create a new tool. We developed SFM2, a single web-based application that did what SFM did and more. It had better search, which helped with the ever-growing number of CVEs we had to track and deal with. It provided better quality checks, ensuring we didn't miss anything as dealing with more vulnerabilities and more products became ever more complicated. We knew this was something that other PSIRTs might be interested in, and for some time we held out hope of modularizing it and making it open and available. But it was still bound to specific Bugzilla customizations, which made it difficult or impossible for anyone other than us to use.
The evolution of SFM2
This was quite frustrating, as we had effectively developed innersource software, a term coined by Tim O'Reilly over two decades ago. Everything was written with open languages and built in an open source way. But we couldn't share it, so no one could benefit from our work, nor could we benefit from others' experience and input. We knew there were other companies out there dealing with even more complexity in their products and, now, managed or hosted services. As a leader in managing open source vulnerabilities, our team had some excellent tooling we couldn't share with anyone because we had inadvertently allowed feature creep and ties to custom tooling to get in the way.
So last year we took another look at the problem. SFM2 was not designed in a way that allowed us to maintain it well, and there were other deficiencies we needed to correct, but we had hit a wall. We needed different capabilities, and the tooling was designed for a very specific way of working that needed to change for efficiency and scale. And using Bugzilla as a backend database, which worked well enough a decade ago, was no longer ideal. In fact, it was the single biggest hindrance we had.
What we needed was not a monolithic application but a set of smaller services that worked well together using APIs. The way I explained it when we were conceptualizing this a year ago was the difference between the sendmail and qmail email servers. Sendmail was a single monolithic application that did everything, whereas qmail was composed of different services where the output from one was passed as input to another, each distinct enough to make it easier to maintain. This was, after all, a key part of the original UNIX philosophy, something that many of us who've been doing this for quite a while still hold in high esteem.
As a result, we set out to build four primary applications: a flaw database that would store all of the vulnerability information (replacing Bugzilla as our backend), a frontend to that database to make it easy to add and update information, a registry of components that could be used as a manifest of all our products and services so we could easily find where any given component might live, and finally a license scanner to ensure we met our open source license compliance requirements. One of the core design principles was to have the primary method of interaction be via APIs such that we could write a frontend that no one was obligated to use (if an end-user was authorized, they could recreate the SFM scripts of yore to interact with the flaw information via the command line). But more importantly, the services could be integrated with other existing tooling directly, using standardized and open data interchange formats, rather than manual duplication of metadata from one platform to another.
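To make that API-first principle concrete, here is a minimal sketch of how the command-line interaction of the SFM days could be recreated against a REST service. The host name, the endpoint path, and the query parameter names here are invented for illustration; they are not the actual API of any of these tools.

```python
from urllib.parse import urlencode

def build_flaw_query(base_url, cve_id=None, component=None):
    """Build a REST query URL for a flaw database service.

    The /api/v1/flaws path and the parameter names are hypothetical;
    a real deployment would publish its own API schema.
    """
    params = {}
    if cve_id:
        params["cve_id"] = cve_id
    if component:
        params["component"] = component
    url = base_url.rstrip("/") + "/api/v1/flaws"
    return url + ("?" + urlencode(params) if params else "")

# Any HTTP client (curl, a script, or another service) can then fetch:
print(build_flaw_query("https://flawdb.example.com", cve_id="CVE-2021-44228"))
# https://flawdb.example.com/api/v1/flaws?cve_id=CVE-2021-44228
```

Because the contract is just a URL and a standard data format, the same query can come from a CLI script, a web frontend, or an integration with existing tooling, with no manual duplication of metadata between platforms.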
Further, another core principle was that these tools had to be developed in the open. We did this for a few reasons. First, we wanted others to be able to use and contribute to these tools. Second, it enforced a certain amount of rigor: we couldn't design these tools for our own use exclusively, so no more innersource.
With the experience and lessons learned moving through not just one but two generations of tooling to support open source vulnerability management, we're pretty sure we chose the right path forward. Yet we're humble enough to know that others may have different needs, hence the invitation to join us in developing these tools. Nearly every organization, from large enterprise open source producers to the pizza shop down the street with its web and mobile applications, is a software developer today. So there's a need for vulnerability management tools beyond homegrown ones, spreadsheets, and hacked-up add-ons to software or services not designed to handle a PSIRT process. There are a lot of tools for CSIRTs and developers, but not that many for product security incident response and coordination.
If you're interested in looking at or using any of these tools, we invite you to collaborate with us through GitHub. While we have been working on these for a while, we have only worked on three of the four tools to date. The fourth, the frontend to the flaw database, the service layer that operates between these services, is yet to be started.
Component Registry: stores component information across any number of products and services, acting as a manifest of where any given component lives
OSIDB: the Open Security Issue Database, which stores all vulnerability data
OpenLCS: the Open License and Crypto Scanner, which obtains license and cryptography information from shipped components
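To illustrate how these services are meant to complement each other, here is a small sketch of the kind of join they enable: given a flaw record from a vulnerability database and a component-to-product mapping from a component registry, find which products ship the affected component. The data shapes and names are invented for this example and do not reflect the real schemas of these tools.

```python
# Hypothetical, simplified data shapes: a registry mapping component names
# to the products that ship them, and a flaw record naming a component.
registry = {
    "log4j-core": ["Product A", "Product B"],
    "openssl": ["Product A", "Product C"],
}

flaw = {"cve": "CVE-2021-44228", "component": "log4j-core"}

def affected_products(flaw, registry):
    """Return the products that ship the flaw's affected component."""
    return registry.get(flaw["component"], [])

print(flaw["cve"], "affects", ", ".join(affected_products(flaw, registry)))
# CVE-2021-44228 affects Product A, Product B
```

In practice this lookup would happen over the services' APIs rather than in-memory dictionaries, but the shape of the question, "where does this component live?", is the same.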