PHDays 9: new methods of Vulnerability Prioritization in Vulnerability Management products
On May 21, I spoke at the PHDays 9 conference. I talked about new methods of Vulnerability Prioritization in the products of Vulnerability Management vendors.
During my 15-minute time slot I defined the problems that this new technology has to solve, showed why these problems could NOT be solved using existing frameworks (CVSS), described what we currently have on the market and, as usual, criticized VM vendors and their solutions a little bit.
Here is the video of my presentation in Russian:
And here is one with simultaneous translation into English. Unfortunately, I didn’t know that there would be a translation, so I spoke quickly (which is natural for a fast track) and used specific slang. That made the translator’s task significantly harder, so I can’t really recommend watching the translated video; it might be better to read the full write-up below. By the way, I was posting parts of the write-up to my Telegram channel avleonovcom in real time. Telegram is currently my main blogging platform, so I invite you to follow me there.
Presentation slides:
PHDays 9: new methods of Vulnerability Prioritization in Vulnerability Management products from Alexander Leonov
Truly revolutionary year for VM
I think that 2019 is a truly revolutionary year for the whole Vulnerability Management industry: top VM vendors (well, at least two of them) have finally publicly recognized the problem of Vulnerability Prioritization and begun to offer solutions.
The problem is that most of the vulnerabilities a Vulnerability Scanner can detect are actually unexploitable and worthless for an attacker, and it’s hard to say exactly which ones. These can even be vulnerabilities labeled “Critical” or “High”, or flagged as “Exploit exists”.
And you still have to fix such unexploitable vulnerabilities and face negative reactions from IT because of unnecessary remediation efforts, downtime, and the “Boy Who Cried Wolf” effect.
Certainly, this is no secret to anyone who has ever launched a vulnerability scan, but this state of things has persisted for decades (Tenable/Nessus, Qualys, and Rapid7 are more than 20 years old!). “We give you information about vulnerabilities that we received from software vendors as is, and it’s up to you to make this data actionable.” This has always driven me crazy, and I am glad that some vendors have finally started to say publicly that it’s not OK (with their own marketing reasons, of course).
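To make the “Boy Who Cried Wolf” effect concrete, here is a minimal sketch in Python, with entirely hypothetical scan data and field names, showing how much the remediation queue shrinks once you require even a single exploitability signal instead of relying on CVSS alone:

```python
# A hypothetical scan result. In practice this would come from your
# VM product's API or a parsed scan report; the field names here
# are invented for illustration.
findings = [
    {"cve": "CVE-2019-0001", "cvss": 9.8, "exploit_public": False},
    {"cve": "CVE-2019-0002", "cvss": 7.5, "exploit_public": True},
    {"cve": "CVE-2019-0003", "cvss": 8.1, "exploit_public": False},
    {"cve": "CVE-2019-0004", "cvss": 5.3, "exploit_public": True},
]

# Naive approach: everything with CVSS >= 7.0 becomes a ticket for IT.
by_cvss = [f for f in findings if f["cvss"] >= 7.0]

# Adding just one exploitability signal already cuts the queue.
# Real prioritization engines weigh many more signals (malware usage,
# exploit kit inclusion, asset criticality and exposure).
actionable = [f for f in findings if f["cvss"] >= 7.0 and f["exploit_public"]]

print(len(by_cvss), "tickets by CVSS alone")           # 3
print(len(actionable), "tickets with exploit filter")  # 1
```

This is, of course, only a toy filter; the point is that the naive rule floods IT with tickets while an exploitability-aware one does not.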
Invisible work of keeping Vulnerability Databases up to date
It may seem that I criticize Vulnerability Management vendors because they don’t validate the exploitability of vulnerabilities. Not really. I know that they have many other important things to do, especially the teams responsible for keeping Vulnerability Databases up to date.
On nearly every VM vendor’s website you will find news about vulnerabilities that were discovered by their vulnerability researchers. From a marketing point of view, such activity seems valuable. Vulnerability researchers promote the vendor at security events and spread the message: if the researchers are competent, then the Vulnerability Management product should be good as well.
But in reality, Vulnerability Research has almost nothing to do with the main functionality of Vulnerability Scanners. How many vulnerabilities in software products can a good research team discover in several years? In the best case, maybe hundreds. A reasonably good Vulnerability Management solution needs vulnerability detection rules (plugins) for more than 100,000 existing vulnerabilities! Making a good Vulnerability Management solution is not about researching new individual vulnerabilities; it’s about effective aggregation and processing of poorly structured external data.
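To give a feel for that aggregation work, here is a minimal sketch that merges two inconsistently structured records about the same CVE into one normalized entry. The record layouts are simplified and partly hypothetical: the first loosely imitates the NVD JSON feed, the second is an invented vendor advisory format.

```python
# A minimal sketch of the aggregation work: merging two differently
# structured records about the same CVE into one normalized entry.
# The NVD-like layout loosely follows the old JSON feed; the vendor
# advisory format is entirely made up for illustration.

nvd_record = {
    "cve": {"CVE_data_meta": {"ID": "CVE-2019-0001"}},
    "impact": {"baseMetricV3": {"cvssV3": {"baseScore": 9.8}}},
}

vendor_advisory = {
    "id": "VENDOR-2019-042",
    "cves": "CVE-2019-0001, CVE-2019-0005",  # CVEs as free text
    "fixed_version": "2.4.1",
}

def normalize(nvd, advisory):
    """Flatten two inconsistent records into one entry that a
    detection-plugin generator could consume."""
    cve_id = nvd["cve"]["CVE_data_meta"]["ID"]
    entry = {
        "cve": cve_id,
        # Defensive .get() chains: real feed records often miss fields.
        "cvss_v3": nvd.get("impact", {})
                      .get("baseMetricV3", {})
                      .get("cvssV3", {})
                      .get("baseScore"),
        "fixed_version": None,
    }
    # The advisory lists CVEs as a comma-separated string,
    # so even this toy example needs text parsing.
    if cve_id in (c.strip() for c in advisory["cves"].split(",")):
        entry["fixed_version"] = advisory["fixed_version"]
    return entry

print(normalize(nvd_record, vendor_advisory))
# {'cve': 'CVE-2019-0001', 'cvss_v3': 9.8, 'fixed_version': '2.4.1'}
```

Now multiply this by hundreds of sources with formats that change without notice, and the scale of this invisible work becomes clear.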
Some comments under the presentation slides:

CEO & founder, author of Risk Centric Threat Modeling and the PASTA methodology:
“Problem statements depicted on the slide are spot on. Before we even get to automation considerations, most #infosec pros and #CISOs need to realize that most CVEs don’t have associated exploit code. Incredibly, risk prioritization at multiple organizations, even well before today’s automation trends, is being done solely on CVSS scores, which is always baffling considering that those scoring models *don’t* have contextual pieces of information related to deployment model, architecture, data types and use, and other factors. This throws the risk messaging way off for risk owners, execs, and board members who need true risk analysis of current exposure levels, not simply technical risk ratings. CVSS is useful, but needs to be considered with other factors (e.g., is the exploit code publicly available, context of data use, architecture, access models, etc.).”

Results-driven leader @Amazon/Meta/multiple startups, 25 years launching cybersecurity innovations:
“Anyone who has been using these for 20 years knows that vulnerability detection (what these vendors do) is often simple (not high precision) and has very low recall (false negatives are better than false positives, for the vendor). Precision/accuracy and exploitability are both separate steps one must take in prioritization. Scoring systems like CVSSv3 exist for this reason, though I don’t know anyone who has ever used CVSSv[anything] natively for scoring/prioritization.”

Another commenter:
“I only have experience doing VM in one company... sure wish I could see how it works in another company.”

Serkan Özkan:
“In my opinion, relying on information (priority, score, description, etc.) provided by a single source/tool is the main problem. It seems like too much work, but everyone should review, assess, and prioritize issues themselves, which requires easy access to high-quality information from multiple sources. This is the bitter truth.”