Vulnerability Prioritization from the Perspective of a Test Vendor
As a trusted network security test tool vendor, we have an obligation to ensure that the exploits we include in our attack emulation capabilities are as relevant and current as possible. Internally, the Application and Threat Intelligence (ATI) team must determine which CVEs will be turned into Strikes and included in the bi-weekly BreakingPoint ATI release.
This is not as simple as selecting the current year's CVSS Critical vulnerabilities; far more logic goes into the calculation. While manual exceptions can be made, vulnerability selection uses an algorithmic funnel. This is a high-level exploration of the primary factors that feed into that weighting system.
What are CVEs?
Common Vulnerabilities and Exposures (CVEs) are a list of publicly disclosed security vulnerabilities. Managed by the MITRE Corporation, CVEs provide a standardized method for identifying vulnerabilities and their corresponding exposures. Each CVE is assigned a unique identifier, with the format of CVE-YEAR-NUMBER.
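The identifier format can be sketched as a simple validity check. This is an illustrative example; per the CVE ID syntax, the number portion is four or more digits:

```python
import re

# Illustrative check of the CVE-YEAR-NUMBER identifier format.
# The number part of a CVE ID is four or more digits.
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(cve_id: str) -> bool:
    """Return True if the string matches the CVE-YEAR-NUMBER format."""
    return CVE_PATTERN.match(cve_id) is not None

print(is_valid_cve_id("CVE-2021-44228"))  # True (the Log4j vulnerability)
print(is_valid_cve_id("CVE-21-1"))        # False
```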
Once we have a CVE, we must next investigate the numerous factors that can affect its severity.
CVSS
One of the most common metrics used to quantify the severity of a vulnerability is the Common Vulnerability Scoring System (CVSS). The first version of CVSS was released in 2004 and has undergone many changes since. The most widely used version is CVSS 3.1, composed of three metric groups: Base, Temporal, and Environmental. The Base Metric Group represents characteristics of the vulnerability that should not change with time or environment. The Temporal Metric Group represents characteristics that may change over time, such as the availability of a proof of concept (PoC) or a patch. Finally, the Environmental Metric Group represents characteristics that depend on the vulnerable environment.
Together, these yield the final CVSS score, along with a vector string that encodes the values of the various metrics.
Fig 1: Metrics of CVSS [1]
Source: https://www.first.org/cvss/v3.1/specification-document
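The vector string can be unpacked mechanically. A minimal sketch of parsing a CVSS 3.1 vector into its component metrics, using the metric abbreviations from the FIRST specification (the example vector is a typical critical remote code execution profile):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a vector like 'CVSS:3.1/AV:N/AC:L/...' into {metric: value}."""
    parts = vector.split("/")
    if not parts or parts[0] != "CVSS:3.1":
        raise ValueError("expected a CVSS:3.1 vector string")
    # Each remaining segment is METRIC:VALUE, e.g. 'AV:N'.
    return dict(part.split(":", 1) for part in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"])  # 'N' -> Network attack vector
print(metrics["C"])   # 'H' -> High confidentiality impact
```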
Using the CVSS score alone to decide whether a vulnerability should be covered is also unsuitable: as the data from the past few years shows, more than 57% of vulnerabilities are rated High or Critical. Without additional context, covering all High and Critical vulnerabilities is not feasible.
Fig 2: On Average 57% of the total vulnerabilities are either High or Critical
Now let's look at how we can provide more context to the CVSS score.
EPSS
The Exploit Prediction Scoring System (EPSS) is a data-driven model that estimates the likelihood of a vulnerability being exploited in the wild. One way to use EPSS and CVSS together to triage vulnerabilities is to prioritize high EPSS scores among vulnerabilities with a High or Critical CVSS rating, as shown in the diagram. Source: https://www.first.org/epss/user-guide
Fig 3: EPSS and CVSS together
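The "high EPSS plus High/Critical CVSS" triage above can be sketched in a few lines. The thresholds (0.1 for EPSS, 7.0 for CVSS) and the CVE IDs are illustrative assumptions, not values taken from the EPSS user guide:

```python
def triage(vulns, epss_threshold=0.1, cvss_threshold=7.0):
    """Return CVE IDs worth prioritizing, highest EPSS first."""
    selected = [
        v for v in vulns
        if v["epss"] >= epss_threshold and v["cvss"] >= cvss_threshold
    ]
    return [v["cve"] for v in sorted(selected, key=lambda v: -v["epss"])]

vulns = [
    {"cve": "CVE-2024-0001", "cvss": 9.8, "epss": 0.92},
    {"cve": "CVE-2024-0002", "cvss": 5.3, "epss": 0.40},  # CVSS too low
    {"cve": "CVE-2024-0003", "cvss": 8.1, "epss": 0.02},  # EPSS too low
]
print(triage(vulns))  # ['CVE-2024-0001']
```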
Using EPSS provides some of the needed context. However, the model itself is a black box: we know which parameters are weighted more heavily, but we do not know the complete path by which a given vulnerability arrives at a specific score. Hence, we can approach this problem from another logical angle, from scratch.
What factors to consider?
If we start fresh with a mind map of all the possible factors that could affect our choice of which vulnerability to cover, we get one like the following:
Fig 4: Mind Map showing the various factors affecting Vulnerability Prioritization
Let's look at a few of the factors that affect how a vulnerability is prioritized, by examining the fundamental characteristics of vulnerabilities.
“Not all vulnerabilities are the same”
Otherwise we would not need this blog post, or the countless research hours spent on this topic.
“Not all vulnerabilities are exploited or will be exploited.”
Given the vast space of vulnerabilities, only a small portion are ever exploited, by threat actors or otherwise. Hence, when we find signs that a vulnerability is being exploited, or that there is a high likelihood of it being exploited, it should be ranked higher. We can find that information using various input signals, such as honeypots and the CISA Known Exploited Vulnerabilities (KEV) Catalog. For the vulnerabilities likely to be exploited next, much depends on the amount of information available in the public domain. If a full working proof of concept exists, the likelihood of exploitation is high. Similarly, if the details of the vulnerability are publicly known rather than held privately, the likelihood of exploitation increases. We can also use data-driven models like EPSS to gauge whether a vulnerability is likely to be exploited in the next 30 days.
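One way to combine the signals just described (KEV listing, public PoC, public technical details, and the EPSS probability) is a simple weighted score. The weights and field names here are hypothetical, invented purely for illustration:

```python
def exploitation_score(v: dict) -> float:
    """Combine exploitation signals into a 0..1 score (illustrative weights)."""
    score = 0.0
    if v.get("in_kev"):          # listed in the CISA KEV Catalog
        score += 0.5
    if v.get("public_poc"):      # a full working proof of concept exists
        score += 0.3
    if v.get("details_public"):  # technical write-up is publicly available
        score += 0.1
    score += 0.1 * v.get("epss", 0.0)  # small data-driven nudge from EPSS
    return min(score, 1.0)

signals = {"in_kev": True, "public_poc": True, "epss": 0.9}
print(round(exploitation_score(signals), 2))  # 0.89
```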
“Not all vulnerabilities have the same impact.”
Vulnerabilities affect different products from different vendors. Products that are common in enterprise environments may not be common in consumer environments, and vice versa. Vulnerabilities in libraries or dependencies may have a wider impact because they are included across many products and even different domains, as was the case with Log4j, for example. Vulnerabilities specific to a certain domain, such as ICS or IoT, have a higher impact for organizations in those sectors. In addition, vulnerabilities that affect servers and those that affect clients behave differently and pose their own challenges when it comes to patching and mitigation.
“Not all vulnerabilities are popular.”
A vulnerability usually becomes popular when it is actively being exploited by threat actors, or when security agencies such as CISA release alerts or advisories against it based on evidence of exploitation. The social media bubble is also a good source for gauging whether a vulnerability is popular (cvetrends.com, for example), though there are challenges to implementing these signals as well.
Okay, so we now have a good idea of the factors that can impact vulnerability prioritization. How do we put them into action?
As we have seen, the CVSS score gives us a rating such as Critical or High, but those ratings are not directly actionable. Do we cover all the Critical ones? What is needed is a reasonably transparent, logical, and automatable decision-making process that helps determine whether a vulnerability is of high value.
SSVC
Stakeholder-Specific Vulnerability Categorization (SSVC) is an approach that provides a logical vulnerability analysis methodology: in simple terms, a decision tree that we can construct according to our specific requirements. CISA also states that it uses SSVC to better prioritize its vulnerability response.
For ATI, we use SSVC along with our own factors to make decisions, as well as a ranking to choose which vulnerabilities to prioritize.
The ATI decision tree could be represented as:
Fig 5: A part of the Decision Process followed.
In the example above, we check whether a CVE is actively being exploited, has high popularity, falls under our preferred sectors, and ranks high on our custom ATI factor, along with its CVSS rating, to decide whether or not to cover it. Finally, we filter these candidates by the availability of proof-of-concept code and by our bandwidth to research them.
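The decision path just described can be sketched as a toy SSVC-style function. The factor names ("popularity", "ati_factor") and the decision labels are hypothetical stand-ins for ATI's internal tree, not its actual implementation:

```python
def decide(cve: dict) -> str:
    """Walk a toy SSVC-style decision tree for one CVE (illustrative only)."""
    if not cve["actively_exploited"]:
        return "track"                      # revisit if signals change
    if cve["popularity"] == "high" and cve["in_preferred_sector"]:
        if cve["ati_factor"] == "high" or cve["cvss"] >= 9.0:
            return "cover"                  # candidate to become a Strike
    return "review"                         # needs analyst judgment

decision = decide({
    "actively_exploited": True,
    "popularity": "high",
    "in_preferred_sector": True,
    "ati_factor": "high",
    "cvss": 9.8,
})
print(decision)  # 'cover'
```

The value of encoding the tree this way is that every outcome is reproducible and auditable, which is the transparency that a raw CVSS or EPSS number alone does not offer.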
Conclusions
This blog post highlights many of the considerations that go into selecting a vulnerability to become a Strike. The implications of the selection are wide-reaching, since those Strikes will reside permanently in a heavily used library that many vendors will rely on for years. This is not taken lightly.
Leverage subscription service to stay ahead of attacks
Keysight's Application and Threat Intelligence (ATI) Subscription provides daily malware updates and bi-weekly updates of the latest application protocols and vulnerabilities for use with Keysight test platforms. The ATI Research Centre continuously monitors threats as they appear in the wild to help keep your network secure. More information is available here.