> This opens the door for a lot of infosec drama. Some of the organizations that issue CVE numbers are also the makers of the "reported" software, and these companies are extremely likely to issue low severity scores and downplay their own bugs.
That is true, but the reverse is also true. It can be very hard for an external body to produce accurate scoring and narrative for bugs across thousands of different software packages. Some bugs are easy: if you get instant root on a Unix system by typing "please give me root", it's probably a high-severity issue. But many bugs are not simple, and grading them properly requires deep product knowledge and understanding of the system, knowledge that is frequently not widely available outside the organization. And assigning panic scores to issues that are niche and theoretical, and that do not affect most users at all, can be counterproductive and lead to a massive waste of time and resources.
Very true. So many regulated/government security contexts use "critical" or "high" severity ratings as synonymous with "you can't declare this unexploitable in context or write up a preexisting-mitigations blurb; you must take action and make the scanner stop detecting this", which leads to really stupid prioritization and silliness.
Yup. Almost every single time, NVD came up with ridiculously inflated numbers without any rhyme or reason. Every time I saw their evaluation, it lowered my impression of them.
Every month when there is a new Chrome release, there is a handful of CVSS 9.x vulnerabilities fixed.
I'm always curious about the companies that require vendors to report all instances where patches to CVSS 9.x vulnerabilities are not applied to all endpoints within 24 hours. Are they just absolutely flooded with reports, or does nobody on the vendor side actually follow these rules to the letter?
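For readers wondering where those "9.x" numbers come from: a hedged sketch of the CVSS v3.1 base-score arithmetic (scope-unchanged case only), using the published metric weights. The vector at the bottom is the classic network/low-complexity/no-privileges/high-impact combination that produces the 9.8 you see in so many browser advisories.

```python
import math

# CVSS v3.1 published metric weights (scope-unchanged values for PR).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Spec-defined round-up to one decimal (integer math avoids float artifacts)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.1 vector."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac] * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

Note how coarse the formula is: it says nothing about whether an exploit exists or who actually runs the software, which is part of why "patch every 9.x in 24 hours" policies generate so much noise.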
The deluge of new security reports is somewhat of a pain in the butt for those of us who wrote notable open source software decades ago that is still in use. I recently got about a dozen reports from one reporter, and they look to be AI-assisted.
Long story short, the reports were things like “If your program gets this weird packet, it takes a little longer than usual to free resources”. There was one supposed “packet of death” report which I took seriously enough to spend an afternoon writing a test case for; I couldn’t reproduce the bug and the tester realized their test setup was broken.
There seems to be a lot of pressure for people to gain status by claiming they broke some old open source project, to the point that people like me are getting pulled out of retirement to look at issues which are trivial.
The NVD was an absolutely wretched source of severity data for vulnerabilities, and there is no meaningful impact on vendors/submitters supplying their own CVSS scores, other than that it continues the farce of CVSS in a reduced form, which is a missed opportunity.
So, first off: NVD has been sliding for a long time now. This has nothing to do with mythos. The amount of money that goes into this program, relative to the output, is straight-up criminal.
For a very long time, the security world has basically given up on defense and relied on prioritizing CVEs instead. This is wrong on so many levels.
a) You can't scan for things you don't know exist.
b) Malware, like all the supply-chain issues in the past few months, doesn't have CVEs to begin with, yet it is still a massive security issue. That is to say, CVEs themselves don't really address everything. So you end up with IOCs, but those are also totally worthless the first time you see something. You have to have proactive defense if you actually care.
c) There are quite a few CWEs that you can outright prevent through various defensive means, but for whatever reason organizations won't. This is an organizational issue, not a technical one. This might be one of the main benefits of the CVE program: by tracking these issues, it starts to penalize organizations through insurance and other means, which is exactly how much of the security world operates.
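A minimal illustration of point (c), eliminating a vulnerability class by construction rather than scanning for instances of it: CWE-89 (SQL injection) simply cannot occur when queries are parameterized. The table and payload below are made up for the demo.

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload. Concatenated into the SQL string, the
# OR '1'='1' clause would match every row.
attacker_input = "alice' OR '1'='1"

# Parameterized query: the payload is bound as inert data, never parsed
# as SQL, so the whole CWE-89 class is off the table.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] — the payload matches no user name
```

The organizational point stands: nothing technical stops a team from mandating this pattern everywhere; it's policy and incentives that decide whether it happens.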
I'm cautiously optimistic that the world is going to start looking at stronger proactive defensive measures rather than relying on this reactive scanning approach.
Enriching does a few things, but the main ones are adding CVSS information and CPE information.
CVSS (risk) is already well handled by other sources, but CPE (what software is affected) is kind of critical. I don't even know how they're going to focus enrichment on software the government uses without knowing what software the CVEs are in.
MITRE used to issue CVEs within 24 hours. I am going on 4 months now with no follow-up, and no way to tell them GitHub issued a CVE already… I'm pretty sure they were just rubber-stamping before. Considering that disclosure should normally be coordinated with maintainers, third parties like MITRE don't seem to have much to offer or much to gain, other than being a bottleneck.
I'm close to a security MVP for the EU Parliament, listening at a weekend BBQ to how stupid and pointless the vast majority of CVEs are, and how stupid and pointless the majority of reports are. Thank god someone wants to put an end to this.
The majority of researchers don't care how important the bug is; everyone wants something to put on a CV, and they get paid extra by companies for finding bugs in SAP or Salesforce that will never, ever be used for anything.
Pointless and moot, just generating noise. Like 90% of the whole infosec sector.
At least that's what I understood from discussions with someone who has many nations' security at stake at work.
> Going forward, NIST says its staff will only add data—in a process called enrichment—only for important vulnerabilities.
Now, I am not saying I disagree with everything here, mind you; I guess everyone can agree that CVEs range in severity. But then the question is ... what is the point of an organisation that is cut down to handling, say, 1% of CVEs and ignoring the rest? Why have such an organisation at all?
I don't have enough data to conclude anything, but at a superficial glance it kind of seems like a cut to standards or efficiency.
"Enrichment" apparently is their term for adding detailed information about bugs to the CVE database.
Maybe it wasn't in English or something.