At Least One Open Source Vulnerability Found In 84% of Code Bases, Report Finds (csoonline.com)
L.Kynes shares a report from CSO Online: At a time when almost all software contains open source code, at least one known open source vulnerability was detected in 84% of all commercial and proprietary code bases examined by researchers at application security company Synopsys. In addition, 48% of all code bases analyzed by Synopsys researchers contained high-risk vulnerabilities, which are those that have been actively exploited, already have documented proof-of-concept exploits, or are classified as remote code execution vulnerabilities. The vulnerability data -- along with information on open source license compliance -- was included in Synopsys' 2023 Open Source Security and Risk Analysis (OSSRA) report (PDF), put together by the company's Cybersecurity Research Center (CyRC). "Of the 1,703 codebases that Synopsys audited in 2022, 96% of them contained open source," adds L.Kynes, citing the report. "Aerospace, aviation, automotive, transportation, logistics; EdTech; and Internet of Things are three of the 17 industry sectors included in the report that had open source in 100% of their audited codebases. In the remaining verticals, over 92% of the codebases contained open source."
Count has a bug (Score:1)
"Aerospace, aviation, automotive, transportation, logistics; EdTech; and Internet of Things are three of the 17 industry sectors included in the report that had open source in 100% of their audited codebases. In the remaining verticals, over 92% of the codebases contained open source."
Maybe it has to do with mixing spelled-out numbers (three) with numerals (17). Did they use an open-source library to digitize things?
Re: (Score:1)
That's a style guide recommendation. It really is easier to read small numbers as words rather than numerals when mixed in with text. Though I would've written seventeen as well, had I written something as fluffy as this.
I'm not sure what this fluff piece is trying to say. "Almost every commercial code base uses at least some open source, and that open source has bugs in it" seems like a stupidly trite thing to say. Is this the sort of thing CSOs are supposed to faff about with?
Re: (Score:2)
Sounds like there is a lot of theft of open source code by commercial software vendors, and many companies need to be sued over it.
Re: (Score:2)
The reference was to using "three of 17" after listing seven (7) items. Which four (4) don't count?
But whatever.
Re: (Score:2)
OpenAI: consider applying it to aspects of software. For example: standards, documentation, and testing.
Question (Score:3)
Re: (Score:3)
Different attack modes, though, if we're thinking about malicious supply chain attacks as opposed to normal bugs.
If a bad guy has worked out a plausible-looking bit of code that introduces a hidden vulnerability, it's way easier to get it into a public GitHub repo than inside the black box of a proprietary system.
Re:Question (Score:4, Interesting)
plausible-looking bit of code that introduces a hidden vulnerability, it's way easier to get it into a public GitHub repo than inside the black box of a proprietary system.
You're making two basic assumptions here.
First, that the up-front cost of getting an employee into position is higher than the cost of getting a contributor into position.
Second, that the code review for the pull request carrying the vulnerability is equally difficult to pass at the company, compared to the review process on GitHub.
I don't think the answer is obvious. I can think of ways to measure a bunch of things related to both assumptions.
Re: (Score:3)
As others mentioned, this is an apples to oranges comparison.
Basically anyone can commit code to open source libraries. Granted, they need to gain trust, or gain someone else's credentials. But both are easy for many bad actors, especially state actors. And even if ultimately caught, they can easily claim innocence.
Remember the major OpenSSL bug that broke the Internet?
Re: (Score:3)
Basically anyone can commit code to open source libraries.
Just as anyone can commit code to proprietary libraries. In Open Source you can at least track the change and learn who made a specific commit; with proprietary libraries you take the software "as is" and can't possibly know who wrote any specific line of code.
Re: (Score:3)
I don't think closed source code is that much better protected. Heartbleed was caused by a mistake by a PhD student that passed code review. I guarantee you that the same thing happens all the time in closed source projects.
Maybe you could have a case, if the big vulnerability was maliciously injected, that it's easier for black hats to submit code to an arbitrary open source codebase than it would be to submit to a proprietary codebase. However, in practice, they find it safer to just watch the flood of mistakes to
Re: (Score:2)
I don't think closed source code is that much better protected. Heartbleed was caused by a mistake by a PhD student that passed code review. I guarantee you that the same thing happens all the time in closed source projects.
Regardless of any unnoticed flaws in submitted code, heartbeats are useless for stream-based TLS sessions. The real failure from my perspective was not the presence of the flaw but that something intended for DTLS was not limited to DTLS. Bugs get missed in code reviews all the time, yet shit like this, especially in a security stack, should have been flagged.
Seems like an ad (Score:4, Insightful)
News at 11.
Re: Seems like an ad (Score:3)
Yup. Definitely an ad. This was almost as bad as when Slashdot kept posting stories from one particular writer who was an employee of a security firm.
Re: (Score:1)
It is not an ad. I was just captivated by this report. The report could be better, but I like that someone is digging into this kind of data.
Re: (Score:2)
Indeed. It is fuzzy and appeals to emotion while pretending to give information. A typical ad aimed at (not so great) tech people.
To be honest, I did not really understand what they were saying. And I am an IT security expert.
Known vulns (Score:3)
The takeaway from this is that they are reusing third-party code but have not updated that code to fix known vulnerabilities, even when updates are available.
We have no idea how many as yet unknown vulnerabilities might exist in their code.
Re: (Score:3)
A great many of the projects are dinged by virtue of jQuery. Web code has a nasty habit of copying in a version of the library and then never touching it again. Note that jQuery's popularity peaked when about the only option you had was to manually download the copy you wanted and save it on your site; you didn't even have npm as an option to keep you apprised of the state of your dependencies versus the repository.
Re: (Score:2)
Yep. A pretty hard problem when using FOSS components is keeping track of all direct and indirect dependencies. Sure, stuff that comes with your Linux distro is usually OK, but step outside of that, e.g. by using a lot of web stuff (calling these things "frameworks" is really giving them too much credit), and that does not work anymore.
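As a rough illustration of the bookkeeping involved, here is a minimal sketch, assuming an npm lockfile in the v2/v3 format (which records every installed package under a top-level "packages" map); the file name and output format are just for illustration:

```typescript
// list-deps.ts: enumerate every direct and transitive dependency
// recorded in an npm lockfile (v2/v3 format).
import { readFileSync } from "fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
const packages: Record<string, { version?: string }> = lock.packages ?? {};

for (const [path, meta] of Object.entries(packages)) {
  if (path === "") continue; // "" is the root project itself
  // "node_modules/a/node_modules/b" means b is a transitive dep of a
  const name = path.replace(/^.*node_modules\//, "");
  const depth = path.split("node_modules/").length - 1;
  console.log(`${"  ".repeat(depth - 1)}${name}@${meta.version ?? "?"}`);
}
```

Run that against any non-trivial web project and the indirect dependencies usually outnumber the direct ones many times over, which is exactly the tracking problem.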
No useful information in this article. (Score:1)
No examples, no reference to CVEs, just a bald assertion of scary numbers.
Re: No useful information in this article. (Score:2)
Specific CVE numbers are listed in the PDF report, which is linked in the story.
Re: No useful information in this article. (Score:4, Informative)
You are right, and the CVEs are:
CVE-2020-11023: If you use jQuery to load page content from an untrusted source, it may do bad things. I don't know that I've ever seen a web page deliberately load page content from an untrusted source, and most of the damage untrusted code could do should be mitigated by CORS and good practices anyway. If a developer would be vulnerable to this being a problem, they almost certainly have even bigger problems.
CVE-2019-11358: Similar to the above, but even more insane, as you'd have to be loading and executing untrusted JavaScript in the first place before you were at risk. The malicious JavaScript has no need to mess with this vulnerability; it is already running with as much privilege as it could ever gain.
The third on their list? Rejected by NIST (that's why it only has a BDSA identifier rather than a CVE). So NIST, which gives out CVEs even for some particularly stupid stuff, rejected whatever the hell it is that Synopsys claimed as a big scary top-three vulnerability.
CVE-2015-9251: OK, so you have an XSS, relatively legitimate. It does still require the victim site to deliberately load 'data' from a malicious third-party site without declaring a dataType, which triggers eval rather than the JSON functions. I'd wager that the vast majority of scanned codebases aren't even in theory at risk, as it is unlikely they are pulling data from untrusted sources, and/or they may be passing the dataType argument, which negates the behavior. BDBA doesn't know whether the scenario is applicable to the codebase at hand, either by not calling it in a dangerous context or by calling it in a safe way. In a way, it's like marking glibc an unacceptable security risk because it implements strcpy.
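To make that last one concrete, here is a sketch of the CVE-2015-9251 footgun as I understand it (jQuery before 3.0.0; the feed URL and render function are made up for illustration):

```typescript
// With old jQuery, a cross-origin request with no dataType lets the
// response's Content-Type decide how the body is handled, so a server
// answering "text/javascript" gets its body executed via eval.
import $ from "jquery";

const render = (data: unknown) => console.log(data); // placeholder

// Risky on jQuery < 3.0.0: no dataType, so a hostile server controls
// how the response is interpreted.
$.get("https://third-party.example/feed").done(render);

// Safer: pinning dataType forces the JSON parser and negates the behavior.
$.ajax({ url: "https://third-party.example/feed", dataType: "json" })
  .done(render);
```

Note that even the risky call requires the site to be fetching from a source an attacker controls in the first place, which is the point about most codebases not being at risk even in theory.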
Of course, broadly speaking, there's no sane reason to run jQuery in this day and age; it is a pretty crusty library that lost its relevance as Internet Explorer died.
Re: (Score:1)
CVE-2020-11023 is specifically about passing untrusted HTML — even after sanitizing it — to jQuery. This is supposed to be a safe operation and was fixed by the jQuery developers in version 3.5.0 <https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/ [jquery.com]>.
My point is: CVE-2020-11023 is an example of a valid security issue and your summary is misleading. Of course, one should not blindly use untrusted content, but it should be safe once mitigated by good practices such as sanitization.
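To make the failure mode concrete, here is a minimal sketch of the pattern the CVE describes; sanitize() and editorInput are hypothetical stand-ins, and the htmlPrefilter detail is from the jQuery 3.5.0 release notes:

```typescript
// On jQuery < 3.5.0, HTML containing <option> elements could slip past
// a sanitizer and still end up executing script, because jQuery's
// htmlPrefilter rewrote self-closing tags in a way that could
// reintroduce markup the sanitizer had already neutered.
import $ from "jquery";

declare const editorInput: string; // untrusted rich-text content
declare function sanitize(html: string): string; // allowlist-based, hypothetical

const clean = sanitize(editorInput);
// Supposed to be a safe operation after sanitization; before 3.5.0 it
// was not, which is what makes this a valid security issue.
$("#content").html(clean);
```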
Re: (Score:2)
My point is that the risk isn't in proportion to the scare. For the headliner there, you have to deliberately fetch a string from a malicious third party, use a jQuery DOM manipulation function with that malicious string, and not have a content security policy in place that would block it. This seems an unlikely chain of events for a sample codebase to fall into, even if the possibility shouldn't exist.
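As a sketch of that last link in the chain, a server can ship a policy that blocks both the injected script and the third-party fetch (the directives are standard CSP; the server and port are made up):

```typescript
import { createServer } from "http";

createServer((req, res) => {
  // script-src 'self' blocks injected inline script from executing;
  // connect-src 'self' blocks fetching strings from third-party origins.
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; connect-src 'self'"
  );
  res.end("<p>hello</p>");
}).listen(8080);
```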
Broadly speaking, I think if your security depends on sanitation, you have a problem. Sanitation suggests y
Re: (Score:1)
Thanks for clarifying. I agree with the points made in your elaboration.
I'll just note, my experience has involved allowlist-based sanitization of HTML content entered via a rich-text editor or loaded server-side from RSS or Twitter feeds. The sanitized HTML was stored in a database and later loaded via AJAX and injected into the DOM via jQuery's problematic DOM methods. The original code was written by colleagues, and my fix was to switch to JsonML [wikipedia.org] using a library I wrote that uses safe DOM methods and avoids them.
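For anyone unfamiliar, here is a minimal sketch of the JsonML idea (simplified: the attribute object is mandatory in this form, and a real implementation would also allowlist element and attribute names):

```typescript
// Markup as nested arrays ["tag", {attrs}, ...children], built with
// safe DOM methods so strings are always text, never parsed as HTML.
type JsonML = string | [string, Record<string, string>, ...JsonML[]];

function build(node: JsonML): Node {
  if (typeof node === "string") return document.createTextNode(node);
  const [tag, attrs, ...children] = node;
  const el = document.createElement(tag);
  for (const [name, value] of Object.entries(attrs)) {
    el.setAttribute(name, value);
  }
  for (const child of children) el.appendChild(build(child));
  return el;
}

// A string that looks like markup stays inert text:
document.body.appendChild(
  build(["p", { class: "note" }, "<script>alert(1)</script> is just text"])
);
```

The design point is that nothing in the pipeline ever calls an HTML parser on attacker-influenced strings, so there is no sanitizer to get wrong.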
Ad for Black Duck Binary Analysis (Score:5, Interesting)
Having used BDBA, I'm not particularly confident in their claims.
95% of findings are false positives. Its version detection fails and assumes a component has every vulnerability ever, and even if you pin down the version and it comes from a distribution that backports fixes, it still flags it as bad, because they don't track backporting by the likes of Red Hat. In one case it was rather insistent that one library was an entirely different one altogether, and flagged it as having vulnerabilities from 10 years before the library even existed. Even when it's technically a version where a vulnerability may apply, the issue is often tied to a compile-time option that no one enabled, so it isn't applicable.
Then there's the sorry state of CVEs in general: the majority are nothing to worry about. For example, the humble text editor vim has been hit with 'critical' vulnerabilities. Several of those amount to: if you have access to write executable files on Windows and access to run vim, you can have vim run those executable files. This is rated 'critical', but I can't *imagine* a scenario where you already have all those privileges and couldn't just directly run the file. Several others are similar: vim (note, an editor that expressly has a feature deliberately to allow external commands) had some vulnerabilities where you could run commands in unexpected ways, and those too were considered critical. So of course a tool like Black Duck will throw all sorts of warnings about vim, despite there being no actual incremental risk.
Re: (Score:2)
Or you're using OpenSSL in your Linux app and there's a vulnerability that only applies to a specific configuration when run on Novell.
These tools are great for finding open source code so you can ensure you're complying with the license terms.
They're trying to have a checkbox for being a security tool. If someone in your org is looking at that aspect, you'll spend far more time refuting why it's not a security issue than finding actual issues.
Re: (Score:2)
Indeed. Hard to fault them, it's a pretty good grift. No one would dare stand up and question whether it's a good use of time when that also means being accused of risking insecurity.
Even taking it at face value, the counterpoint would be: sure, it makes you waste your time 95% of the time, but what about the occasional actual finding; isn't it worth it?
Though if BDBA has real findings, you've got bigger problems in your build/sourcing/packaging process. Avoiding BDBA findings can lead to bad consequen
The primary issue is using/linking in old code (Score:1)
I've seen many niche enterprise apps that link in extremely old Windows .dll libraries (decades old) from other vendors.
The primary issue is using/linking in old code (open source or not doesn't really matter) and never maintaining or reviewing it.
This is less of an issue in maintained open source apps.
So you're saying you could find the bug... (Score:2)
Colour me surprised! It's almost as if you can't improve software if you can't look at it!
What does this even mean? (Score:2)