Are Network Security Devices Endangering Orgs With 1990s-Era Flaws? (csoonline.com) 57
Critics question why basic flaws like buffer overflows, command injections, and SQL injections are still being exploited and "remain prevalent in mission-critical codebases maintained by companies whose core business is cybersecurity," writes CSO Online. Benjamin Harris, CEO of cybersecurity/penetration-testing firm watchTowr, tells them that "these are vulnerability classes from the 1990s, and security controls to prevent or identify them have existed for a long time. There is really no excuse."
Enterprises have long relied on firewalls, routers, VPN servers, and email gateways to protect their networks from attacks. Increasingly, however, these network edge devices are becoming security liabilities themselves... Google's Threat Intelligence Group tracked 75 exploited zero-day vulnerabilities in 2024. Nearly one in three targeted network and security appliances, a strikingly high rate given the range of IT systems attackers could choose to exploit. That trend has continued this year, with similar numbers in the first 10 months of 2025, targeting vendors such as Citrix NetScaler, Ivanti, Fortinet, Palo Alto Networks, Cisco, SonicWall, and Juniper. Network edge devices are attractive targets because they are remotely accessible, fall outside endpoint protection monitoring, contain privileged credentials for lateral movement, and are not integrated into centralized logging solutions...
[R]esearchers have reported vulnerabilities in these systems for over a decade with little attacker interest beyond isolated incidents. That shifted over the past few years with a rapid surge in attacks, making compromised network edge devices one of the top initial access vectors into enterprise networks for state-affiliated cyberespionage groups and ransomware gangs. The COVID-19 pandemic contributed to this shift, as organizations rapidly expanded remote access capabilities by deploying more VPN gateways, firewalls, and secure web and email gateways to accommodate work-from-home mandates. The declining success rate of phishing is another factor... "It is now easier to find a 1990s-tier vulnerability in a border device where Endpoint Detection and Response typically isn't deployed, exploit that, and then pivot from there" [says watchTowr CEO Harris]...
Harris of watchTowr doesn't want to minimize the engineering effort it takes to build a secure system. But he feels many of the vulnerabilities discovered in the past two years should have been caught with automatic code analysis tools or code reviews, given how basic they have been. Some VPN flaws were "trivial to the point of embarrassing for the vendor," he says, while even the complex ones should have been caught by any organization seriously investing in product security... Another problem? These appliances have a lot of legacy code, some of it 10 years old or older.
Attackers may need to chain together multiple hard-to-find vulnerabilities across multiple components, the article acknowledges. And "It's also possible that attack campaigns against network-edge devices are becoming more visible to security teams because they are looking into what's happening on these appliances more than they did in the past... "
The article ends with reactions from several vendors of network edge security devices.
Thanks to Slashdot reader snydeq for sharing the article.
Nobody wants to look at legacy source code (Score:5, Informative)
The only people willing to read through 1990's-era source code these days are the hackers :)
Re: Nobody wants to look at legacy source code (Score:5, Insightful)
Normally, developers are focused on making the product do something, but security is the inverse: it's making sure the product cannot do some things.
It's difficult enough to hire good developers who can make products that do stuff, but hiring ones who can ensure a product doesn't do anything bad requires finding people who really know their shit and have the imagination to identify all the things a product shouldn't do.
Likewise, organizational leadership, project management, QA, etc, have got to be bought into it.
Re: Nobody wants to look at legacy source code (Score:4, Insightful)
Normally, developers are focused on making the product do something, but security is the inverse: it's making sure the product cannot do some things.
In addition to your points, the customer (in general) has no way of knowing if a product is secure or not. It's beyond their capability. So spending time and money to make a product secure reduces profits.
Re: (Score:1)
I disagree. Having some competent sysadmin read through the last year of security alerts gives you a nice picture of whom to avoid. The main question is whether anybody is left. But then a competent sysadmin can still do it by themselves.
My guess is that most of the MBA idiots in C-level positions think they can run IT without that competent sysadmin or while having far too few of those.
Re: (Score:1)
Indeed. It is orders of magnitude harder to make secure systems and software than ones that just "do something". In addition, they are also orders of magnitude harder to test. And even for "do something", it seems testing is not done competently anymore these days; see the CrowdStrike disaster.
The bottom line is that secure software and systems are at the very top of the difficulty range. But in order to have a product that does not make things worse, you need people that can do it, working under conditions t
Re: (Score:2)
Why would anybody sane mod that down? Ah, I see. Just answered my own question.
Re: (Score:2)
Yep. I don't know how many times I've seen "hack like it's 1999" or words to that effect in security talks.
Also, many security devices (and security apps) are secure by executive fiat, not through actually being, you know, secure. It's a security appliance, it has to be secure, sez so in the name!
Beg for it, pledge. Beg. (Score:2)
The only people willing to read through 1990's-era source code these days are the hackers :)
At least the AI pledges will know what kind of shit work is coming.
Make ‘em gag chugging a forty of Clippy's code with a Microsoft Bob chaser. Make those little shits earn a firewall fireball.
You got it wrong. (Score:5, Informative)
The only people willing to read through 1990's-era source code these days are the hackers :)
This isn't source code from the 1990s; that's the era of the class of vulnerabilities they are finding. In short, they are finding the most basic errors making it into production in newly produced code.
Re: You got it wrong. (Score:4, Interesting)
Maybe it's a case of the experienced developers having retired or been replaced for being too expensive, and the new generation doesn't have the knowledge or experience and is having to learn it all again the hard way.
Re: (Score:2)
The "experienced developers" have created those old security holes in the first place.
Re: (Score:2, Interesting)
Yes. This is extreme incompetence in the whole organization. It starts with incompetent "managers" getting hired, and they then hire incompetent "engineers" and, on top of that, create unsuitable working conditions. It really is a whole crap-fest of incompetence all around. All to make a few more bucks in the short term.
Mandatory XKCD (Score:1)
https://xkcd.com/2347/ [xkcd.com]
Re: (Score:3)
It
Re: (Score:2)
Security-wise, isn't it a good idea to rewrite things every so often, so at least some guy that quit 25 years ago isn't the only one that knows how it works?
No.
No, publicly traded corporations are. (Score:4, Interesting)
Blaming the devices is like blaming soup for being too hot. The actual culprits are the publicly traded corporations that are selling the devices. People can hand wave about how making "perfect code" is "impossible" but secure code doesn't have to be perfect.
The basic problem is that profit is seen as the priority which is why unfinished products are pushed out before they are ready and "old" products are rapidly abandoned. The only way to get something reliably from a publicly traded corporation is if it's the most profitable option for them. Security is something you will never be able to get from a publicly traded corporation.
Re: (Score:2)
I mean, non-perfect code that still isn't awful would, for example, not back up firewall configs in plain text to the cloud.
There is also a moral difference between missing a flaw and not bothering to look for flaws in the first place.
I would bet dollars to pennies that quality control these days is shoddy (because it's expensive) no matter the field. I wouldn't be surprised if they built nuclear plants with internet-facing control software based on shady git packages.
These days, nothing really surprises me anymore.
Re:No, publicly traded corporations are. (Score:4, Insightful)
And the real problem is that they have zero liability, zero regulation, zero standards that need to be followed and zero qualification requirements for the people that work on these systems.
History shows us that any engineering discipline needs a few really bad catastrophes to grow up. IT is no different. Better hold on to your hats.
Re: (Score:2)
Blaming the devices is like blaming soup for being too hot. The actual culprits are the publicly traded corporations that are selling the devices. People can hand wave about how making "perfect code" is "impossible" but secure code doesn't have to be perfect.
The basic problem is that profit is seen as the priority which is why unfinished products are pushed out before they are ready and "old" products are rapidly abandoned. The only way to get something reliably from a publicly traded corporation is if it's the most profitable option for them. Security is something you will never be able to get from a publicly traded corporation.
You keep harping on "publicly traded" as if a privately-owned corporation is any different. The focus on profit as the primary concern is the same. The burning urge to ship based on cash-flow concerns even if a product isn't ready is the same.
I don't disagree with your premise, just the odd repetition of the qualifier.
Re: (Score:2)
You keep harping on "publicly traded" as if a privately-owned corporation is any different.
Yes, because they are different. Privately-owned corporations may be better or worse but it's entirely dependent on the goals of the leadership. With publicly-traded corporations, the goals are already known: profit over everything else.
The focus on profit as the primary concern is the same.
No, it's not the same, because not all executives in privately-owned corporations have to worry that a quarterly earnings report could oust them from their position. Some privately-owned corporations mirror the behavior of publicly-traded corporations, but not all.
I don't disagree with your premise, just the odd repetition of the qualifier.
The d
Full OS devices (Score:2)
How many of the devices which were breached have a full OS on them?
How many of the devices needed a full OS on them and not just a small part of the OS to do the device's job?
It's probably cheaper and faster to land Linux on a device and then customize it than it is to create dedicated bare-metal software containing only the bare minimum of code the device needs.
Corporate most likely calls in with a "We need it yesterday."
Re: (Score:2)
How many of the devices needed a full OS on them and not just a small part of the OS to do the device's job?
Actually, for what they are doing, they do need a full OS. I honestly don't know how you couldn't figure that out on your own.
Re: (Score:2)
Somewhat disagree. Shipping a full Linux system for a specialist task is not needed and opens a larger attack surface.
Re: (Score:2)
You should take a class on operating systems so that you understand what an OS really is. It can be extremely restrictive and minimalistic, but you still need a full OS because there are many tasks that must be handled simultaneously. You can argue that the current OSes used are too expansive, but arguing against needing an OS is idiotic.
Too few good programmers (Score:4, Interesting)
I teach students how to avoid these kinds of flaws in my basic programming courses. Most students don't understand the importance, or don't care, or are actually incapable of avoiding these issues in their programs.
It's yet another aspect of a well-known issue: We have a massive demand for software, but very few programmers are actually competent. I've taught in high-quality degree programs: maybe 10% of the students are really good, and another 30%-40% could contribute competently - as long as they are supervised by someone good. Those are self-selected students in high-quality degree programs.
In less technical degree programs, where I also teach some programming courses, I feel fortunate to have 1 or 2 students who have any real potential. The others learn to copy-and-paste (or, now, use AI), without any real understanding of what they are doing. These make up the vast majority of programmers out there, and they are the reason why injection attacks are still a thing.
FWIW: This is especially true in Asian software sweat-shops: rooms full of people pasting in code with no clue what they are doing, while their boss walks around looking over their shoulders, telling each person what their next task is. Push out code, fast and cheap, that's all that matters.
Re: (Score:2)
Every webmonkey is capable of only ever using SQL prepared statements (which was the fix for CVE-2025-25257, for instance). While trying to find queries which are subject to injection has false positives and negatives, the much simpler rule "only use prepared statements" does not. Try to suggest that solution to the average webmonkey and they start complaining they don't need to be constrained like that, very much like C programmers in that respect.
You need to start teaching management all programmers are habitu
Re: (Score:2)
Try to suggest that solution to the average webmonkey and they start complaining they don't need to be constrained like that
That's fine, they don't need to use prepared statements if they don't want to, but they do need to explain what method they use to prevent SQL injection attacks. There are other methods. If they don't have a coherent answer, they need to use prepared statements because that works.
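To spell out why that rule works, a minimal sketch (Python's sqlite3 purely as an illustration; the users table and the injected value are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Vulnerable: the input is spliced into the SQL text, so the quoting
    # trick changes the structure of the query itself.
    unsafe = "SELECT role FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(unsafe).fetchall())  # [('admin',)] -- row leaked

    # Prepared statement: the value travels separately from the query
    # text, so it can never become SQL syntax.
    safe = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # []

The nice property for reviewers is that the rule is greppable: any query built by string concatenation is a red flag, regardless of what the surrounding code does.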
Re: (Score:2)
Even with explanations and careful thought and careful everything, there are a ton of old and new ways to get it subtly wrong, especially after a couple of edits, which they will stumble upon by accident eventually.
It's not fine, it's wrong.
Re: (Score:2)
Re: (Score:2)
There's only decades worth of proof that it's incredibly easy to screw up ... I am sure you are the exception to the rule, but other people will be allowed to edit your code.
As I said in the first place "Try to suggest that solution to the average webmonkey and they start complaining they don't need to be constrained like that, very much like C programmers in that respect." ;)
Re: (Score:2)
If a new guy comes in and thinks he has a better way, give him the requirements and see what he comes up with. Chances are he'll end up using prepared statements (or any of the many alternatives that are just as good), but then he'll understand why.
Re: (Score:2)
You would think that, by now, parsers would simply refuse to accept any statement that is vulnerable to injection without suitable inspection, would have automatic de-tainting, and would use other measures to guard against well-known techniques.
But they don't.
Why is that?
Re: (Score:2)
Text based APIs are fundamentally incompatible with human nature and automated checking. Infinite easy ways of doing it wrong, more than thought of in your automated checker, especially if you don't want to be buried in false positives.
I've only ever found one person who is also right about this: Erik Poll, in "Strings Considered Harmful". SQL is wrong, shell is wrong, printf is wrong, HTML is wrong, all wrong, and almost no one can even recognise it. Just as wrong as defaulting to pointer based programming,
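The shell case makes the same point without any SQL. A rough sketch (Python's subprocess; the hostile filename is invented):

    import subprocess

    filename = "report.txt; rm -rf ~"  # attacker-controlled value

    # Text-based API: the string is handed to a shell, which parses the
    # semicolon as a command separator.
    # subprocess.run("cat " + filename, shell=True)  # would also run rm

    # Structured API: arguments are a list, so the whole string is just
    # one oddly named filename and never reaches a shell parser.
    subprocess.run(["cat", filename], check=False)

Same pattern as prepared statements: keep data out of the channel that carries the syntax.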
Re: (Score:2)
Interesting. Do you have advice on how to judge the quality of a piece of software, without seeing the source code, but being able to ask other questions?
Re: (Score:2)
Interview the coders while being a competent security coder yourself. I have done that several times. It works nicely.
Re: (Score:2)
Interview the coders while being a competent security coder yourself. I have done that several times. It works nicely.
I was hoping for other proxies, but yes, I too see that's a good method. I wouldn't rate my own coding skills, but when I've had a chance to speak to people, I ask questions like: so the pentest revealed this bug two years ago, which you fixed back then, but now the latest test this year reveals the same class of bug again, so what happened, did this code not exist back then? And they say, oh yes, it existed. And I'm like, so didn't it occur to you back then to search your code for the same class of bug in o
Re: (Score:2)
Yes. You get a feel for what their mind-set and insight-level is and what conditions they work under. Personally, I do not think you can really do anything else to get good results.
Re: (Score:2)
First, you'd have to define "quality".
It's harder than it sounds.
Re: (Score:2)
The kinds of common errors which the article suggests shouldn't be happening. Like the 2023 MOVEit snafu.
So the simple version of quality, not the Zen and the Art of Motorcycle Maintenance version.
Re: (Score:2)
ZAAM was specifically what I was thinking of when I posted.
Props to you (;
Re: (Score:2)
Cool :) I must try to reread that again one day.
Re: (Score:2)
It's yet another aspect of a well-known issue: We have a massive demand for software, but very few programmers are actually competent.
That is one part of the problem. The other is no liability, no regulation, no standards that need to be followed and no qualification requirements. Mess up the calculations for a bridge and it collapses? If you were missing the qualifications, you might well go to prison and rightfully so. Make an utterly dumb testing mistake that costs your customers $20B? Nothing happens.
Outsourced and AI generated code (Score:4, Interesting)
Wrap the legacy device in a Linux wrapper. (Score:1)
When I interview programmers (Score:4, Insightful)
I always ask two questions:
1. If you were a hacker trying to break into a website using SQL injection, what would you type, exactly?
2. How do you prevent SQL injection, exactly?
Though I mostly hire senior developers with years of experience, who claim to be great database developers, only about 25% can answer these two questions correctly.
On Question 1, they usually tell me they would try typing a SQL query in a data field. But when I press them to tell me how they get it to be treated as a command rather than as data, they usually don't know that it's really just an apostrophe followed by a semicolon.
On Question 2, they usually tell me the data should be sanitized, to prevent SQL statements from being entered. When I get that answer, I ask them what they would do, if the field is *intended* to be a place to store SQL statements (such as a Jira comment field). Most have no idea that using query parameters is *the* way to protect against SQL injection.
If senior developers don't know these basic facts about SQL injection, it's no wonder our devices and systems are rife with vulnerabilities. Personally, I won't hire a developer who can't answer those two questions correctly.
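For what it's worth, a minimal sketch of the answer to question 2, including the awkward case where the field is supposed to hold SQL-looking text (Python's sqlite3 purely as a stand-in; the comments table is hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (body TEXT)")

    # The field is *meant* to store SQL text, so "sanitizing" apostrophes
    # and semicolons away would corrupt legitimate data.
    comment = "Try: SELECT * FROM users WHERE name = 'x'; -- and report back"

    # A query parameter keeps the value outside the statement's syntax,
    # so the apostrophes and semicolon are stored verbatim, never executed.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (comment,))
    print(conn.execute("SELECT body FROM comments").fetchall())

No sanitizing and no escaping heuristics; the driver keeps data and syntax apart.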
Re: (Score:2)
Jesus!
I knew your two answers precisely and:
1). I was never a great developer to begin with.
2). I last worked in this area in the 1999-2001 era.
3). I'd never get a job as a developer these days because I'm over 50 and my technical skills are way out of date.
FWIW, I tossed in the towel when Hibernate became a thing and I couldn't deal with that level of abstraction. I didn't want to do Java that way. Said "fuck this". Quit my job as a developer and got a job as a technical writer.
Re: (Score:2)
Yeah, it shocks me too, how little some "senior" developers actually know.
I too dislike Hibernate, and for that matter, every ORM. On one hand, ORMs *do* give you SQL injection protection for free, because everything they do is parameterized. But that's about the only good thing they do. They slow down performance because the queries they write are atrocious. They are almost impossible to debug if they have any complexity. AND they take away your ability to develop your code in independent layers. If you to
Ah, yes, "insecurity by security device" (Score:2)
I created part of a lecture on that a few years back. I thought initially I would have to go through a month or two of alerts to find enough. Turns out I had enough after a week, and only went to two weeks for a bit of diversity.
Makers of "security" devices are really the most incompetent fucks when it comes to secure coding and system design. Without liability, regulation and standards, this will not change.
Prioritizing time-to-market and cost over security (Score:2)
This reads like an ad for SASE solutions (Score:2)
One area where I've seen SASE solutions fall short compared with, say, Palo Alto's GlobalProtect VPN is authentication: GlobalProtect, *in addition* to a SAML flow, will also do a client certificate validation flow. And this works with hardware-backed private keys on TPMs in Windows (using the Microsoft Platform Crypto Provider). SASE soluti
We had one single vulnerability (Score:2)
"We're all fine here, how are you?" (Score:2)
If you look at the original article (linked to in the post), it contains responses by a few of the security companies whose software was found faulty. Predictably, they say something like "We take security seriously, and we're working hard every day."
Until someone gets sued for malpractice (figuratively), this probably won't change.