Tear Down the Firewall
lousyd writes "'What's the best firewall for servers?' asked one Slashdot poster. 'Give up the firewall,' answers Security Pipeline columnist Stuart Berman. By creatively separating server functions onto different, isolated servers and assigning them to a three-tiered system of security levels, his company has almost completely eliminated the need for (and headache of) network firewalls. 'Taking that crutch away has forced us to rethink our security model,' Berman says. The cost of the added servers is greatly reduced by making them virtual servers on the same machine, using Xen. With the new security-enhanced XenSE, this might become easier and more practical. What has you chained to your firewall?"
Band-aid (Score:2, Informative)
Re:Band-aid (Score:2, Insightful)
Which is what netstat -at and firewalls do...
Re:Band-aid (Score:2)
Firewalls are a perimeter defence, not something that makes much sense on a server itself.
If you've ever dealt with a somewhat sophisticated compromise like I have quite a f
Does SANE support the Scanmaker 4850 yet? (Score:5, Insightful)
And if you have processes running and listening on ports that you don't want or need, why are you running them?
Because the operating system that you run is incapable of turning them off, and no other operating system is compatible with a mission-critical application or hardware device?
Re:Does SANE support the Scanmaker 4850 yet? (Score:5, Insightful)
Also, firewalls are good for if you have networks which need to do a lot of internal talking on potentially hazardous ports, but don't want the rest of the world to talk on those ports. Think big application platforms.
Of course but (Score:2)
Re:Of course but (Score:5, Informative)
netstat -v -o -n -b -a
(you can omit -v for a quicker display)
NeoThermic
Re:Of course but (Score:2)
Actually, it would be more along the lines of "lsof -i -n"
Just knowing the ports won't tell you which process is responsible for them.
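For example, on a typical Linux box either of these shows every listening socket together with the process that owns it (assuming lsof or ss is installed):
# listening TCP sockets plus the owning process, numeric addresses only
sudo lsof -nP -iTCP -sTCP:LISTEN
# or, with the ss tool:
sudo ss -tulpn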
I'm sure there's a Windows equivalent if you use that
Sysinternals [sysinternals.com] has some very nifty freewares that give this info and more.
Still, I'd rather keep the firewall. I like granularity. What if I want to limit access to certain source IPs? Limiting them at the application level might still leave you open to buffer-overrun kinds of vulnerabilit
Re:Of course but (Score:2)
Re:Band-aid (Score:3, Interesting)
Re:ssh (Score:2)
I admit that the hosts files are much better in general, but I guess you could use both methods for an extra layer of security.
Re:Band-aid (Score:4, Insightful)
Re:Band-aid (Score:2)
Firewalls have nothing to do with processes running on computers. They are for filtering network packets.
The right way to solve the problem of rejecting incoming and outgoing requests is to make it easy to see which processes are accepting and making connections on which ports.
No, this is the right way of identifying which processes are sending and receiving information, not whether or not those p
Nice logic, but (Score:5, Insightful)
Re:Nice logic, but (Score:4, Insightful)
Re:Nice logic, but (Score:2)
Re:Nice logic, but (Score:3, Insightful)
Working in a bureaucracy, I've found that new rules are either ignored, or obeyed at the expense of attention to old ones. Time, attention, and willingness to compl
Re:Nice logic, but (Score:5, Insightful)
Well, you could control who the processes will listen to. There's no reason an internal web server should be visible to the entire Internet. Or even for publicly accessible sites, if all your customers are in the US it may make good sense to deny connections from, say, Romania.
Re:Nice logic, but (Score:3, Insightful)
OK, first issue. If you run any *significant* services, you have ports that need to be accessible by your machines, but nobody else's. The best example is database servers. My database runs on a separate machine. My webservers need to access it, but NOBODY else does. The database's access control is not enough; I don't even want anyone outside my network to see those ports, let alone try to muck with them.
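For the record, the rules involved are tiny whether they live on a dedicated firewall box or on the database host itself; a rough iptables sketch (the addresses and the PostgreSQL port are made-up examples):
# let only the two web servers reach the database port, drop everyone else
iptables -A INPUT -p tcp --dport 5432 -s 10.0.1.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -s 10.0.1.11 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP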
Second issue. There are always new exploits coming up for the software you *do* have to ex
Re:Nice logic, but (Score:2)
Re:Address translation (Score:3, Insightful)
If you only have one public ip address but more than one (virtual) server, you need a firewall or router.
Re:Nice logic, but (Score:4, Informative)
Yeah, but the problem is this: what if it's your firewall admin who screws up? Granted, it's better to leave a port open on ONE device than on twenty different ones, but it's still the same problem.
I admit I'm not impressed with their notion that the workstations should just be kept patched and users authenticated before allowing access to the servers.
Still, there is something to be said for this sentence from TFA:
"By accepting that our internal network isn't much safer than a hostile external network, we've created a more realistic security architecture."
And they also do this:
"We assign each user a central identity, which is authenticated and validated before accessing the internal DMZ. We use central directories to manage identity privileges and PKI certificates. Existing systems, such as Active Directory, allow for low-cost private certificate authorities where PKI isn't well-established. We also log and monitor the activity and enforce acceptable application behavior."
In other words, if the end users have to use PKI to get into the internal network, they're basically being treated like potential intruders themselves - which is how it should be, given that much hacking is done from INSIDE the network. If your end-users are treated the same as any script kiddie, you don't have any problems separating the two except via authentication. Although I still wonder if this layout would protect from a clever hacker who does manage to penetrate and fully compromise a workstation.
Still, it should be sufficient to keep the ordinary worms and viruses off the servers - as long as the worm or virus can't take advantage of a flaw in the basic network infrastructure.
And if you really ARE monitoring your network servers for bad behavior - with real human eyes instead of an IDS - instead of just paying lip service to the idea - you have the equivalent of a fully monitored system which is probably the best way to prevent intrusion. In other words, human guards AND electronics are the best security, not either one alone.
Re:Nice logic, but (Score:3, Insightful)
Besides, good security is layered, because each layer exponentially reduces the risk of a successful breach. Assuming that an operating system is safe from attack on a given port just because the system claims the p
Re:Nice logic, but (Score:3, Insightful)
Re:Nice logic, but (Score:3, Insightful)
Sigh... (Score:4, Interesting)
Or is this another Flavor of the Month event?
Re:Sigh... (Score:2)
The number 3 is meaningless without saying how many switches that's out of. In the past six months, I've probably seen more than three Cisco switches fail. However, that's in a deployment of close to two thousand Cisco devices... from the cheapest of the cheap [cisco.com] to the most expensive of any. [cisco.com] That said, devices usually fail for a reason. Maybe the closet has poor cooling (common) or maybe the
Re:Sigh... (Score:2)
Say, for example, PostgreSQL-specific error messages appear on a user's screen, or maybe some future version of the operating system breaks PostgreSQL but not SQL Server.
Or a non-technological leak, such as a loose-lipped staff member saying the wrong thing in front of the wrong person. Or worse yet a disgruntled ex-employee turning you in.
Seems like an incredible risk to take.
Nothing - not using one (Score:4, Informative)
Realistically, I'm not sure there is much more I can really do other than logging in and checking things out whenever I can (which is often).
It's worked well for me (so far), and I've had servers directly on the Internet since 1999. I got hit with Code Red on a server once.
"Simple" ACLS (Score:5, Interesting)
Personally, I've never found ACLs as easy (or as flexible) as other firewall solutions. But in any event, ACLs are firewalls, call them what you will....
Re:"Simple" ACLS (Score:2)
If you know what's behind your firewall (and if you're running a bunch of web servers, you better know what's there) then there's no need for a firewall.
Re:"Simple" ACLS (Score:2)
If you can't set the source port for an application, and only the destination port, you might be leaving much bigger holes than you would by throwing a firewall into the mix between subnets.
Re: (Score:2)
Firewalls aren't totally expendable (Score:5, Interesting)
Re:Firewalls aren't totally expendable (Score:2)
The article discusses that. They deliberately leave the workstations exposed to the Net. The SERVERS are protected, and application-level firewalls are also used. The advantage is that they don't have users continually frustrated at being unable to access various services on the Net due to the firewall blocking everything, and their admins have less work to do opening and closing ports for end users' special purposes, which presumably results in fewer configuration errors and fewer security holes.
The overall ef
Re:Firewalls aren't totally expendable (Score:3, Interesting)
The point is they INTEND for the workstations to be more exposed to the Net. This reflects the reality that perimeter security isn't working well. If you treat the workstations as if they're NOT secure, your security actually gets better because now you're dealing with the reality that most hacking is done from INSIDE the network - whether from internal users or compromised workstations doesn't matter.
Their security is reserved for the server tiers. The workstations are protected as well as possible using
What is XenSE? (Score:5, Informative)
What XenSE isn't:
* it's not Xen's "security issues team". It's not for patching exploits, etc.
What XenSE is:
* the "virtual machine monitor" equivalent of SELinux
* mandatory access control for virtual machines
- e.g. you might enforce some sort of information flow between virtual machines (e.g. "Top Secret" only talks to other "Top Secret")
* enforced from the very lowest levels of the system, so should be very trustworthy
The goal is that the complete XenSE system achieve a higher security rating than currently possible with SELinux alone. The initial prototype of the mandatory access controls has been supplied by IBM and is in the 3.0-testing tree right now. Fully achieving the project's security goals will take considerably longer (Xen 4.0 timeframe).
Re:What is XenSE? (Score:3, Interesting)
You won't need to wait for XenSE to achieve this, though - one of the Xen 3.x series will probably be able to do everything you want. A number of people are running a firewall in a separate virtual machine using Xen 2.0 (which can't run Windows). You're able to assign the network device directly to the firewall do
I keep my OpenBSD firewall up (Score:2)
Firewalls are needed only for leaky systems (Score:2, Insightful)
In other words, firewalls are of any use only if:
The tarp does nothing for a sturdy
I use a firewall to isolate networks (Score:5, Interesting)
So, the services are running so that I can use them from the inside (with any device on the inside, without mucking with ACLs, additional equipment aside from a switch, etc.) without having the services exposed to the outside.
Now, if you're running services which aren't being used by legitimate users at all...
Re:I use a firewall to isolate networks (Score:2)
I thought this would be done by NAT, with an internal network of nonroutable addresses. While a firewall box may also do NAT, they are quite separate functions.
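On Linux the two really are separate rule sets; a rough sketch (interface name and addresses are assumptions):
# NAT: masquerade the internal network behind the box's public address
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
# filtering: a separate decision, made in the FORWARD chain
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -m state --state NEW -j DROP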
Re:I use a firewall to isolate networks (Score:2)
The article deals with that:
"The servers and their respective applications sit in their own DMZ, protected by an Application-layer firewall. We organize servers into three tiers: The first tier consists of presentation servers such as Web and e-mail servers--these are the only servers accessible to end users. The second tier, made up of application and middleware servers, is in turn only accessible to the presentation servers. Finally, the third tier, consisting of the database servers, is only accessible
Re:Firewalls are needed only for leaky systems (Score:2)
you're defending a grossly insecure system (Windows?) - Adequately securing a system and then replicating that security policy across a disparate group of servers all serving different functions is not an easy task. On top of that, systems and software have bugs, Windows, Linux, BSD, *nix, all have exploits released at some point in time. With a firewall you can start by reducing your risk by limiting the traffic ON YOUR NETWORK and may be able to do some packet inspection o
Re:Firewalls are needed only for leaky systems (Score:2)
For enforcing a policy: If you control all the hosts, why would you care about people scanning you? An unsuccessful scan is uninteresting in itself, and a successful one is o
Re:Firewalls are needed only for leaky systems (Score:5, Insightful)
OK, so then why did you mention that point if you are going to subsequently shoot it down with one example?
firewalls do nothing to protect services which are already visible to the network
Yes, higher-end firewalls can also scan the traffic on those open ports looking for exploits (ala IDS firewalls).
And if you want to use the firewall to block off unneeded services, why in the hell are you running them in the first place?
Are you serious? I have tons of services running on various servers that I do not want made available to the public, yet need to be available to (a) the other servers behind the firewall, and (b) trusted users that connect over our VPN... which, incidentally, is another function of a good firewall.
The article and your post are pure lunacy. It is not that hard to maintain a firewall, and as long as you plan your internal networking with the assumption that the firewall will not stop a really good hacker, it is just one more layer of security.
Re:Firewalls are needed only for leaky systems (Score:3, Informative)
And why? So you know there are exploits being run against you? And this helps how? Your goal is to prevent exploits from being SUCCESSFUL, not from being run against you, since they will be run anyway. Check your firewall logs long enough for a big enough company, you'll see every exploit there is. So what?
"I have tons of services running on various servers that I do not want made available to the public, yet need to
Re:Firewalls are needed only for leaky systems (Score:2)
Sometimes (ie, with Windows), you already know the roof leaks, and can't do a hell of a lot about it. Sometimes (ssh vulnerability, for example), the roof leaks in places you don't know, and you'll only find out when you go to look at those irreplaceable and now water-destroyed family photos you kept in a corner of the attic.
you want to make your yard free of rain
Yes, I occasionally throw summer parties (a (legit) visiting laptop connecting to my WAP). Though I have a wonderful u
Firewalls offload the servers and save big bux. (Score:5, Informative)
Beg to differ.
Firewalls save the server from spending cycles on filtering rules and memory on surviving DDoS attacks, just to name two functions.
If the servers must do their own filtering, and you have enough load that you need more than one to get everything done, offloading the filtering to a separate machine means that you need fewer servers. The gain is not just linear, either: keeping multiple servers synchronized (especially those changing database state due to the transactions they serve) is an extra load, which becomes a lower fraction of the transaction cost when the server count is smaller.
Separating the functions also means that the machines can be specialized for their work - with, for instance, hardware acceleration for attack detection on the firewall - drastically cutting the box count. Putting all the eggs in a single basket means accelerators get less usage, since they're used only for a fraction of the machines' load. Meanwhile you need more accelerators to put one on each machine - or you're stuck with using a general-purpose machine to do the work, at much lower efficiency and a much higher box count.
Accelerators may only be available for appliance firewall solutions, not for upgrading a machine optimized for database handling or other server tasks.
If you have a license fee for the server software, having more servers means more licenses to buy. Another cost savings from specialization - this time a big one. If both the server and firewall software is licensed you have to have licenses for BOTH on ALL machines, rather than one or the other on each machine.
If you need content filtering against specific identified attacks, you need a service from a specialist organization, to track new attacks as they arise and upgrade the filtering functions. You don't want an outside house tweaking the machines which contain your own proprietary data.
Separate machines also mean separate software. The firewall software can be written by people focusing JUST on secure and efficient firewalling, the server software by people focusing on efficient transaction service. Do a combined box and your firewalling functionality is just one of a bundle of functions being handled by a software team - in the server and/or the supporting system. (You only have to look at Microsoft to see the level of security produced by the latter approach.)
I could go on. But any one of the above points, by itself, shows an advantage for the separate firewall/server approach in a commercial-scale, commercial-grade service. Combine them all (and others I haven't mentioned) and the argument is compelling.
Re:Firewalls are needed only for leaky systems (Score:3, Insightful)
Apparently the problem for some admins is that firewalls become a security hazard in themselves because they have to be constantly adminned by opening and closing ports for special end user purposes, which tends to introduce configuration errors and security holes. And if they don't do this, they get endless complaints from the end users that they can't access things they need (or think they need) on the Net.
And this also applies to the problem of connecting with business partners, contractors, etc., as we
Why not have both? (Score:3, Interesting)
But it seems to me that rejecting all other traffic with a firewall is a good added measure of security that can only improve the overall security of your setup. It also makes you less visible to attackers and wastes their time.
Re:Why not have both? (Score:2)
When you design a system, you place security at as many levels as you can. This is called 'Defense in Depth' and has been practised for many hundreds of years (castles were built upon the same principle).
If you can, strip your system to the bare minimum and run a network firewall and a local firewall; protocol analysis, IDS and host-level solutions such as PaX all add up.
It's good to work on the idea that no system is undefeatable, so arranging a few different security syst
Re:Why not have both? (Score:4, Interesting)
The article makes the point that it costs money and time to "reject all other traffic" because the end users often need to access things outside the system, new applications such as Skype also need to have new ports opened, and outside visitors need to connect to the network internally which leads to security risks as firewalls are administered.
By treating EVERYBODY outside the server ring as a potential risk, you eliminate these problems and take a more proactive, paranoid approach to the security of the internal network rather than relying on perimeter security which is hard and expensive to do. At the same time, you make the network outside the server ring more useful to end users.
I can see the point - I'd just like to see it TESTED against a good-quality pen-test using compromised workstations against the server ring to see if Layer-Three switches with ACLs and PKI authentication and application firewalls are sufficient to protect the servers against island-hopping attacks by a good hacker.
Re:Why not have both? (Score:2)
PCI, CISP, ISO 17799, SAS-70 (Score:2)
The premise is simple. Multiple layers. Sure you could probably build a box that is very difficult to get into, but do you really think anything is 100% safe? If somebody wants in, I have a belief th
Re: (Score:2)
Re:PCI, CISP, ISO 17799, SAS-70 (Score:2)
He's only giving up the border firewall... (Score:5, Informative)
He removed the firewall between the Production Environment and the Internet, and is replacing it with several firewalls on the internal network. I count 4 firewalls-- One between the Webservers & Application server, a second firewall between the Application server and DB server, a third firewall between the production environment and non-production environments; and he discusses using ACLs to isolate subnets -- that's conceptually the same thing as a firewall.
But that's not a very new concept, and even with his plan, it still seems like you'd be more secure if you have an external firewall on the added network.
What's the harm in adding one more firewall and only allowing traffic on the HTTP port, HTTPS port and possibly VPN? It's cheap insurance just in case someone made a mistake and left some services running on one of the machines.
Re:He's only giving up the border firewall... (Score:2)
Security consists of layers of protection. By removing his perimeter firewall, he is removing one layer of protection. Now, he can provide all the arguments that he wants, trying to justify the removal of the perimeter firewall. But the fact remains, he has removed one layer of protection, and has made his internal protection requirements more complex because of it.
Yea, I don't get it. (Score:2)
Not sure how he can say they "gave up" the firewalls - if it's a router doing filtering or a special "application firewall" (whatever the difference is) it's still doing *firewalling* and thus still needs to be managed.
He never really mentioned that they removed any firewalls, rea
Re:He's only giving up the border firewall... (Score:2)
Re:He's only giving up the border firewall... (Score:5, Insightful)
The "harm" is described in the article:
"Perimeter security was originally intended to allow us to operate with the confidence that our information and content wouldn't be stolen or otherwise abused. Instead, the firewall has slowed down application deployment, limiting our choice of applications and increasing our stress.
To make matters worse, we constantly heard that something was safe because it was inside our network. Who thinks that the bad guys are outside the firewall and the good guys are in? A myriad of applications, from Web-based mail to IM to VoIP, can now tunnel through or bypass the firewall. At the same time, new organizational models embrace a variety of visitors, including contractors and partners, into our networks. Nevertheless, the perimeter is still seen as a defense that keeps out bad behavior. Taking that crutch away has forced us to rethink our security model."
I can see the point. However, as always, YMMV. If you can't devote the resources to doing decent monitoring of your applications and servers, and keeping the workstations patched, then you might need a perimeter firewall.
The point of the article is that a perimeter firewall - a "moat mentality" - leads to lax security on the internal network. And it's NOT "cheap insurance" because it requires much more maintenance to secure an entire perimeter of thousands of workstations AND still provide Net access to those systems (and visitors) than it does to secure an inner ring of a few hundred servers and to treat EVERYBODY outside that ring as a threat - including your own users.
part of a larger security solution (Score:2, Insightful)
While it is true people have the wrong image of a firewall they are still very useful when
Summary (Score:5, Funny)
- Segmenting your network into:
- Workstations
- Internal servers
- Internal databases etc (accessed by servers)
- DMZ
- Setting up stringent ACLs to only permit specific traffic between segments.
C'mon, this is pretty much elementary stuff. Any network admin should know to design his network like this, even in small companies where you have 2 workstations and a single server.
Then he makes a claim that you don't need firewall because only things accessible to Internet (Workstations and stuff in DMZ, like your public website) are running secure OSs patched constantly. I guess they are running OpenBSD with default config then...
Only real "innovation" comes at the end: The article states that they are running some sort of IDS/IDP system in their network, presumably monitoring for any wormlike packets. This is nothing too interesting; anybody can set up Snort and have it running at your switch's monitor port. Only thing is that if it is running only as a logger, it cannot really react fast enough if one of your boxes gets infected with the latest worm from the completely unsecured Internet connection.
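Running Snort that way is about as simple as it sounds; roughly (the interface name and config path are assumptions):
# sniff the switch's monitor port and log alerts as a background daemon
snort -i eth1 -c /etc/snort/snort.conf -D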
If it is running in some sort of transparent bridging mode, where it blocks those packets too on detection, it is pretty much like any...you guessed it...FIREWALL.
He DOES have a point on the fact that numerous applications require intelligent firewalls, the most basic case of course being active FTP. However, almost any commercial firewall (and Linux kernel iptables) supports numerous protocols. Among the most recent additions is SIP. P2P protocols are prominently missing so far, but I'm guessing that at least BitTorrent will be added soon (at least to Cisco IOS/PIX and Checkpoint).
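On Linux, for instance, active FTP support comes from a connection-tracking helper module; roughly:
# load the FTP helper so the separate data connection is classified as RELATED
modprobe ip_conntrack_ftp    # nf_conntrack_ftp on newer kernels
# then accept whatever the tracker has tied to an established session
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT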
Still, I wouldn't give too much credit for this article until he provides us with a detailed network diagram and more specifically states what are the exact benefits.
Re:Summary (Score:2)
I would say Passive FTP is more difficult to firewall on the server than active. Passive puts the responsibility of accepting an incoming connection on an arbitrary high port on the server. Active puts it on the client.
Now some FTP servers let you specify a range of passive ports to announce to the client, but that can break compatibility with some clients.
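With vsftpd, for example, pinning the advertised passive range down to something a firewall can reason about is a couple of config lines (the range itself is an arbitrary choice):
# /etc/vsftpd.conf -- fix the passive-mode data ports to a known range
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100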
Re:Summary (Score:2)
I was naturally talking about outbound connections... the usual case is that your workstations need to access services on the Internet, but the Internet only needs to get through to your DMZ.
If running a stateful firewall also for incoming connections (instead of just an ACL) the servers c
i'll give up my (openbsd) firewalls... (Score:2)
i do concede, though, that my environments are such that the internal networks (and users) *are* trustworthy.
Nothing has me chained to it (Score:2)
It's no magic bullet for sure, the services behind the open parts of the firewall have to be secure or it does no good, but it restricts the possible places an attack can occur.
Seems overkill... (Score:3, Insightful)
The post proposes a pretty novel solution---maintain separate hosts for each server---but it seems really inefficient. I mean, Xen as I understand it will run full operating systems in each of its virtual domains, including separate kernels and whatever else the system needs running.
Why not just work with chroot jails? They accomplish the same thing---keeping things isolated from dangerous interaction with the rest of the system---but without the ridiculous performance overhead of running entire and discrete systems for each service provided.
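A bare-bones sketch of that approach (the paths, daemon name, and library list are placeholders; a real jail needs every shared library the binary links against):
# build a minimal root for the service, copy the binary and its libraries in,
# then start the daemon confined to that tree
mkdir -p /jail/www
cp --parents /usr/sbin/mydaemon /lib/ld-linux.so.2 /lib/libc.so.6 /jail/www/
chroot /jail/www /usr/sbin/mydaemon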
Defense in depth. (Score:5, Insightful)
Saying 'I have secured my OS, I no longer need a firewall' is like saying 'I have an airbag, thus I do not need this seatbelt'. One complements the other.
Its always a risk/reward analysis (Score:5, Informative)
At the organization I work for, we have NO firewall because it is designed as an environment for the deployment of services (videoconferencing, etc.) and for users who need unrestricted network access to the outside world. The security policy is written so that the user is completely in charge of their system. If it becomes compromised and we find out about it... it's disconnected.
Networks rarely are compromised but the edge devices ARE. With the exception of some vulnerabilities in routers of late, networks do what they are supposed to do.
It's NOT network security... it should NOT be the job of the network to protect hosts from themselves. It's HOST security, and the people in charge of the HOSTS are responsible. "Not my fault" you say? Windows is insecure? It's precisely this mindset, isolating MS for so long and pushing the responsibility back onto the network admins, that has kept Microsoft (and OS vendors in general) and application developers from being serious about securing their systems and applications.
We've been tricked... (Score:5, Funny)
"You're not Stuart Berman, you're really social engineering expert Kevin Mitnick, and you almost tricked everyone into taking down their firewalls".
"And I would have gotten away with it if it wasn't for you nosey Slashdotters!"
Re:We've been tricked... (Score:2)
I don't run a firewall (Score:2)
Re:I don't run a firewall (Score:3, Insightful)
I can think of a few instances where you would still be vulnerable without a firewall, like if there was an exploit discovered in the network stack of the OS.
At the risk of being redundant (Score:2, Informative)
Without a firewall to block incoming-random-port traffic, client machines are still vulnerable to day-zero open-port vulnerabilities. Granted, a software firewall SHOULD prevent this but a second, independent firewall helps.
What this guy is doing is A Very Good Thing, but there's no need to turn off those external firewalls completely.
My rating of the original article:
Informative, but overstated.
Too smart for their own good (Score:5, Insightful)
They've taken a nugget of insight, that the reliance on a firewall can make you sloppy, and built a whole mountain of security policy on it. Trouble is, that's upside down architecture.
Good security is about building up as many layers as you can that are easier on you than on your attacker. The goal isn't to be impenetrable, it's to look like too much work so the attacker goes away.
We have a firewall so that we CAN be a little sloppy inside if needed. It's the balance between security and usability. It doesn't mean you rely solely on the firewall. It means that the "firewall", which you should treat more like a window screen, is just another layer of defense.
And when everyone else has a firewall, your unfirewalled network stands out like a house with no window screens.
There is another big picture here, too. If everyone has a firewall, having one doesn't make you look like you've got something to hide. If only 1% of networks were protected, then your firewall makes you look suspicious.
So thanks, but quit telling people they shouldn't use a firewall. Some of them might take your advice.
This is better? (Score:4, Insightful)
Sounds like a pain in the ass to me...
Frankly, there's too many damn buzzwords.
Why Choose? (Score:3, Insightful)
Multiple layers (Score:3, Insightful)
Of course one should strive for having one's servers secure enough to stand on their own in case someone breaks through the firewall, and also because attacks can come from within. You don't need to remove your firewall to do that, however; use your imagination! What happens if there's a flaw in the server's built-in security? Bugs have been known to happen. Paranoia becomes a wonderful trait when you're dealing with network security.
So a firewall is that much extra work; boo hoo!
Life without firewalls according to Abe Singer (Score:2)
Helevius
IP address wastage (Score:3, Insightful)
Petition to tear down your firewalls! (Score:2)
Post a child-post to this post listing your Slashdot user-id and the subnet that your firewall has been removed from so that we can validate that you have indeed joined the revolution.
Ingolfke - 172.16.56.0/24
--
Bot-net for sale. Contact me.
PR? (Score:2)
heh brings me back (Score:2, Insightful)
for me, a gentoo box that hasn't been around or played with long enough to have servers i don't remember running on it is easily safe enough to put up naked on the net. true, i will echo icmp and a few other in-kernel protoco
No, he's just using poor firewalls.. (Score:2)
We can do that now, thanks to layer-3 data center switches that allow for the low-cost creation of subnets. By defining simple ACLs, we further isolate our backend servers.
So, in reality, he has not given up on firewalls, he has simply transitioned to a different firewall structure based on primitive firewalling. "Simple ACLs" are neither simple nor effective.
The other point is that yes, you can create all kinds of contrived security structures if that's how you want to spend all your time/re
Problems (Score:2)
assigning them to a three tiered system of security levels
I'm curious what the justification for a 3-tier system is. Why not 2 or 4? If it's arbitrary then it may be worse than what they're trying to fix.
The cost of the added servers is greatly minimized by making them virtual servers on the same machine
But then an attack on one virtual server for a particular functionality takes out all other virtual servers on that machine? How does this fix anything?
With the new security-enhanced XenSE, t
security wants redundancy (Score:3, Interesting)
Before everyone starts posting "I've been doing that for ten years" and "of course, firewalls are teh suk", let me say that while TFA does make some good points (about "perceived safety" of firewalls), I still do not see any way that its conclusion would be correct.
First off, redundancy in security is good. You want multiple layers of security. It does not make sense to remove a layer just because you installed a different (non-overlapping) mechanism in place.
Second, firewalls are a policy enforcement mechanism, and a single point of control. Under stress it is much easier to control access from a firewall than the eclectic mix of machines behind it. The point needs to be made that while securing each machine is a good idea, that should not be done to replace the firewall.
Visible services can't be assumed to be bulletproof. Compromising the frontend machines can result in them becoming rogue agents (DDOS and whatnot). Firewalls attempt to mitigate this risk by blocking outgoing access and thus rendering the network less useful to the attacker. Without a firewall, well...
The network of machines is secure today, after a lot of careful design work. Is it stable? Will it still be secure after the next site upgrade?
While more complex systems can occasionally be more secure by their inherent obfuscation, verifying such systems from the inside is also difficult, but manageable given the manpower. When the security components are mutable though (they are OS services and custom software which are upgraded often), the complexity of the system works against us, making it that much harder to verify that all the combinations still result in a secure system. Not to mention that the machine verification involves application-level checking which is either laborious or impossible for the network admin to do.
From TFA: Meanwhile, the clients sit in the clear. We protect them by boosting their immunity levels so that they can exist in harsher conditions. They run secure OSs, fully patched with current anti-virus protection.
So our definition of a secure OS is Windows (what other OS needs to have "current anti-virus protection"?). That sure explains a lot. I suppose those machines wouldn't happen to have the firewall enabled, would they?
Layering is always good.. (Score:2)
Never let oneself get tricked into thinking that one big layer of defense will keep them out. The French frogs built the Maginot Line and look where it got them.
The best defense is not showing the world that you have the systems in the first place. Hostmasking, IP shrouding, wrecking the IP tables to the point where a hacker only winds up either getting rerouted or sent to /dev/null.
The 2nd layer is securing the LAN. Standard firewalls on ev
What has you chained to your firewall? (Score:5, Insightful)
At my last company, we didn't have a firewall on the website, because my philosophy was "I'm running port scanning to make sure 22, 80 and 443 are the only ports listening on the boxes - why should I put a firewall in front of it to only let those ports through?"
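That check is easy to automate from a second host; something like this (the hostname is a placeholder):
# scan every TCP port from outside and flag anything besides 22, 80 and 443
nmap -sS -Pn -p 1-65535 www.example.com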
Unfortunately, now, if you don't have a firewall, you're not in compliance. It's simply a cost of doing business - the security concerns are completely irrelevant.
Obviously, you should be building your networks so they would work without firewalls - that's a lot more secure. But, unfortunately, you can't just throw the firewalls out even if you don't need them.
Re:What has you chained to your firewall? (Score:3, Interesting)
But you do need them. You should assume that your servers will get rooted, in which case they may soon be listening on any other ports and initiating connections to anywhere also on any port, or even DoS'ing the rest of you
So what's the big deal (Score:2)
Uh-huh... (Score:2)
Not real bright... (Score:2)
Seems somewhat questionable.. (Score:2)
Maintaining a 100% secure client OS and specialized applications aside, if a user were to download a malicious program or visit a malicious page with a new IE exploit, couldn't his authentic
Not an Innovator; Just a Contrarian (Score:3, Interesting)
He proposes that we secure all individual boxes, which is umpteen times more difficult, more time-consuming, and less secure.
He's not an innovator; he's a contrarian.
Re:Right now... (Score:2)
The only problem I have with getting rid of the firewall is that the connection from the script kiddie is still being serviced by the server. If a firewall is present, I have eliminated that connection from the server. One script kiddy isn't a real concern, but hundreds of zombie machines trying to connect can basically cause a denial of service attack for that server (or at least slow it down). Right now my firewall only lets certain IPs through
Re:Right now... (Score:2)
This mindset doesn't work in a real, large, datacentre, which is the kind of environment this guy is talking about.
If your ssh traffic is business critical, then shutting it off to an IP is not an option just because some script kiddy is mucking with you. You can't let him degrade your business
Re:paranoid (Score:2)
You obviously didn't even CONCEIVE of reading the F'ing article, right?
Good
Re:no firewall? (Score:3, Informative)
What part of TFA didn't you read? This one?
"The first tier consists of presentation servers such as Web and e-mail servers--these are the only servers accessible to end users."
What part of "presentation" didn't you understand? The clients access their apps via these servers. Everything else is in a (two tier)protected server ring accessible only from the presentation servers themselves. Thus, clients do NOT need to access the critical application, and especially the database (where the corporate data hac