When "Security Through Obscurity" Isn't So Bad 152
Erik writes "In this article, Jay Beale (Bastille Linux Project, Mandrakesoft) explains why Security Through Obscurity is actually a really good idea if you do it right. A good read for sysadmins." Agreed... a lot of really interesting points, well written and entertaining. For starters, you can't rely just on obscurity to keep you safe. But it doesn't hurt to obscure things sometimes just to make it tougher for your attacker.
Ain't bad but... (Score:1)
Missing the big point (Score:2)
Examples of this abound in cryptography. One example is the Skipjack algorithm. It was developed in secret by the NSA, probably the premier collection of cryptographers in the world. It was released in tamper-resistant hardware boxes, so as to preserve the mechanism's security. However, it was quickly reverse-engineered and found to have huge weaknesses.
The same goes for any other computer system. If you use software (mechanisms) that were developed in secret, their flaws will remain hidden from the creators, and go unfixed. Someone will figure them out, and those bugs will be exploited. It is preferable to use software that is subject to the harsh light of public scrutiny, so that its bugs will be found and fixed. This applies very well for cryptography, which changes rarely, and less well for things that change more frequently. Reasonable people may differ about what change rate makes it preferable to have secret mechanisms, but it's not unreasonable to say that it's better to have your operating system and server software available for general scrutiny. Folks like Mr. Bartle and MeowMeow might disagree.
But the point is this: Obscuring your mechanisms is bad, obscuring your data and use of the mechanisms is good. As Mr. Bartle rightly points out, most security these days rests on the obscurity of passwords. He does not, however, give a general framework for what to obscure and what to reveal.
Basic difference: (Score:3)
For example, if everyone's password was their spouse's or pet's name, that would be a kind of security through obscurity. This is obscure information to an outsider, but is vulnerable to someone who does their research. Whereas a true-random password would need to be stolen or guessed by brute force.
There isn't always a sharp line between the two, but it is a useful distinction.
Things like encryption algorithm used are almost always merely obscure (the general <i>method</i> of security is always obscure, but sometimes an exact algorithm is truly secret), while passwords are ideally secret, but often merely obscure in practice.
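The obscure/secret distinction can be put in numbers by comparing guess-space sizes. A rough sketch, where the corpus sizes are purely illustrative assumptions:

```python
import math

# Rough guess-space sizes; both counts below are illustrative assumptions,
# not measurements of real password habits.
pet_names = 10_000        # plausible pet/spouse names an attacker might try
random_pw = 62 ** 8       # 8 chars drawn uniformly from [A-Za-z0-9]

def bits(space: int) -> float:
    """Entropy of a uniform choice from `space` possibilities, in bits."""
    return math.log2(space)

print(f"pet-name password: ~{bits(pet_names):.1f} bits")   # ~13.3 bits
print(f"random password:   ~{bits(random_pw):.1f} bits")   # ~47.6 bits
```

The "merely obscure" password is trivially within reach of a dictionary run, while the truly random one is not, which is the whole distinction.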
Re:I have another concept that's better (Score:1)
All security is based on obscurity.. (Score:5)
When does something stop being 'security through obscurity'? Depending on how you look at it, all forms of security, (or at least most of the ones employed over the internet) are based on picking access tokens that are really hard to guess.
Running a service with no authentication on a random port isn't great security, but in principle, it's the same kind of security as running on a well known port and requiring a unique access identifier and passcode. It's just harder to guess, but still fundamentally the same.
Real security would be achieved through schemes where all of the knowledge in the world won't gain you privileged access.
The fundamental difference: (Score:5)
The answer is simple: ease of review. Obscurity is meant to put stumbling blocks in the path of those who desire to review the system, for whatever motive - be it academic curiosity, security assurance, or even to learn how to penetrate into it. The hidden web server trick described in the article, closed-source security software, and proprietary crypto are all examples of techniques that are meant to obscure and thus make review difficult.
The question of whether to obscure, then, reduces to whether you'd like the system you're building to be reviewed. There are several very bad reasons that could motivate you to hinder review of your system: attempting to hide security flaws you know or suspect are there is one of the bigger ones.
But the decision to impede review can be perfectly reasonable - depending on who the reviewers are likely to be. If you know, for instance, that your community of reviewers includes honest, skilled people who want to use your product and who will alert you to problems that they find, then that's a very big reason not to obscure anything. This is what motivates Linus, the Apache group, the GnuPG folks, and everyone else out there tirelessly trying to produce systems that function in the most security-hostile of environments. These folks have literally thousands to millions of users, almost all of whom are honest, and many of whom are skilled enough to discover flaws in the system.
Joe Sysadmin doesn't have that kind of community. His users are very likely incapable of discovering security flaws, or if they are, unlikely to share the information they find with Joe. The majority of people who might be interested in reviewing Joe's network are malicious and intent upon using any information they find to the detriment of Joe and his users.
In this case, the decision to put up walls of obscurity is as much of a no-brainer as the decision to use an open-source web server. Joe has assessed his community of potential reviewers and has determined that, on the whole, he'd rather not have that set of people learning things about his network. He will certainly use products that have proven themselves under strict review, but he is under no obligation to describe to anyone how he's configured his network. In his situation, doing so would only undermine his security.
Re:Let's face it, CmdrTaco, (Score:2)
1. DES and AES are symmetric key systems, not public key.
2. When people say that breaking cipher X is exponentially difficult, it's really shorthand for "there is no known algorithmic attack against the cipher that is significantly faster than checking all the permutations of keys by brute force". It may be that brute force is optimal, but unless you've actually done a proof, this is only a conjecture. Look at the improvements to DES that the NSA suggested to IBM: these changes fortified DES against cryptanalysis that wouldn't be public knowledge for almost another two decades, yet most people had no basis for making this judgement, just as we can only guess that AES is secure against all possible future attacks. This doesn't mean that the algorithm doesn't matter (clearly DES is stronger than rot-13), but you shouldn't assume that an algorithm alone brings security.
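A quick back-of-the-envelope calculation makes the brute-force shorthand concrete; the guess rate below is an arbitrary assumption for illustration, not a real hardware benchmark:

```python
# Expected brute-force effort for two well-known symmetric key sizes,
# assuming no attack better than trying keys.  The 1e12 keys/second
# rate is an arbitrary assumption, not a measured figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
rate = 1e12  # hypothetical keys tried per second

for name, keybits in [("DES", 56), ("AES-128", 128)]:
    expected_tries = 2 ** (keybits - 1)    # on average, half the keyspace
    seconds = expected_tries / rate
    print(f"{name}: ~{seconds / SECONDS_PER_YEAR:.3g} years at {rate:.0e} keys/s")
```

At that assumed rate DES falls in hours while AES-128 outlasts the universe, which is exactly why key size (not algorithm secrecy) carries the load.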
Re:OS advocates always forget (Score:2)
Besides, unless a program really hasn't been looked at before or the bug is of a newly-popular class (e.g., format string vulnerabilities), someone has probably already found all of the obvious bugs. The more likely case is really someone testing a program with a lot of different weird situations, which is just as easy with a binary.
Re:Stripping email headers = bad (Score:2)
How do you tell which internal machine spouted the mail?
That is what my email logs are for. All the interesting header bits are removed by the final outgoing mail server after it has been logged. Pretty simple.
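The log-then-strip step could look something like this with Python's standard email module; which headers count as "interesting" is a policy choice, so the list here is just an assumption:

```python
import email
import logging

# Headers that reveal internal topology.  Which ones to strip is a
# site policy decision; this list is an assumption for illustration.
INTERNAL_HEADERS = ["Received", "X-Originating-IP", "X-Mailer"]

def log_then_strip(raw: bytes) -> bytes:
    """Log the revealing headers, then remove them before relaying."""
    msg = email.message_from_bytes(raw)
    for name in INTERNAL_HEADERS:
        for value in msg.get_all(name, []):
            logging.info("%s: %s", name, value)   # keep the evidence locally
        del msg[name]  # deletes every occurrence of this header
    return msg.as_bytes()
```

The logs keep the forensic trail; the relayed message no longer advertises which internal machine it came from.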
Re:Lack of Maintainability through Obscurity (Score:2)
Within certain contexts that's true. But in some cases, obscurity doesn't sacrifice maintainability much at all. The example used in the article was changing the port number a certain intranet webserver is run on. This type of obscurity is fairly benign.
My view is that obscurity should be the last layer applied to any security system. And as you point out, if the cost of obscurity is too high in terms of maintainability, maybe it's not such a good idea. A system ought to be secure without any obscurity, but having a "sane obscurity layer" sours the pot for the kiddies.
Jason.
Heh. (Score:2)
The most important point of the article is that security through obscurity is terrible if and only if it's used as the only layer of security in the system. As another layer, it can't hurt at all.
I think it's like wearing a bullet proof vest that isn't rated for large caliber weapons. Sure, it's not going to help you out if some loser with a
...and you seem to forget (Score:2)
We all use it, anyway (Score:1)
It will be interesting to see what kind of authorization will exist if mind-probing would ever be an easy-to-use and reliable technique (if possible at all).
Oh, and probably a zillion other times where you've done something (not necessarily computing related) and thought it was safe just because the odds people would find out were negligible.
Re:Let's face it, CmdrTaco, (Score:1)
Neat! So qmail doesn't use libc? Nice hack. Too bad you're relying on one guy's coding skills to keep it all secure.
So you'd best secure it, buddy. This one is so astoundingly dense I don't know where to begin.
(Insert your poorly-reasoned, hate-filled "I'm better than all of you!" response here)
Stripping email headers = bad (Score:2)
As another poster said, any attempt at obscurity (especially in this case) results in an exponentially greater burden of maintenance.
No he doesn't (Score:4)
Your claim (which contradicts the entire point of the article, which is that a little obscurity on top of a secure-as-possible system can't hurt) doesn't circumvent his argument whatsoever, and I really don't see why there is such an aversion to the idea of obscurity on here.
His point, which is very correct, is that (using the website as the example) someone has to be very "noisy" to actively seek out non-standard ports/services: If there is a hacker targeting your system, (s)he will leave a lot more fingerprints and evidence of their actions doing a cascade scan through your servers/ports than if they simply waltzed in and connected to private.company.com. The point is that each additional piece of hidden information is one more hurdle for the prospective hacker to jump over: It's a shitload harder to hack an unknown webserver on an unknown machine through an unknown firewall than it is to hack IIS 5.0 on Windows 2000 SP 1 pre-HOTFIX XYZ running on port 80 at intranet.company.com (hint: In the first case there will be a swath of evidence from attempting various tactics to determine what the OS/webserver/firewall constraints are. In the second, Jimmy pulls up black hat exploit #1027-D [the one that the public doesn't know about yet] and applies it against the server. Instantly the server is ownzed and there are zero tracks, because to the other systems everything looks fine. Jimmy puts in his keystroke grabber, cleans up his tracks, and disappears into the night).
The concept of enhanced security through obscurity is absolutely as clear as day. Pretend that I encrypt a piece of information to send to you: Of course I'm going to pick a very secure algorithm, so let's say that I go with Twofish. To the best of my knowledge it is a super secure algorithm, I'm safe, and there is zero chance that it could ever be broken; but pretend that somewhere someone out there knows how to break Twofish with just a month of computer time: Do you think that they'd waste the month if they were unsure what algorithm you used? Pretend that they see an encrypted file called stuff.enc, versus bank_numbers.twofish: Which one gives them a head start and motivates them? The standard (moronic) reply to this is "Well use a secure algorithm/software/OS/etc.!" However, that is a foolish statement: Many algorithms, software packages, and OSes have fallen in the past after years of people super-duper-assuring you that there is absolutely nothing wrong. So in other words, unless you are absolutely sure yourself about every piece of software and algorithm that you use, a little obscurity can't hurt.
Security derives from biometrics. (Score:3)
Try running a different kind of Turing test. Not one where you're trying to prove how intelligent or self-aware or witty or urbane you are but just who you really are.
That test immediately requires a web of trust, that someone or something we can trust be able to vouch for you, and a web of deceit, that this trusted someone or something be able to recognize you somehow in such a way that we can all trust the process.
The current authentication schemes usually fail by having a web of deceit that's too broadly woven. The senses we have provided to our systems are ridiculously inadequate. For now.
Let's create a system which can authenticate that you are you. It has to know who you are by virtue of your having been presented to it once under trustworthy circumstances.
What you know is useless.
It should be able to authenticate your cadaver. So much for all the password schemes in the world. Period.
It should be able to identify you AS a cadaver. That means that the bio-metric data must include measurements of things like temperature, heart-beat rate, eye movement, involuntary tremors and other things which correlate to identify you as you.
Listening to a "Rich Little"-caliber mimic on one of his good days will fool the blind but the disguise is blown the moment you open your eyes. Therefore the bio-metric data must be multi-dimensional.
Listening to you say a common phrase as you stand in front of it (actually you'll be potentially surrounded by its sense organs,) it should be able to identify you from anyone else on the planet and tell not only if you're you, but if you're angry, in distress or just inebriated.
And if it doesn't recognize you, you can go suck an egg or spend a night waiting for your attorney.
Until then security is mere mental masturbation.
Re:This is ancient... (Score:1)
Actually, those aren't examples of obscurity but are actually real security - they're just like having a password (a possibly-easy-to-guess one ("speak, friend, and enter"), but still a password). In both cases the schematics of the door mechanisms, locks, etc. could have been laid before a thief, but they still couldn't have gained entrance without either a brute force attack or else knowing the shared secret - the password (I'm assuming here that neither type of door really had any mechanical defects, etc.)
Security through obscurity was more like Smaug the Dragon - if anyone could see the schematic of all of his armor, they would have immediately identified the weak point.
Re:I'm talking about rootkits, not exploits (Score:1)
Of course, if you just mess around and after a breakin assume that patching the backdoor is enough to clean your system... hmm, then I guess you're asking for problems.
A bunch of rootkit backdoors don't even use existing binaries, they just run their own login service, hidden by a special kernel module.
Other tip besides tripwire: www.snort.org
Re:/. is the wrong audience for this article... (Score:1)
Security through obscurity quickly falls victim to security mistakes (plus the _false_ sense of security it has given the implementer).
Re:Bruce Schneir (sp) agrees (Score:1)
Take longer to find it? Often not. You know how they would find the safe? They simply follow you when you go out to check on it. Bye-bye security, it helps nothing.
Often obscurity is not what it seems, the obscurer will think it's obscure, but the safecracker will often quickly find a way to figure it out.
Re:This is an interesting topic (Score:1)
Re:The use of obscurity is environment dependant (Score:1)
Obscurity vs Secrecy (Score:2)
Re:reply to .sig (off topic) (Score:1)
reply to .sig (off topic) (Score:2)
http://www.eff.org/Misc/Graphics/nsa_1984.gif
There's no mention of copyright. Perhaps contacting EFF and see if they have it and will let you use it? At the very least, I suspect they could set you up with someone. At that point, you could print a run of them youself.
Re:I have another concept that's better (Score:1)
Re:OS advocates always forget (Score:1)
That's why lack of source hurts the defense more than the offence. The offence just needs to find something that causes the target to misbehave and trace it at the machine-state level. Should be pretty good pickings, especially if due to some subtle compiler or library error.
The author misses a VERY big point... (Score:1)
Using obscurity as a security method does hurt your security, because it provides a false sense of security.
In his example (hiding the port your IIS server is running on) - it may protect you from a script kiddie. By hiding the port number, you might think "oh, it's unlikely that someone will find this." - which is a very bad thing, because if it's there, someone will.
Obscurity is ALWAYS a bad thing, because it leads to a false sense of security.
One other nitpick is that keeping a password secret isn't obscurity - it's a method of authentication. (I am me because I know my password.) Obscurity is hiding the existence of something.
An idea for slowing down port scanners. (Score:2)
Port scanners, to function, typically make a connection to every port they want to probe and see if they can complete a connection. Those scanners that are trying to be stealthy might not complete the connection after this point, but others might continue to at least receive data about what server is running on that port. And this gives me an idea.
Set up a LOT of servers on random unused ports on every system that will answer any incoming connections and print out a LOT of data VERY VERY slowly, such that it would send one character at a time and send each packet one byte at a time with lots of delay time in between. Make it short enough so the port scanner doesn't time out and give up, but will sit there and happily lap up the characters as they come through one at a time over a period of hours. This way, if a non-threaded portscanner were to stumble onto one of these machines it would essentially take that port scanner out of operation until the operator discovered the problem. Granted, this trick could be overcome with software on the portscanner side, but it might make the attacks a lot less fruitful for a while.
-Restil
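A minimal sketch of such a tarpit; the port, banner, and delay are arbitrary choices, and a real deployment would need rate and resource limits:

```python
import socket
import threading
import time

def tarpit(port: int = 4444, delay: float = 10.0) -> None:
    """Accept any connection and feed it a banner one byte at a time,
    so a naive single-threaded scanner stalls here for hours."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(5)
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=drip, args=(conn, delay), daemon=True).start()

def drip(conn: socket.socket, delay: float) -> None:
    banner = b"220 welcome to the world's slowest server...\r\n"
    try:
        while True:
            for byte in banner:
                conn.sendall(bytes([byte]))  # exactly one byte per packet
                time.sleep(delay)            # keep the scanner waiting
    except OSError:
        conn.close()  # the scanner (finally) gave up
```

As the follow-up reply notes, a threaded scanner shrugs this off, and the tarpit spends your cycles too, so it's harassment rather than protection.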
I don't get it (Score:2)
Huh? Ok, maybe for an ultra-secure private site where you can tip off everyone who needs to know the "secret" port address to append to the URL, but how does that help me? If I want people to get to my site, it had better be running on port 80.
The underlying message seems to be that once you have changed the port your webserver listens on, you should install (portscan detectors, including Bastille Linux developer Mike Rash's upcoming Port Scan Attack Detector,) THEIR new portscan detector.
Wait, is this an ad?
Re:Why admit you're using IIS at all? (Score:1)
Re:Lack of Maintainability through Obscurity (Score:1)
Re:Let's face it, CmdrTaco, (Score:2)
both sendmail and qmail have their security limitations imposed by what they are designed to do...
qmail has had no security holes in that it works as advertised... if I set up qmail to allow relaying from all ip addresses, that is a form of security by obscurity... it's "secure" as long as nobody tries my ip address, but otherwise the door is wide open.
If someone does find my open relay, it's not a fault of qmail, and thus (rightly!) is not considered a security hole in qmail.. it's my fault.
In order to increase security then, I have to reduce functionality... I cannot access my smtp server from any ip address... it must be validated in some way, either by using SMTP auth, or by limiting the ip's which may access the server.
But then someone could spoof the ip address of an ip I've allowed relaying from, or they could discover a password from SMTP auth...
Either way it's not a security hole in qmail, but a limitation in security imposed by the fact that qmail is supposed to be able to do something...
The only way to 100% secure qmail is not to run it at all! (of course I'm not picking on qmail, but on all programs in general).
There is a difference between security, i.e. deciding what tradeoff of access and functionality is acceptable, and security holes, software bugs which cause programs not to operate as designed.
You can plan security to reduce the impact from possible security holes, and you can code well to reduce the occurrence of security holes, but both of these are just part of an overall security policy.
Doug
Re:Let's face it, CmdrTaco, (Score:1)
OK -- one not connected to the Internet.
The parent post wasn't talking about computer systems, it was talking about crypto systems. And there are well known perfect algorithms -- like one-time pads -- and well known exponentially difficult algorithms -- the proven public-key algorithms. (e.g. DES, AES)
> Would you give all potential attackers a complete list of your computers, all the software they run, and schematics for your internal network? Would you send them your ruleset on your firewall? Of course not! And keeping this information obscure is security through obscurity.
Why not? If you're running OpenBSD, what do you have to fear?
If you _depend on_ your firewall rulesets to remain secret for security, you have no security. Anyone with enough patience will crack your ruleset, if it's vulnerable. The article points out that obscurity only enhances security when your system can be shown secure without it -- that your firewall rulesets are correct.
-_Quinn
Re:I'm talking about rootkits, not exploits (Score:2)
Re:Summary of Article (Score:1)
Suppose I have a system protected by a single password prompt, which uses a 40-bit key on some unusual crypto alg.
If I tell everyone the algorithm name and keysize, it's only a matter of time before it's cracked. However, any attacker won't know you have to type "All your base are belong to us" at 3 letters/second, send a form feed, and append 'brouhaha' to your password as well as getting the encryption right. The number of hackers who have a go at your system will be drastically cut down by the obscurity in place.
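The hurdles described above could be sketched as a wrapper around an ordinary password check; every ritual here (the phrase, the pacing, the "brouhaha" suffix) is the commenter's made-up example, not a real protocol:

```python
import hashlib
import hmac

# The magic phrase and pacing are the made-up obscurity rituals from the
# comment above, not anything a real system uses.
MAGIC = "All your base are belong to us"
MIN_SECONDS = len(MAGIC) / 3.0   # must be typed at 3 letters/second or slower

def obscure_login(phrase: str, typing_seconds: float,
                  password: str, stored_hash: bytes) -> bool:
    """Ordinary password check wrapped in obscurity hurdles: magic
    phrase, slow typing, and a secret suffix appended to the password."""
    if phrase != MAGIC or typing_seconds < MIN_SECONDS:
        return False  # obscurity layer: the ritual wasn't performed
    candidate = hashlib.sha256((password + "brouhaha").encode()).digest()
    return hmac.compare_digest(candidate, stored_hash)  # the real layer
```

The real security still lives in the hash comparison; the wrapper only thins out the crowd of attackers who bother to look.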
Re:Summary of Article (Score:1)
(in the field of security, of course, not English in general)
Re:An idea for slowing down port scanners. (Score:1)
Re:An idea for slowing down port scanners. (Score:2)
At which point, they get some of their 0wned boxes to connect to your servers, hence wasting *your* cycles.
In the meantime, they have figured out which ports on your machine do *not* behave the same way, and hmm, you're 0wned too...
Abraham Lincoln's statement to obscurity (Score:1)
40 bit = strong encryption? (Score:1)
Umm, when did 40 bit become strong encryption? Must have been listenin' to Freeh's voice booming out from the black helicopters for too long. Given the credentials he totes around on this stuff, this has got to be a mistake.
Re:40 bit = strong encryption? (Score:1)
Re:Obscurity isn't bad, just a waste of time. (Score:2)
no kiddin... (Score:3)
Re:Lack of Maintainability through Obscurity (Score:3)
That's what I like to call "job security."
Kind of like not commenting code. Make it impossible to interpret by anyone but yourself and you're simply increasing your value as an employee!
Re:OS advocates always forget (Score:2)
Re:OS advocates always forget (Score:3)
I would argue that this is not the case in the computer world.
o A defender has many bases to protect; often with few resources.
o The human resources needed to build an effective security team are incredibly expensive.
o If an attacker compromises even one 'base', hiding tracks and stepping to other resources is often easy.
o Even detecting a compromise can be extremely difficult, let alone determining what damage has been done, etc.
o Attackers may attack at leisure and need only find ONE of many possible vulnerabilities.
o Attackers can also attack with very little cost or consequence by bouncing through different proxies.
o Defenders must obey the law, while attackers may not.
o In the real world, defenders can launch offensive measures as a defense; you can't really do this in the computer world.
o Attackers often find out about vulnerabilities in the defender's system before the defender knows about them.
I don't think that defenders have any advantage; attackers almost certainly have the upper hand. In the real world, the following factors make defense positions stronger in conventional warfare:
o Defense can see the attack coming and anticipate what the attacker will do
o Attacker's goals are often obvious
o Attacker is out in the open, while defense has had time to build heavy physical defenses and stockpile resources
o Attacker is unlikely to find a gaping hole in the defender's defensive measures
o Attack is resource consuming
-1, nonsense (Score:1)
Nonsense, it just takes one step out of the process. Have you read eEye's Analysis of Code Red [eeye.com]? They didn't have the source, but they accurately predicted what it did and what would happen.
And look, IIS is closed sores, and they still managed to get into a bundle of crap.
Linus said: Given enough eyeballs, any bug is shallow.
Hiding the source just cuts down the number of eyeballs.
Re:This is an interesting topic (Score:1)
From a signature on Bugtraq: Security isn't a program, it's a process.
OpenSSH didn't suffer from the same short password problem commercial SSH Secure Shell did. They have different licenses yes, but those aren't the only differences. :-)
Re:This is an interesting topic (Score:2)
As a matter of fact, that's exactly what it is designed to and will do. The whole point of PGP encryption is that it obviates the need for password authentication. The only password (in effect) is your private key. This private key should be kept safe at all times, but if it does not fall into the hands of your enemy, they will not be able to read your messages.
The only place a password comes into play is the protection of your private key, but in all honesty you're already screwed if your private key has fallen into enemy hands. A short password is not going to stop any determined party from cracking your private key once they have access to it. For the better part of a decade, I've not used a password for my private key, because it's stored on removable media and only ever used on a trusted host. This is one misconception I'd like to set straight. PGP provides more security than could ever be provided for by a system that relies on a memorisable password.
Re:Obscurity isn't bad, just a waste of time. (Score:2)
Obscurity isn't bad, just a waste of time. (Score:2)
You'd better watch out... the FBI might get you for violations of the DMCA there...
Now back on-topic... obviously there's no excuse for obscurity as the sole method of security, at least not for any sysadmin worthy of the title... but I would argue that security-by-obscurity is kind of a waste of time. Ideally, I'd want all my webservers, etc. to be just as impregnable at port 80 as on port 8000, so why bother hiding it?
Zaphod B
One more obvious thing (Score:4)
The reason security through obscurity doesn't work is that it assumes a lesser effort provides a deterrent after a greater effort has already been spent. If the attacker can beat Kerberos authentication (or even a password challenge), the port scan becomes a trivial effort.
The one advantage of the article's example is not security related at all. Filtering script kiddies saves a potential bandwidth hit by quietly dropping port 80 traffic.
Re:This is an interesting topic (Score:2)
OTOH there is no point in the world in using this information across the net, because it becomes just like any other hard-to-guess authentication token and can be sniffed, copied, and sent round the place... plus, once somebody's cracked it, you can't change your fingerprint/retina/DNA... which means that any network using these measurements as your identification is either a) based on a quite amazing depth of trust/independent arbitration/separate cryptographic systems or b) flawed.
Not that I spend all day thinking about these things...
Re:OS advocates always forget (Score:5)
(Not the part about the comparative ease of exploiting security holes in source code vs. binary code -- the part about OS advocates somehow "always" forgetting that fact.)
In my experience, advocates of open source are highly aware that it's easier to exploit security holes in source code.
After all, to do that, one must find them first, correct?
And unless most or all of the people looking for them also intend to exploit them in an offensive way, the result will be that said holes will be fixed in a defensive mode vs. offensive attack faster than for proprietary, binary code.
Remember, defense always has an advantage over offense, in terms of time and effort -- the "battle" is on the defender's own turf. Offense can make up for this inherent disadvantage to some extent by, for instance, having the element of surprise.
For most such offensive counter-measures, having the source code is not an advantage, in fact it's a disadvantage, if the defense also has the source code.
That's because the defense can deploy fixes to holes faster than the offense can attack them, so, assuming they both find out about them at the same time, the defense wins. (This is generalizing, by the way. Specifics always vary, of course.)
Now, ask yourself this question: when it comes to binary-only code, who is likely to know more about the internal operations of the code -- the defense, in this case, law-abiding users, or the offense, willing to obtain THE SOURCE CODE through illegal channels??
That's why Open Source has such an advantage here -- the defense, collectively, is encouraged to share, generously, insights about the software, while the offense must remain fairly isolated (a general rule about law-abiders versus law-breakers; while law-breakers sometimes pool their resources, they have an uphill battle with regard to such activities compared to law-abiders, who can do their pooling in the open and/or in hiding).
When it comes to proprietary software, law-abiders are increasingly discouraged from sharing insights, info, breakages, fixes, etc., while the opportunities for pooling resources on the law-breakers' side remains pretty much the same.
But there's a special prize law-abiders are denied that law-breakers have the opportunity to exploit should they gain access to it, when it comes to proprietary software: the source code.
Once they have that -- and they will, if the software "matters" at all -- the law-abiders will be on the losing end of things.
"If source code is outlawed, only outlaws will have source code."
Re:All security is based on obscurity.. (Score:4)
Thanks for qualifying this as security on the Internet. Note that there are a whole range of other forms of security. For instance a guard holding an M16 is a particularly effective form of security that has no secret component at all. An entirely different method is the "security camera". And a third is Mutually Assured Destruction.
None of these three methods of security are prevalent on the Internet. Are there any others? Could any of these be used in place of traditional usernames and passwords?
This was covered before... (Score:3)
In this [slashdot.org] slashdot article by a slashdot reader.
The reader, being a Microsoft employee, received a mostly negative response, whereas responses to this article are getting mostly positive reviews.
Funny how much acceptance of the message is affected by who the messenger happens to be.
Summary of Article (Score:2)
When you say "Security through obscurity is bad," what you really mean is "Security implemented solely through obscurity is bad."
Er, thanks for clearing that up, guys.
Dlugar
Real security by obscurity (Score:3)
login: root
password:
> ls
Tue Jul 24 02:06:22 CEST 2001
> rm -n
.
> rm
rm:
> rm /
foo bar baz foobar
> exit
exit: Too few arguments
> quit
quit: Command not found
> ^D
> ^C
> ^]
<telnet> quit
Re:bah! (Score:2)
For a classic example, consider the Enigma. Security through obscurity, in that no-one knew how the rotors were wired, or what the codes were. Had a U-boat commander not omitted to destroy his codebook, the U-boats would have owned the Atlantic for the whole war, Britain would have been screwed, and the US would never have bothered joining in a war where they'd've lost most of their troop ships on the way across the Atlantic. Even with that and with super-geniuses like Turing on the case, if the Germans had changed the codes more frequently or changed the rotor wiring once a year, we'd still have been screwed simply through the time it took to reverse-engineer the wiring.
Security _only_ through obscurity sucks when it isn't changed regularly enough. Security through obscurity is essentially communicating with a shared secret, the secret being the obscurity itself. Any codebreaker will tell you that the more signals you get with the same key, the easier it is to break those signals. So if you keep the obscurity but change it at regular intervals, no-one has enough time to reverse-engineer the old one before the new one is in place. This admittedly isn't great for stuff that's supposed to stay secret long-term (e.g. government stuff or bank details), but it's fine for data that rapidly goes out of date (breaking news reports, for instance).
Of course, if it's sending something like a credit card number then you want _absolute_ security, so that someone 2 years later can't automatically break into your emails. And then I will agree with you, obscurity sucks the big one.
Grab.
Re:All security is based on obscurity.. (Score:2)
Let's assume for the moment in your above example that the only way into the system is through the front door of the one server in question. Then in the first example, an attacker just needs to scan a few thousand ports at most, a pretty quick matter. In the second case, if the username/passcode is chosen well (randomly), and hitting the server is the only way an attacker can check the validity of the username/passcode, then he must make 2^(N-1) tries on average (half of the 2^N possibilities), where N is the number of bits in the username/passcode. Make N even a little bit big, and throttle how fast attempts can be made, and you can make the expected time to break in come sometime after our sun goes red giant. That is fundamentally different.
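The parent's math can be sketched quickly; the bit size and throttle rate below are illustrative assumptions, not figures from the post:

```python
# Expected brute-force effort against an N-bit random secret when the
# server is the only oracle and attempts are rate-limited.
def expected_crack_seconds(bits: int, attempts_per_second: float) -> float:
    expected_tries = 2 ** (bits - 1)  # (2^N)/2 tries on average
    return expected_tries / attempts_per_second

# A 64-bit secret throttled to 10 guesses per second:
seconds = expected_crack_seconds(64, 10)
years = seconds / (365 * 24 * 3600)
print(f"{years:.1e} years")  # on the order of 10^10 years
```

Even a modest N with throttling pushes the expected break-in time past any attacker's patience.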
Re:OS advocates always forget (Score:2)
Actually the entire field of guerrilla warfare is predicated upon the complete opposite of your statement. Offense has the advantage over defense because the attacker can choose the terms, time and location of engagement.
Or to quote Warren Buffett instead of Chairman Mao, investing is like a match of cricket, rather than baseball: there is no penalty for not swinging the bat.
Lack of Maintainability through Obscurity (Score:5)
The use of obscurity is environment dependant (Score:2)
Rknpgyl. Frphevgl vf nobhg nccebcevngr cebgrpgvba. Nf na rknzcyr, lbh jvyy unir unq gb obgu qrgrezvar gur xrl sbe gur fvzcyr Pnrfne Fuvsg Pvcure V'ir hfrq gb rapbqr guvf nf EBG-13 naq gura npghnyyl tb gb gur gebhoyr bs qrpbqvat vg (juvpu znl be znl abg or rnfl sbe lbh, qrcraqvat hcba gur gbbyf lbh npghnyyl unir ninvynoyr gb lbh).
Naq V'z jvyyvat gb org gung lbh'er cerggl hahfhny: zbfg crbcyr jvyy unir fxvccrq cnfg guvf gb ernq gur arkg cbfg va Ratyvfu, orpnhfr gurl pna'g or obgurerq... fb hayrff guvf cbfg trgf zbqqrq hc V'ir rssrpgviryl ceriragrq zbfg crbcyr sebz nethvat zl cbvag. DRQ. ;)
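For anyone without a ROT-13 tool handy, the cipher the parent used is trivially reversible; a minimal sketch:

```python
def rot13(text: str) -> str:
    # Shift each letter 13 places; applying it twice returns the original.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

print(rot13("Rknpgyl."))  # -> Exactly.
```

Which rather proves the poster's point: the "key" here is no secret at all, just a small bother.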
Re:Obscurity isn't bad, just a waste of time. (Score:5)
This isn't about making your targets impossible to crack. It's about making them harder to crack than the guy next to you.
The slowest-animal-in-the-herd principle.
Let's say you and I both run service A when a remote exploit is discovered. Bob the happy script kiddie gets his scanner and starts looking for said service on its default port. On your box he finds it fine, and cracks you. On mine he doesn't see it initially, so he skips me and moves on. Say Bob _really_ wanted me: he would scan all my ports and find that service, but in the meantime I see a bunch of traffic searching my network in odd places.
The slowest animal in the herd is already down, but in the few seconds I gained I realize some clever escape, and voilà, I'm free. Obscurity doesn't make you uncrackable, but it gives you an edge.
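The gap between Bob's one-probe check and a full sweep can be sketched with a plain TCP connect test (the host address and port range here are illustrative, not anyone's real setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    # connect_ex returns 0 when the TCP handshake succeeds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Default-port check: one quick, quiet probe.
#   port_open("192.0.2.1", 80)
# Finding a moved service: up to 65535 probes, slow and very loud in logs.
#   any(port_open("192.0.2.1", p) for p in range(1, 65536))
```

Same outcome if Bob persists, but the sweep costs him time and shows up all over the defender's logs.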
Headline: Expert demonstrates ignorance (Score:4)
It seems to me that most people pass through three stages of understanding:
Now, why is using obscurity bad? I'll give several reasons.
Re:Obscurity isn't bad, just a waste of time. (Score:2)
Because it's very hard to reach that ideal; that's why you hide it. As said in the article, it makes it harder for the script kiddies to find. Now for the non-script-kiddies...
Re:I have another concept that's better (Score:2)
Well, your fingers weave quick minarets; Speak in secret alphabets;
I have another concept that's better (Score:5)
Well, your fingers weave quick minarets; Speak in secret alphabets;
Re:Security derives from biometrics. (Score:2)
How about we just go with Security Through IP? (Score:5)
If you log in and are not authorized by Adobe, you will be in violation of DMCA statutes and join Dmitri Sklyarov in jail.
Have a nice day.
Login:
--
All your .sig are belong to us!
Bruce Schneier agrees (Score:3)
He gave the example of:
If you have a letter you don't want anybody to read, and you put it into a safe, and hide it in New York, that's not security; that's obscurity. You're hoping that they (your enemies) won't find the safe.
However, if you give crackers the safe and a diagram of the lock, and they still can't figure out a way to lockpick it, then that's security.
However, hiding the safe in New York in addition to securing it doesn't hurt at all. It just takes them longer to find it.
/. is the wrong audience for this article... (Score:3)
Particularly entertaining in this article was the following: Are there really people out there who would choose not to obscure something like the name of a server because they thought it would harm their security? You can't be serious...
Like I said.
--CTH
--
OS advocates always forget (Score:5)
Trolls throughout history:
I'm talking about rootkits, not exploits (Score:5)
So a big Apache exploit comes out, and a half hour later there's a patch (thanks to open source) and you apply it and recompile. Then you look at the Apache log files and don't see any unusual activity. So you're safe, right? Wrong. Your system was compromised and a rootkit was installed. It cleaned up all the logs. It added a backdoor to getty. It modified your MD5 checksum verification. It modified your rpm so that it points to the hacker's server, no matter what you say. It modified gcc to include a backdoor in any program that requires authentication, and to insert this code into any gcc recompiles.
Do I really need to prove that it's easier to change:
if (checkPassword(password)) {goCrazy();}
to
if (checkPassword(password) || !strcmp(password, "k00ldud3")) {goCrazy();}
than it is to use a disassembler on an executable with no symbols to figure out what the hell is going on and insert a back door? Not only does this require a much higher level of expertise, it also requires significantly more time for the person who can do it.
Re:Let's face it, CmdrTaco, (Score:2)
Show me a perfect system that cannot be broken into.
The real secure systems can let people know all that they want about them, regardless of their intentions. It really won't matter in the end, from a security standpoint, because well-designed security frameworks are undefeatable through principle, not through secrecy. To say otherwise would be just like Micro$oft denying that a security hole exists while they drag their feet in releasing a patch for said vulnerability.
I think you miss the point. There is no such thing as 100% secure. That is a fact and you have to get used to it. A bug might get discovered tomorrow that might cause your system to become vulnerable. Or, a previously unknown unchecked buffer could be found by crackers before it is found by security personnel. You have to make these assumptions if you want any real security at all.
By limiting the amount of available information, we can force the attacker to expend more time (and make more entries into our logs) in their attack. This can discourage many script kiddies who do not know enough to design their own exploits, and it can also force a knowledgeable attacker to (hopefully) expend a lot more effort on a (hopefully) unsuccessful attack.
Would you give all potential attackers a complete list of your computers, all the software they run, and schematics for your internal network? Would you send them your ruleset on your firewall? Of course not! And keeping this information obscure is security through obscurity.
As the O'Reilly book "Building Internet Firewalls" points out, though, running services on non-standard ports provides little obscurity and really inconveniences your users, but preventing important information from leaking out to the internet is what obscurity is all about.
Sig: Tell all your friends NOT to download the Advanced Ebook Processor:
Re:Obscurity isn't bad, just a waste of time. (Score:2)
I still would not use that as my example of security through obscurity. It provides little obscurity because the information is still available. I agree that it might be helpful in some limited instances, but there are better examples (like MTA trimming, DNS obscurity, etc) which are more generally useful.
Once one guy found your service, he could tell others. That way, once you saw a portscan, you would have to change ports AND let all your users know... Again, not the best strategy.
Re:Let's face it, CmdrTaco, (Score:2)
Obscurity is just a way of shifting the balance to give you more functionality at the same security level, or more often, more security at the same functionality level.
Re:Let's face it, CmdrTaco, (Score:2)
My firewalls have generally broken active-mode FTP and authoritative DNS lookups. Of course stateful inspection would help somewhat, but my point is that these are disabled because allowing them in with a rule-based setup would allow too much potential for security compromise.
So yes, I have tested this. I could use netfilter to add usability but it is plenty usable and I am not sure whether I would get additional security. The only completely secure system is unplugged, in a safe, without network connection. Even then, someone could break in...
Pretty standard, really ;) (Score:5)
Furthermore, running services on non-standard ports provides no real obscurity but makes administration substantially harder, so it is in no way good policy.
Good obscurity tactics involve hiding internal systems from your external DNS servers, trying to avoid letting people see what OS you are using, etc. Forcing people to do guesswork is good, but it should not inconvenience your users (that should be left for the more fundamental security measures).
Attackers should be forced to make their attack based on limited information. By limiting the information they have, you can limit the effectiveness of the attacks and maximize their effort. But it should not be used as a standalone strategy.
is this even security through obscurity? (Score:3)
Security through obscurity is usually discussed in terms of hiding encryption algorithms and security protocols, which is a totally separate issue. Read this article [wideopen.com] by Simson Garfinkel on the subject, for instance.
So to me the article seemed like a giant non sequitur.
Tim
Re:I have another concept that's better (Score:3)
Since you can't be 100% secure, security is all about being a very small target. Running a BeOS or Commodore 64 web server would definitely do that for you.
I took a Verisign course once... (Score:3)
The bulk of the time in class, however, was spent discussing how to make everything else as obscure as possible.
We talked about things like firewall masking, where firewall software automatically stripped the software version identification strings out of ping and logon responses. Stuff like that.
The ethos the Verisign teacher had was that there was only so much you could do to protect a 'celebrity' system, even if you went so far as to implement a client-certificate setup. The best way to secure a system was to make it difficult to find as well as to crack.
Two kinds of "Security through obscurity" (Score:5)
First there is the bad kind: you're a software vendor and you hope that nobody will notice that your "secure" software uses ROT-13 instead of RSA. You are protected by the DMCA anyway (remember: crack ROT-13, go to jail).
Then there is the second, good kind: if I'm an admin, there is no reason I shouldn't strip outgoing email of any headers that may reveal the structure of my internal network (some people ARE arguing that you shouldn't do that).
I don't really like this "security through obscurity" thingie. Let's distinguish between releasing full specs and not telling anyone more than they need to know. This is the way to build a really secure network.
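As a rough illustration of the good kind: in practice a real MTA would strip these headers in its own config, but the idea can be sketched with the stdlib (the header names and message below are my own illustrative assumptions):

```python
from email import message_from_string

# Headers that commonly leak internal hostnames and addresses.
REVEALING_HEADERS = ("Received", "X-Originating-IP")

def strip_internal_headers(raw_message: str) -> str:
    msg = message_from_string(raw_message)
    for name in REVEALING_HEADERS:
        del msg[name]  # del removes every occurrence of that header
    return msg.as_string()

raw = (
    "Received: from mailhost.internal.example (10.0.0.5)\n"
    "From: alice@example.com\n"
    "Subject: hello\n"
    "\n"
    "body text\n"
)
print(strip_internal_headers(raw))
```

The outside world still gets a deliverable message; it just learns nothing about the hosts it passed through internally.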
Re:Some History (Score:2)
No, the security of the password file has almost nothing to do with the security of the password algorithm. I am not aware of anyone breaking the DES encryption algorithm to break passwords.
The vulnerability comes from the ability to obtain passwords by brute force examination of the password space - a dictionary attack. Using AES would not make a difference because the issue is the number of possible passwords. And adding a random letter or number in the middle of the password 'pass3word' does not change the search difficulty as much as the naive sysops think.
In fact the stupid constraints (case sensitive, must have a symbol) used to patch up weak password infrastructures should really count as security through obfuscation at this point. The added security is measurable but woefully insufficient, and really inconvenient for the user - why can't the password algorithm check for both positions of CapsLock?
There are much better security measures than shadow passwords, but at this stage relying on the encryption alone is nowhere near sufficient.
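The gap the parent describes can be put in numbers; the dictionary size and password length below are illustrative assumptions, not measured figures:

```python
import math

DICTIONARY_WORDS = 50_000  # rough size of a cracking dictionary

def dict_word_plus_digit() -> int:
    # A dictionary word with one digit inserted somewhere, like 'pass3word':
    # word choices * 10 digits * ~9 insertion positions.
    return DICTIONARY_WORDS * 10 * 9

def random_password(length: int = 9, alphabet: int = 62) -> int:
    # Truly random over upper case, lower case, and digits.
    return alphabet ** length

print(f"{math.log2(dict_word_plus_digit()):.0f} bits")  # ~22 bits
print(f"{math.log2(random_password()):.0f} bits")       # ~54 bits
```

The constrained dictionary password gains a couple of bits over the bare word; the random one is a different universe, which is the point about naive sysop rules.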
Some History (Score:3)
Security Through Obscurity was one of the slogans thrown back at the complacent computer vendors and sysadmins. As with any slogan though the medium does not allow for a sophisticated or subtle message.
About ten years ago the UNIX community started to wake up to security issues. Previously the UNIX approach had been pretty rudimentary; most UNIX machines were departmental or personal workstations, and security was a nice-to-have, but relatively few people depended on it actually working.
Then the Internet grew beyond academia and suddenly security started to matter an awful lot.
At the same time traditional UNIX security nostrums started to be re-examined, like the lack of shadow passwords. At the time, the idea of protecting a password file with a password was considered by most UNIX advocates to represent the evil of security through obscurity. The UNIX one-way encrypted password file was held up as the exemplar; to want to protect it was to fall foul of the sin of 'security through obscurity', and obviously anyone like myself who advocated the practice had to be a fool. Today anyone who doesn't use shadow passwords is considered foolish.
What it boils down to is that slogans have a tendency to turn into dogma. Then someone comes along and points out that the real world is not as simple as the slogan.
Re:The fundamental difference: (Score:2)
Which is fine, but it's also why safety-critical or high-availability software costs 10-100x what consumer software does. One of its features is that there's no in-the-wild beta testing, and no tolerance for released bugs. You've moved that validation into the development itself. (Everyone stop looking so amazed; software development by people who want to make money at it has always been about how many bugs you can ignore and still get market share. Microsoft's monopoly means it can profit from selling essentially non-running programs. Compare also the low-end, pretty-package, sounded-like-a-cool-product shitware you have ten of under your desk and don't see any value in returning because you're tired of standing on principle in the 30-minute line at Fry's.)
Shrouding the details of a network configuration implies that either you have adequately tested the network against boundary conditions, load, and pathological states, or you're willing to suffer failures and have your ass handed to you by your users who wish you'd just let them look at it because they know they would have seen this coming if they had.
--Blair
This is an interesting topic (Score:2)
The key (pun intended) is to remember that no single tool will provide you security. Obscurity is the most trivial example. PGP won't provide you security either if you have a crummy password. So, you have to remember all the factors that are inhibiting security.
We have seen examples in the past where closed source software has proven itself insecure. I remember the RNG weakness in Netscape. But even open source software can't totally save you. Open source is no more a cure-all than obscurity is an absolute evil. Designing good algorithms, educating your users not to circumvent "annoying" security measures, and the like remain integral to the problem of security.
Re:reply to .sig (off topic) (Score:2)
Re:Let's face it, CmdrTaco, (Score:2)
I dunno, no one has managed to claim the cash prize yet. Also, you aren't just relying on one man's coding skills, the programs are open source, and therefore many people can review the code for security.
So you'd best secure it, buddy. This one is so astoundingly dense I don't know where to begin.
Wrong again. The fact that there are millions of high bandwidth connections on much less secure systems (like all of those home broadband users who don't know that an Internet exists outside of email and the web) make it much less of a problem to me than if I was securing critical data. You are less likely to be carjacked if you park in a large parking lot.
Well, o.k. (Score:2)
Basically he's saying that it's o.k. to hide your valuables as long as you lock the door too. True enough, but I'm not sure it required x pages of commentary to explain.
Our version of "Security through obscurity" involves keeping our internal network separate from our web server. People can hack our webserver all they want to -- it's not even in the same physical building or on the same network (physically or logically) as our internal LAN. Maybe they'll deface our website, but we can restore that quickly enough and it's mostly brochureware anyhow so no big whoop.
We're such a "low interest" target that most real hackers wouldn't waste their time with us. Nobody's going to boast about hacking our site. The script kiddies can't do any real damage -- as above the website is easily restored and of no real consequence to our daily operations.
Naturally we have the usual complement of firewalls, passwords, encryption and IDS to keep our important stuff safe.
Security through obscurity is fine; as long as you're not relying only upon the obscurity to provide your security. I just summed up his entire article in one sentence.
-Coach-
Just like Sexual Education. (Score:2)
Rudimentary Treatise on the Construction of Locks (Score:5)
A commercial, and in some respects a social, doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and already know much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lockpicking long before locksmiths discussed it among themselves, as they have lately done. If a lock -- let it have been made in whatever country, or by whatever maker -- is not so inviolable as it has hitherto been deemed to be, surely it is in the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged, that an acquaintance with real facts will, in the end, be better for all parties.
Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear -- milkmen knew all about it before, whether they practiced it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.
Re:Security derives from biometrics. (Score:2)
Biometrics can help - think of them as a very long password that can't be forgotten. But they can be stolen.
--
A Rule of Security (Score:2)
One of the things I've noticed in general terms is: the more common a security device is, the more people know how to defeat it. This is true of any form of security, both cyber and physical.
There are many examples of this in the real world. Several methods have been developed to open Masterlock brand padlocks, not because they're inferior but because they're popular. The same goes for standard deadbolts: they can be opened in seconds with a vibration pick. And in the electronic world it's the most popular mail servers and web servers that get hacked. The point is that any thief is going to research the most common methods of protection and ways around them.
This validates the idea of security through obscurity in my mind. Once you've got some good, fundamental, popular security set up, throw something different into the mix. Something no one is going to expect, something that is not necessarily as effective as it is unique.
bicycle (Score:2)
Most security is like when I was younger and owned a bicycle. I would lock it up so the casual ding-dong wouldn't walk off with it. Most people didn't walk around with bolt cutters to rip off my bike. Why? Most people are honest. Now there are some who, IF the opportunity arose, would swipe my bike (i.e. I left it unlocked).
How does this relate? It's simple. Most ding-dongs (aka script kiddies) are just snorking someone else's keys and seeing if the key fits the lock. If it does, they've got a bike (or data). Now the ones that scare me are the ones that take a piece of metal, twist it around, pick the lock, then lock it back up and walk off and wait for a better time... The lock is like the obscurity: it adds one more thing MOST people don't bother doing (i.e. bringing along a bolt cutter). It's the ones that do bother that you should worry about.
Depends what kind of obscurity (Score:2)
This depends on the kind of security you are talking about. People that decry security through obscurity are usually talking about trusting an obscure algorithm or method. They recommend that you instead act as though everyone knows what algorithms and methods you use, and that the only thing you trust to be obscure is your password (or your public key, or whatever).
The whole point of this is that most real world systems need to be distributed to a bunch of people, not all of whom can be trusted. Especially if you are developing a security system that's intended to be released to the public at large (a commercial software product, or open source system), you should expect that eventually the script kiddies will get their hands on the algorithm that has been distributed. And then once people figure out the algorithm, through social engineering or whatever, you're screwed, and everyone's security disappears. At this point, getting the security back means getting a whole new algorithm together, writing new code, and more or less redoing everything from scratch.
If you have a security system whereby the security only depends on the password or the key, you don't have to worry about social engineering so much. If someone gets their hands on the password, it's not such a big deal to come up with a new one. In fact, you can distribute your security system far and wide, and everyone can use different passwords or keys with almost 0 extra effort. If anyone's security is breached, they can come up with a new password or key with almost zero extra effort.
That's the big advantage of ditching security through obscurity. 1) If you do have a security breach, it's easy to fix, and 2) Lots and lots of people can use the same algorithm (which was pro'lly hard to come up with) by doing a minimal amount of work (coming up with a password or key), and they won't all be able to breach each other's security.
redefining the term? (Score:3)
Lulled into a false sense of security. (Score:4)
It is the thinking that your server is secure and untouchable that is the problem.