When "Security Through Obscurity" Isn't So Bad 152

Erik writes "In this article, Jay Beale (Bastille Linux Project, Mandrakesoft) explains why Security Through Obscurity is actually a really good idea if you do it right. A good read for sysadmins." Agreed... a lot of really interesting points, well written and entertaining. For starters, you can't rely on obscurity alone to keep you safe. But it doesn't hurt to obscure things sometimes, just to make it tougher for your attacker.
  • by Anonymous Coward
    If you can post network diagrams and versions of services and servers and still feel comfortable with your security model, that's better.
  • by Anonymous Coward
    The big point about security through obscurity being bad is not that hiding things is bad, it's that hiding mechanisms is bad. Mr. Beale fails to shine adequate light on this in his article. It's clear that he understands this, and he hints at it, but it deserves more prominence. The problem with obscuring mechanisms is that if you keep your mechanism to yourself, you won't find all the ways it can break, so it won't work well.

    Examples of this abound in cryptography. One example is the Skipjack algorithm. It was developed in secret by the NSA, probably the premier collection of cryptographers in the world. It was released in tamper-resistant hardware boxes, so as to preserve the mechanism's security. However, it was quickly reverse-engineered and found to have huge weaknesses.

    The same goes for any other computer system. If you use software (mechanisms) that was developed in secret, its flaws will remain hidden from the creators and go unfixed. Someone will figure them out, and those bugs will be exploited. It is preferable to use software that is subject to the harsh light of public scrutiny, so that its bugs will be found and fixed. This applies very well to cryptography, which changes rarely, and less well to things that change more frequently. Reasonable people may differ about what rate of change makes it preferable to have secret mechanisms, but it's not unreasonable to say that it's better to have your operating system and server software available for general scrutiny. Folks like Mr. Bartle and MeowMeow might disagree.

    But the point is this: obscuring your mechanisms is bad; obscuring your data and your use of the mechanisms is good. As Mr. Bartle rightly points out, most security these days rests on the obscurity of passwords. He does not, however, give a general framework for what to obscure and what to reveal.

  • by Anonymous Coward on Monday July 23, 2001 @01:22PM (#66740)
    The essential difference between obscurity and secrecy is that something which is obscure can be learned by research or figured out, while something secret can't be figured out from available data.

    For example, if everyone's password was their spouse's or pet's name, that would be a kind of security through obscurity. This is obscure information to an outsider, but is vulnerable to someone who does their research. Whereas a true-random password would need to be stolen or guessed by brute force.

    There isn't always a sharp line between the two, but it is a useful distinction.

    Things like the encryption algorithm used are almost always merely obscure (the general *method* of security is always merely obscure, though sometimes an exact algorithm is truly secret), while passwords are ideally secret, but often merely obscure in practice.
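
    To put rough numbers on that distinction, here's a back-of-the-envelope comparison in Python; the size of the "researchable pet name" guess list is a made-up assumption for illustration:

      import math

      # Assumption: an attacker who does their research can narrow a
      # spouse/pet-name password down to a few thousand candidates.
      obscure_guesses = 10_000
      # A truly random 8-character password over [A-Za-z0-9].
      secret_guesses = 62 ** 8

      print(f"obscure: ~{math.log2(obscure_guesses):.1f} bits")  # ~13.3 bits
      print(f"secret:  ~{math.log2(secret_guesses):.1f} bits")   # ~47.6 bits

    The first is well within "do their research" territory; the second can only be stolen or brute-forced, which is the poster's point.
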
  • WebStar on Mac OS? ;-)


  • by defile ( 1059 ) on Monday July 23, 2001 @12:53PM (#66742) Homepage Journal

    When does something stop being 'security through obscurity'? Depending on how you look at it, all forms of security (or at least most of the ones employed over the internet) are based on picking access tokens that are really hard to guess.

    Running a service with no authentication on a random port isn't great security, but in principle, it's the same kind of security as running on a well known port and requiring a unique access identifier and passcode. It's just harder to guess, but still fundamentally the same.

    Real security would be achieved through schemes where all of the knowledge in the world won't gain you privileged access.

  • by David Price ( 1200 ) on Monday July 23, 2001 @02:49PM (#66743)
    Why is it okay for Joe Sysadmin to obscure details of his network configuration, but it's better for software writers (and particularly cryptologists) to release the details of their work?

    The answer is simple: ease of review. Obscurity is meant to put stumbling blocks in the path of those who desire to review the system, for whatever motive - be it academic curiosity, security assurance, or even to learn how to penetrate into it. The hidden web server trick described in the article, closed-source security software, and proprietary crypto are all examples of techniques that are meant to obscure and thus make review difficult.

    The question of whether to obscure, then, reduces to whether you'd like the system you're building to be reviewed. There are several very bad reasons that could motivate you to hinder review of your system: attempting to hide security flaws you know about, or suspect are there, is one of the bigger ones.

    But the decision to impede review can be perfectly reasonable - depending on who the reviewers are likely to be. If you know, for instance, that your community of reviewers includes honest, skilled people who want to use your product and who will alert you to problems that they find, then that's a very big reason not to obscure anything. This is what motivates Linus, the Apache group, the GnuPG folks, and everyone else out there tirelessly trying to produce systems that function in the most security-hostile of environments. These folks have literally thousands to millions of users, almost all of whom are honest, and many of whom are skilled enough to discover flaws in the system.

    Joe Sysadmin doesn't have that kind of community. His users are very likely incapable of discovering security flaws, or if they are, unlikely to share the information they find with Joe. The majority of people who might be interested in reviewing Joe's network are malicious and intent upon using any information they find to the detriment of Joe and his users.

    In this case, the decision to put up walls of obscurity is as much of a no-brainer as the decision to use an open-source web server. Joe has assessed his community of potential reviewers and has determined that, on the whole, he'd rather not have that set of people learning things about his network. He will certainly use products that have proven themselves under strict review, but he is under no obligation to describe to anyone how he's configured his network. In his situation, doing so would only undermine his security.

  • The parent post wasn't talking about computer systems, it was talking about crypto systems. And there are well known perfect algorithms -- like one-time pads -- and well known exponentially difficult algorithms -- the proven public-key algorithms. (e.g. DES, AES)

    1. DES and AES are symmetric key systems, not public key.
    2. When people say that breaking cipher X is exponentially difficult, it's really shorthand for "there is no known algorithmic attack against the cipher that is significantly faster than checking all the permutations of keys by brute force". It may be that brute force is optimal, but unless you've actually done a proof, this is only a conjecture. Look at the improvements to DES that the NSA suggested to IBM: those changes fortified DES against cryptanalysis that wouldn't be public knowledge for almost another two decades, yet most people had no basis for making that judgement, just as we can only guess that AES is secure against all possible future attacks. This doesn't mean that the algorithm doesn't matter (clearly DES is stronger than rot-13), but you shouldn't assume that an algorithm alone brings security.

  • Actually, it's quite possible to write a program that makes automatic exploits for various sorts of crashes. It's somewhat easier to find a security hole in source code (because you can figure out where it does things which are somewhat dubious), but if you discover some behavior that crashes a program, it's easier to write an exploit for it than actually find the bug in the source.

    Besides, unless a program really hasn't been looked at before or the bug is of a newly-popular class (e.g., format string vulnerabilities), someone has probably already found all of the obvious bugs. The more likely case is really someone testing a program with a lot of different weird situations, which is just as easy with a binary.
  • How do you tell which internal machine spouted the mail?

    That is what my email logs are for. All the interesting header bits are removed by the final outgoing mail server after it has been logged. Pretty simple.

  • Or, for that matter, for your co-workers or whoever inherits your systems. Obscurity can improve security, but at a dreadful cost: maintainability.

    Within certain contexts that's true. But in some cases, obscurity doesn't sacrifice maintainability much at all. The example used in the article was changing the port number a certain intranet webserver is run on. This type of obscurity is fairly benign.

    My view is that obscurity should be the last layer applied to any security system. And as you point out, if the cost of obscurity is too high in terms of maintainability, maybe it's not such a good idea. A system ought to be secure without any obscurity, but having a "sane obscurity layer" sours the pot for the kiddies.

    Jason.

  • by Uruk ( 4907 )
    I thought this type of stuff was pretty clear. I mean, "Security through Obscurity is bad" as a statement certainly doesn't mean that giving information out about your system is a good thing.

    The most important point of the article is that security through obscurity is terrible if and only if it's used as the only layer of security in the system. As another layer, it can't hurt at all.

    I think it's like wearing a bullet proof vest that isn't rated for large caliber weapons. Sure, it's not going to help you out if some loser with a .45 comes along, but it may help you in other situations, so it's not totally worthless.

  • that it's a lot easier to write a security fix when you have the source code.
  • Unless your passwords are out in the open, you're using some sort of security through obscurity as well.

    It will be interesting to see what kind of authorization will exist if mind-probing would ever be an easy-to-use and reliable technique (if possible at all).

    Oh, and probably a zillion other times where you've done something (not necessarily computing related) and thought it was safe just because the odds people would find out were negligible.

  • ...known that djb writes his own libraries for his programs.

    Neat! So qmail doesn't use libc? Nice hack. Too bad you're relying on one guy's coding skills to keep it all secure.

    ...I don't need schmucks using my wireless T-1...

    So you'd best secure it, buddy. This one is so astoundingly dense I don't know where to begin.

    (Insert your poorly-reasoned, hate-filled "I'm better than all of you!" response here)

  • Email headers are generated for a good reason: mail routing depends on knowing the path from whence mail came. Otherwise, where do mail bounces go? And what happens when errant or spam mail originates from within an internal network? How do you tell which internal machine spouted the mail?

    As another poster said, any attempt at obscurity (especially in this case) results in an exponentially greater burden of maintenance.
  • by ergo98 ( 9391 ) on Monday July 23, 2001 @03:19PM (#66753) Homepage Journal

    Your claim (which contradicts the entire point of the article, which is that a little obscurity on top of a secure-as-possible system can't hurt) doesn't circumvent his argument whatsoever, and I really don't see why there is such an aversion to the idea of obscurity on here.

    His point, which is very correct, is that (using the website as the example) someone has to be very "noisy" to actively seek out non-standard ports/services: if there is a hacker targeting your system, (s)he will leave a lot more fingerprints and evidence doing a cascade scan through your servers/ports than if they simply waltzed in and connected to private.company.com. The point is that each additional piece of hidden information is one more hurdle for the prospective hacker to jump over: it's a shitload harder to hack an unknown webserver on an unknown machine through an unknown firewall than it is to hack IIS 5.0 on Windows 2000 SP 1 pre-HOTFIX XYZ running on port 80 at intranet.company.com (hint: in the first case there will be a swath of evidence from the various tactics used to determine what the OS/webserver/firewall restraints are. In the second, Jimmy pulls up black hat exploit #1027-D [the one that the public doesn't know about yet] and applies it against the server. Instantly the server is ownzed and there are zero tracks, because to the other systems everything looks fine. Jimmy puts in his keystroke grabber, cleans up his tracks, and disappears into the night).

    The concept of enhanced security through obscurity is absolutely as clear as day. Pretend that I encrypt a piece of information to send to you: of course I'm going to pick a very secure algorithm, so let's say I go with Twofish. To the best of my knowledge it is a super secure algorithm and there is zero chance that it could ever be broken, but pretend that somewhere out there someone knows how to break Twofish with just a month of computer time: do you think they'd waste the month if they were unsure what algorithm you used? Pretend that they see an encrypted file called stuff.enc, versus bank_numbers.twofish: which one gives them a head start and motivates them? The standard (moronic) reply to this is "Well, use a secure algorithm/software/OS/etc.!", but that is a foolish statement: many algorithms, programs, and OSes have fallen in the past after years of people super-duper-assuring you that there is absolutely nothing wrong. So unless you are absolutely sure yourself about every piece of software and every algorithm that you use, a little obscurity can't hurt.
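
    That "noise" is also detectable. As a hedged illustration (not from the article), here's a minimal Python sketch of canary ports: bind a few unused ports and log any connection, since legitimate users have no reason to touch them. The port numbers are arbitrary assumptions:

      import socket, threading, datetime

      CANARY_PORTS = [81, 2001, 8081, 31337]   # assumed to be otherwise unused

      def watch(port):
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(("0.0.0.0", port))
          s.listen(5)
          while True:
              conn, addr = s.accept()   # any hit here is almost certainly a scan
              print(f"{datetime.datetime.now()} canary port {port} touched by {addr[0]}")
              conn.close()

      for p in CANARY_PORTS:
          threading.Thread(target=watch, args=(p,), daemon=True).start()
      threading.Event().wait()          # keep the main thread alive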

  • by crovira ( 10242 ) on Monday July 23, 2001 @02:25PM (#66754) Homepage
    And from a sufficient multi-dimensional sample. Nothing else works.

    Try running a different kind of Turing test. Not one where you're trying to prove how intelligent or self-aware or witty or urbane you are but just who you really are.

    That test immediately requires a web of trust (someone or something we can trust must be able to vouch for you) and a web of deceit (that same someone or something must be able to recognize you in such a way that we can all trust the process).

    The current authentication schemes usually fail by having a web of deceit that's too broadly woven. The senses we have provided to our systems are ridiculously inadequate. For now.

    Let's create a system which can authenticate that you are you. It has to know who you are by virtue of your having been presented to it once under trustworthy circumstances.

    What you know is useless.

    It should be able to authenticate your cadaver. So much for all the password schemes in the world. Period.

    It should be able to identify you AS a cadaver. That means that the biometric data must include measurements of things like temperature, heart rate, eye movement, involuntary tremors and other things which correlate to identify you as you.

    Listening to a "Rich Little"-caliber mimic on one of his good days will fool the blind but the disguise is blown the moment you open your eyes. Therefore the bio-metric data must be multi-dimensional.

    Listening to you say a common phrase as you stand in front of it (actually you'll be potentially surrounded by its sense organs,) it should be able to identify you from anyone else on the planet and tell not only if you're you, but if you're angry, in distress or just inebriated.

    And if it doesn't recognize you, you can go suck an egg or spend a night waiting for your attorney.

    Until then security is mere mental masturbation.
  • Actually, those aren't examples of obscurity but are actually real security - they're just like having a password (a possibly-easy-to-guess one ("speak, friend, and enter"), but still a password). In both cases the schematics of the door mechanisms, locks, etc. could have been laid before a thief, but they still couldn't have gained entrance without either a brute force attack or else knowing the shared secret - the password (I'm assuming here that neither type of door really had any mechanical defects, etc.)

    Security through obscurity was more like Smaug the Dragon - if anyone could see the schematic of all of his armor, they would have immediately identified the weak point.

  • If you take your computer security seriously, and actually have your tripwire databases on read-only media, then your standard intrusion recovery method would find all infected binaries, including the compiler, edited configuration files, infected kernel binaries, stealth modules (adore), etc etc.

    Of course, if you just mess around and after a break-in assume that patching the backdoor is enough to clean your system... hmm, then I guess you're asking for problems.

    A bunch of rootkit backdoors don't even use existing binaries, they just run their own login service, hidden by a special kernel module.

    Other tip besides tripwire: www.snort.org

  • Not to mention people relocating services to ports above 1024: if the daemon can somehow be stopped/crashed after a compromise, the attacker doesn't even need root to bind to the port and 'emulate' the service, picking up a lot of information from people/other systems that thought they were connecting to the legitimate service...

    Security through obscurity is quickly a victim of security mistakes (plus the _false_ sense of security it has given the implementer).

  • And the great thing about it is that when they find the safe, they can tinker with it in that tucked-away little place where you hid it, where nobody is looking anyway, so they won't get caught trying to break the safe.

    Take longer to find it? Often not. You know how they would find the safe? They simply follow you when you go out to check on it. Bye-bye security; it helps nothing.

    Often obscurity is not what it seems: the obscurer will think it's obscure, but the safecracker will often quickly find a way to figure it out.
  • That's why passwords are a flawed system and we need to get biometrics into wider acceptance.
    Oh great, so instead of shoulder surfing they just steal my hand/eye/dna?
  • Naq V'z jvyyvat gb org gung lbh'er cerggl hahfhny: zbfg crbcyr jvyy unir fxvccrq cnfg guvf gb ernq gur arkg cbfg va Ratyvfu, orpnhfr gurl pna'g or obgurerq... fb hayrff guvf cbfg trgf zbqqrq hc V'ir rssrpgviryl ceriragrq zbfg crbcyr sebz nethvat zl cbvag. DRQ. ;)
    Hagvy fbzr fznegnff yvxr zr gnxrf gur gvzr gb eha lbhe cbfg guebhtu ebg13 juvpu pbzrf jvgu gur ofqtnzrf cnpxntr naq cbfgf gur cynvagrkg irefvba. Gura ntnva, gung erdhverf fbzrbar npghnyyl gnxvat gur gvzr gb qb vg. ;=)
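
    For anyone who'd rather not install the bsdgames package, the same un-obscuring is a one-liner with Python's standard library:

      import codecs
      print(codecs.encode("Gur dhvpx oebja sbk", "rot13"))   # -> The quick brown fox

    Which is rather the point: rot13 is pure obscurity, and anyone who knows the mechanism decodes it for free.
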
  • One feature of secrecy-based systems is that it is pretty easy to change the secret, if necessary. Obscurity-based systems often make it impossible to change the secret.
  • If you turn up a new shirt maker (or do a run yourself), I'd like to know. Heck, I'd be curious to know who has the copyright on the graphic now.

  • I couldn't find the shirt, but have you seen the graphic?

    http://www.eff.org/Misc/Graphics/nsa_1984.gif

    There's no mention of copyright. Perhaps contact the EFF and see if they have it and will let you use it? At the very least, I suspect they could set you up with someone. At that point, you could print a run of them yourself.

  • If I had any modpoints, I would mod you into oblivion. How dare you call the CBM64 unpopular!

  • Offense almost always has the advantage. They get to decide where, when, and how.
    That's why lack of source hurts the defense more than the offense. The offense just needs to find something that causes the target to misbehave and trace it at the machine-state level. Should be pretty good pickings, especially if it's due to some subtle compiler or library error.
  • ...And it circumvents his whole argument...

    Using obscurity as a security method does hurt your security, because it provides a false sense of security.

    In his example (hiding the port your IIS server is running on), it may protect you from a script kiddie. By hiding the port number, you might think "oh, it's unlikely that someone will find this" - which is a very bad thing, because if it's there, someone will.

    Obscurity is ALWAYS a bad thing, because it leads to a false sense of security.

    One other nitpick is that keeping a password secret isn't obscurity - it's a method of authentication. (I am me because I know my password.) Obscurity is hiding the existence of something.

  • Reading about the idea for slowing down the port scanners gave me another idea. I'm not positive how port scanners work, and I don't plan to do any extensive research to find out right now, but I know that to function they typically make a connection to every port they want to probe and see if they can complete a connection. Those scanners that are trying to be stealthy might not complete the connection after this point, but others might continue to at least receive data about what server is running on that port. And this gives me an idea.

    Set up a LOT of servers on random unused ports on every system that will answer any incoming connection and print out a LOT of data VERY VERY slowly, sending one character at a time, one byte per packet, with lots of delay in between. Make the delays short enough that the port scanner doesn't time out and give up, but will sit there and happily lap up the characters as they come through one at a time over a period of hours. This way, if a non-threaded portscanner were to stumble onto one of these machines, it would essentially be taken out of operation until the operator discovered the problem. Granted, this trick could be overcome with software on the portscanner side, but it might make the attacks a lot less fruitful for a while. (A rough sketch of such a tarpit follows below.)

    -Restil
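
    A minimal Python sketch of that tarpit idea, purely illustrative: the ports, banner, and timing are arbitrary assumptions, and as the poster concedes, a threaded or timeout-happy scanner will shrug it off.

      import socket, threading, time

      def tarpit(port, delay=5.0):
          # Accept connections and dribble a fake banner one byte at a time.
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          s.bind(("0.0.0.0", port))
          s.listen(5)
          while True:
              conn, _ = s.accept()
              try:
                  for byte in b"220 totally-real-service ready\r\n" * 1000:
                      conn.send(bytes([byte]))   # one byte per packet...
                      time.sleep(delay)          # ...every few seconds
              except OSError:
                  pass                           # client finally gave up
              finally:
                  conn.close()

      for port in (1234, 4321, 9999):            # random unused ports (assumption)
          threading.Thread(target=tarpit, args=(port,), daemon=True).start()
      threading.Event().wait()

    Handling one connection at a time per port is plenty for the non-threaded scanners the poster has in mind.
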
  • I don't get it. The main point of the article was that by having your web server listen on a non-standard port, an attacker would have to scan a large number of ports to find out where (or if) your webserver was running.

    Huh? Ok, maybe for an ultra-secure private site where you can tip everyone off who needs to know the "secret" port address to append to the url, but how does that help me? If I want people to get to my site, it had better be running on port 80.

    The underlying message seems to be that once you have changed the port your webserver listens on, you should install portscan detectors, including Bastille Linux developer Mike Rash's upcoming Port Scan Attack Detector. In other words: THEIR new portscan detector.

    Wait, is this an ad?


  • Return the string for MacHTTP (machttp.org).

  • Of course, they also say, if you can't be fired, you can't be promoted...
  • The previous poster is correct...

    both sendmail and qmail have their security limitations imposed by what they are designed to do...

    qmail has had no security holes in that it works as advertised... if I set up qmail to allow relaying from all ip addresses, that is a form of security by obscurity... it's "secure" as long as nobody tries my ip address, but otherwise the door is wide open.

    If someone does find my open relay, it's not a fault of qmail, and thus (rightly!) is not considered a security hole in qmail.. it's my fault.

    In order to increase security then, I have to reduce functionality... I cannot access my smtp server from any ip address... it must be validated in some way, either by using SMTP auth, or by limiting the ip's which may access the server.

    But then someone could spoof the ip address of an ip I've allowed relaying from, or they could discover a password from SMTP auth...

    Either way it's not a security hole in qmail, but a limitation in security imposed by the fact that qmail is supposed to be able to do something...

    The only way to 100% secure qmail is not to run it at all! (of course I'm not picking on qmail, but on all programs in general).

    There is a difference between security, i.e. deciding what tradeoff of access and functionality is acceptable, and security holes, software bugs which cause programs not to operate as designed.

    You can plan security to reduce the impact of possible security holes, and you can code well to reduce the occurrence of security holes, but both of these are just part of an overall security policy.

    Doug
  • > Show me a perfect system...

    OK -- one not connected to the Internet.

    The parent post wasn't talking about computer systems, it was talking about crypto systems. And there are well known perfect algorithms -- like one-time pads -- and well known exponentially difficult algorithms -- the proven public-key algorithms. (e.g. DES, AES)

    > Would you give all potential attackers a complete list of your computers, all the software they run, and schematics for your internal network? Would you send them your ruleset on your firewall? Of course not! And keeping this information obscure is security through obscurity.

    Why not? If you're running OpenBSD, what do you have to fear? :)

    If you _depend on_ your firewall rulesets remaining secret for security, you have no security. Anyone with enough patience will crack your ruleset, if it's vulnerable. The article points out that obscurity only enhances security when your system can be shown secure without it -- that your firewall rulesets are correct.

    -_Quinn
  • Lol. That's got to be the best response I've heard all day! :)
  • Not at all.
    Suppose I have a system protected by a single password prompt, which uses a 40-bit key on some unusual crypto alg.

    If I tell everyone the algorithm name and keysize, it's only a matter of time before it's cracked. However, any attacker won't know you have to type "All your base are belong to us" at 3 letters/second, send a form feed, and append 'brouhaha' to your password as well as getting the encryption right. The number of hackers who have a go at your system will be drastically cut down by the obscurity in place.

  • Can you elaborate?
    (in the field of security, of course, not English in general)
  • I was thinking of doing the very same thing against the red-whatever worm that tried to take out the whitehouse.gov site -- have apache detect when the remote end requested the 'special' URL and slowly feed it /dev/zero until the remote end dropped the connection.
  • Yeah, for about five minutes until they figured it out.
    At which point, they get some of their 0wned boxes to connect to your servers, hence wasting *your* cycles.
    In the meantime, they have figured out which ports on your machine do *not* behave the same way, and hmm, you're 0wned too...
  • You may fool all the people some of the time, you can even fool some of the people all of the time; but you can't fool all of the people all of the time.
  • On the other hand, if I use a strong (not so reversible) algorithm with a nice large keyspace (40 bit encryption has over 1 trillion possibly keys)

    Umm, when did 40 bit become strong encryption? Must have been listenin' to Freeh's voice booming out from the black helicopters for too long. Given the credentials he totes around on this stuff, this has got to be a mistake.
  • Even a semi-advanced home user knows that 40 bit is weak. Many online banking and brokerage sites require 128-bit browsers. Even if they don't know how quickly the EFF broke DES, they know that 40 bit is merely "private," while 128 bit is secure.
  • That's what security is all about. If someone's determined, they will get you, even if it means taking a bulldozer to your datacenter and hauling the goodies off in a pickup truck. From the flimsiest of doors to the most impregnable security infrastructure, you build it hoping that no one comes looking for you, that if someone does, you can find them first, and that most people, upon finding you, give up and go find an easier playground.
  • by camusflage ( 65105 ) on Monday July 23, 2001 @12:48PM (#66782)
    I had thought obscurity went hand in hand with security. Stuff like trimming off your internal MTAs before sending things off to the internet and making sure that your firewall reveals no clues about what it's running. "The more you make 'em work, the easier it is to catch 'em" has always been my motto.
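
    A hedged sketch of that MTA-trimming idea using Python's standard email module; where you hook it in depends entirely on your mailer, so treat this as an illustration, not a drop-in filter:

      import email

      def scrub_outbound(raw_message, log):
          # Keep the internal routing trail in *your* logs first
          # (as another poster notes, you still want it for forensics)...
          msg = email.message_from_string(raw_message)
          for hop in msg.get_all("Received", []):
              log.write(hop + "\n")
          # ...then strip the headers that map out your internal network.
          # del removes every occurrence of a header.
          del msg["Received"]
          del msg["X-Mailer"]
          return msg.as_string()
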
  • by dboyles ( 65512 ) on Monday July 23, 2001 @02:20PM (#66783) Homepage
    Or, for that matter, for your co-workers or whoever inherits your systems. Obscurity can improve security, but at a dreadful cost: maintainability.

    That's what I like to call "job security." :)

    Kind of like not commenting code. Make it impossible to interpret by anyone but yourself and you're simply increasing your value as an employee!
  • Thanks for so thorough a response :) Of course not every point applies to every situation. My main point is simply that attackers don't need to invest many resources to attack computer systems, especially when they have a tonne of free time and are supported by others (think teenagers). Defenders have large systems to manage and must invest a lot in terms of time and money to manage them securely. In this sense, attackers are at an advantage. Essentially, attackers have most of the advantages that defenders do, but also have some particular advantages of their own. So yah - just that it's easier to do damage than to prevent damage.
  • by matman ( 71405 ) on Monday July 23, 2001 @01:51PM (#66785)
    defense always has an advantage over offense, in terms of time and effort

    I would argue that this is not the case in the computer world.

    o A defender has many bases to protect; often with few resources.

    o The human resources needed to build an effective security team are incredibly expensive.

    o If an attacker compromises even one 'base', hiding tracks and stepping to other resources is often easy.

    o Even detecting a compromise can be extremely difficult, let alone determining what damage has been done, etc.

    o Attackers may attack at leisure and need only find ONE of many possible vulnerabilities.

    o Attackers can also attack with very little cost or consequence by bouncing through different proxies.

    o Defenders must obey the law, while attackers may not.

    o In the real world, defenders can launch offensive measures as a defense; you can't really do this in the computer world.

    o Attackers often find out about vulnerabilities in the defender's system before the defender knows about them.

    I don't think that defenders have any advantage; attackers almost certainly have the upper hand. In the real world, the following factors make defense positions stronger in conventional warfare:

    o Defense can see the attack coming and anticipate what the attacker will do

    o Attacker's goals are often obvious

    o Attacker is out in the open, while defense has had time to build heavy physical defenses and stockpile resources

    o Attacker is unlikely to find a gaping hole in the defender's defensive measures

    o Attack is resource consuming
  • Nonsense, it just takes one step out of the process. Have you read eEye's Analysis of Code Red [eeye.com]? He didn't have the source, but he accurately predicted what it did and what would happen.

    And look, IIS is closed sores, and they still managed to get into a bundle of crap.

    Linus said: Given enough eyeballs, any bug is shallow.

    Hiding the source just cuts down the number of eyeballs.

  • PGP won't provide you security either if you have a crummy password.

    From a signature on Bugtraq: Security isn't a program, it's a process.

    But even open source software can't totally save you.

    OpenSSH didn't suffer from the same short password problem commercial SSH Secure Shell did. They have different licenses yes, but those aren't the only differences. :-)

  • PGP won't provide you security either if you have a crummy password

    As a matter of fact, providing security despite a crummy password is exactly what it is designed to do, and it will. The whole point of PGP encryption is that it obviates the need for password authentication. The only password (in effect) is your private key. This private key should be kept safe at all times, but if it does not fall into the hands of your enemy, they will not be able to read your messages.

    The only place a password comes into play is the protection of your private key, but in all honesty you're already screwed if your private key has fallen into enemy hands. A short password is not going to stop any determined party from cracking your private key once they have access to it. For the better part of a decade, I've not used a password for my private key, because it's stored on removable media and only ever used on a trusted host. This is one misconception I'd like to set straight. PGP provides more security than could ever be provided for by a system that relies on a memorisable password.

  • If it takes you a few seconds to obscure something it seems like a reasonable amount of protection for effort. Likewise if you can plant a few fake trails for minimal effort.
  • In the Caesar cipher, every message is encoded simply by changing each letter of the original cleartext into the third letter after it in the alphabet. So, an A becomes a D, B becomes an E and so on. The word "security" becomes "vhfxulwb." Now, as soon as you know the algorithm you can reverse it, right? Just replace every letter with the letter of the alphabet that precedes it by 3. This is clearly a really weak system once you know the algorithm. The security of the system was only in the secrecy, or obscurity, of the algorithm.

    You'd better watch out... the FBI might get you for violations of the DMCA there...

    Now back on-topic... obviously there's no excuse for obscurity as the sole method of security, at least not for any sysadmin worthy of the title... but I would argue that security-by-obscurity is kind of a waste of time. Ideally, I'd want all my webservers, etc. to be just as impregnable at port 80 as on port 8000, so why bother hiding it?


    Zaphod B
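
    For the curious, the quoted cipher really is just a few lines of Python, which rather underlines how little security is left once the "obscure" algorithm is known:

      def caesar(text, shift=3):
          out = []
          for ch in text.lower():
              if ch.isalpha():
                  # rotate within a-z, wrapping around the end of the alphabet
                  out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
              else:
                  out.append(ch)
          return "".join(out)

      print(caesar("security"))       # -> vhfxulwb
      print(caesar("vhfxulwb", -3))   # decrypting is the same loop with the shift reversed
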
  • by ahde ( 95143 ) on Monday July 23, 2001 @03:42PM (#66794) Homepage
    The reason security through obscurity doesn't work is because it is useless against an internal attack -- this includes, to a degree, social engineering. Changing the port number doesn't do any good at all if the attack is launched by a former employee.

    The reason security through obscurity doesn't work is it assumes a lesser effort provides a deterrent after a greater effort has already been spent. If the attacker can beat Kerberos authentication (or even a password challenge) the port scan becomes a trivial effort.

    The one advantage of the article's example is not security related at all. Filtering script kiddies saves a potential bandwidth hit by quietly dropping port 80 traffic.

  • But there's no point in using biometrics! I mean, for a local computer, when you can be sure that this information (i.e., fingerprint or whatever) really is coming from the natty little USB peripheral attached to it, it's not a particularly bad idea. Easier than remembering passwords...

    OTOH there is no point in the world in using this information across the net, because it becomes just like any other hard-to-guess authentication token and can be sniffed, copied, and sent round the place... plus, once somebody's cracked it, you can't change your fingerprint/retina/DNA... which means that any network using these measurements as your identification is either a) based on a quite amazing depth of trust/independent arbitration/separate cryptographic systems or b) flawed.

    Not that I spend all day thinking about these things...

  • by cburley ( 105664 ) on Monday July 23, 2001 @01:07PM (#66797) Homepage Journal
    Care to provide proof of that statement?

    (Not the part about the comparative ease of exploiting security holes in source code vs. binary code -- the part about OS advocates somehow "always" forgetting that fact.)

    In my experience, advocates of open source are highly aware that it's easier to exploit security holes in source code.

    After all, to do that, one must find them first, correct?

    And unless most or all of the people looking for them also intend to exploit them in an offensive way, the result will be that said holes will be fixed in a defensive mode vs. offensive attack faster than for proprietary, binary code.

    Remember, defense always has an advantage over offense, in terms of time and effort -- the "battle" is on the defender's own turf. Offense can make up for this inherent disadvantage to some extent by, for instance, having the element of surprise.

    For most such offensive counter-measures, having the source code is not an advantage, in fact it's a disadvantage, if the defense also has the source code.

    That's because the defense can deploy fixes to holes faster than the offense can attack them, so, assuming they both find out about them at the same time, the defense wins. (This is generalizing, by the way. Specifics always vary, of course.)

    Now, ask yourself this question: when it comes to binary-only code, who is likely to know more about the internal operations of the code -- the defense, in this case, law-abiding users, or the offense, willing to obtain THE SOURCE CODE through illegal channels??

    That's why Open Source has such an advantage here -- the defense, collectively, is encouraged to share, generously, insights about the software, while the offense must remain fairly isolated (a general rule about law-abiders versus law-breakers; while law-breakers sometimes pool their resources, they have an uphill battle with regard to such activities compared to law-abiders, who can do their pooling in the open and/or in hiding).

    When it comes to proprietary software, law-abiders are increasingly discouraged from sharing insights, info, breakages, fixes, etc., while the opportunities for pooling resources on the law-breakers' side remains pretty much the same.

    But there's a special prize law-abiders are denied that law-breakers have the opportunity to exploit should they gain access to it, when it comes to proprietary software: the source code.

    Once they have that -- and they will, if the software "matters" at all -- the law-abiders will be on the losing end of things.

    "If source code is outlawed, only outlaws will have source code."

  • >When does something stop being 'security through obscurity'? Depending on how you look at it, all forms of security, (or at least most of the ones employed over the internet) are based on picking access tokens that are really hard to guess.

    Thanks for qualifying this as security on the Internet. Note that there are a whole range of other forms of security. For instance a guard holding an M16 is a particularly effective form of security that has no secret component at all. An entirely different method is the "security camera". And a third is Mutually Assured Destruction.

    None of these three methods of security are prevalent on the Internet. Are there any others? Could any of these be used in place of traditional usernames and passwords?

  • by StevenMaurer ( 115071 ) on Monday July 23, 2001 @02:33PM (#66805) Homepage

    In this [slashdot.org] slashdot article by a slashdot reader.

    The reader, being a Microsoft employee, received a mostly negative response, whereas responses to this article are mostly positive.

    Funny how much acceptance of the message is affected by who the messenger happens to be.

  • When you say "Security through obscurity is bad," what you really mean is "Security implemented solely through obscurity is bad."

    Er, thanks for clearing that up, guys.


    Dlugar
  • by mauddib~ ( 126018 ) on Monday July 23, 2001 @03:09PM (#66807) Homepage
    Welcome to obscur0S kernel version 1.0.0

    login: root
    password:

    > ls
    Tue Jul 24 02:06:22 CEST 2001
    > rm -n
    . ..
    > rm /bin
    rm: /bin: No such file or directory
    > rm /
    foo bar baz foobar
    > exit
    exit: Too few arguments
    > quit
    quit: Command not found
    > ^D
    > ^C
    > ^]
    <telnet> quit
  • Not at all.

    For a classic example, consider the Enigma. Security through obscurity, in that no-one knew how the rotors were wired, or what the codes were. Had a U-boat commander not failed to destroy his codebook, the U-boats would have owned the Atlantic for the whole war, Britain would have been screwed, and the US would never have bothered joining in a war where they'd've lost most of their troop ships on the way across the Atlantic. Even with that, and with super-geniuses like Turing on the case, if the Germans had changed the codes more frequently or changed the rotor wiring once a year, we'd still have been screwed, simply through the time it took to reverse-engineer the wiring.

    Security _only_ through obscurity sucks when it isn't changed regularly enough. Security through obscurity is essentially communicating with a shared secret, the secret being the obscureness factor. Any codebreaker would tell you that the more signals you get with the same key, the easier it is to break those signals. So if you keep the obscurity and change the obscurity at regular intervals, no-one has enough time to reverse-engineer the old one before the new one is in place. This admittedly isn't great for stuff that's supposed to be long-term secret (eg. government stuff or bank details) but it's fine for something where the data rapidly becomes out of date (breaking news reports, for instance).

    Of course, if it's sending something like a credit card number then you want _absolute_ security, so that someone 2 years later can't automatically break into your emails. And then I will agree with you, obscurity sucks the big one.

    Grab.
  • Running a service with no authentication on a random port isn't great security, but in principle, it's the same kind of security as running on a well known port and requiring a unique access identifier and passcode. It's just harder to guess, but still fundamentally the same.
    It is not fundamentally the same, because in the second case, guessing is intractable.

    Let's assume for the moment in your above example that the only way into the system is through the front door of the one server in question. Then in the first example, an attacker just needs to scan a few thousand ports at most, a pretty quick matter. In the second case, if the username/passcode is chosen well (randomly), and hitting the server is the only way an attacker can check the validity of the username/passcode, then he must make (2^N)/2 tries on average, where N is the number of bits in the username/passcode. Make N even a little bit big, and throttle how fast attempts can be made, and you can make the expected time to break in come sometime after our sun goes red giant. That is fundamentally different.
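
    Rough numbers behind that claim, in Python, with made-up parameters (token size and throttle rate are assumptions for illustration):

      N = 64                    # bits in the username/passcode (assumption)
      rate = 10                 # throttled guesses per second (assumption)

      expected_tries = 2**N / 2
      years = expected_tries / rate / (3600 * 24 * 365)
      print(f"random token: ~{years:.1e} years on average")    # ~2.9e+10 years

      port_seconds = 65536 / 2 / rate
      print(f"hidden port:  ~{port_seconds / 60:.0f} minutes") # ~55 minutes

    The sun has roughly five billion years left, so the red-giant quip checks out; the hidden port, by contrast, falls in under an hour at the same throttled rate.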

  • Remember, defense always has an advantage over offense, in terms of time and effort -- the "battle" is on the defender's own turf. Offense can make up for this inherent disadvantage to some extent by, for instance, having the element of surprise.

    Actually the entire field of guerrilla warfare is predicated upon the complete opposite of your statement. Offense has the advantage over defense because they can choose the terms, time and location of engagement.

    Or to quote Warren Buffett instead of Chairman Mao: investing is like a match of cricket, rather than baseball; there is no penalty for not swinging the bat.

  • by shalunov ( 149369 ) on Monday July 23, 2001 @12:54PM (#66817) Homepage
    But it doesn't hurt to obscure things sometimes just to make it tougher for your attacker.
    Or, for that matter, for your co-workers or whoever inherits your systems. Obscurity can improve security, but at a dreadful cost: maintainability.
  • Ol bofphevat fbzrguvat nobhg bhe raivebazrag, jr sbepr na nggnpxre gb cbffvoyl tb guebhtu zber rssbeg gb yrnea gung vasbezngvba orsber ur pna rkrphgr uvf nggnpx.

    Rknpgyl. Frphevgl vf nobhg nccebcevngr cebgrpgvba. Nf na rknzcyr, lbh jvyy unir unq gb obgu qrgrezvar gur xrl sbe gur fvzcyr Pnrfne Fuvsg Pvcure V'ir hfrq gb rapbqr guvf nf EBG-13 naq gura npghnyyl tb gb gur gebhoyr bs qrpbqvat vg (juvpu znl be znl abg or rnfl sbe lbh, qrcraqvat hcba gur gbbyf lbh npghnyyl unir ninvynoyr gb lbh).

    Naq V'z jvyyvat gb org gung lbh'er cerggl hahfhny: zbfg crbcyr jvyy unir fxvccrq cnfg guvf gb ernq gur arkg cbfg va Ratyvfu, orpnhfr gurl pna'g or obgurerq... fb hayrff guvf cbfg trgf zbqqrq hc V'ir rssrpgviryl ceriragrq zbfg crbcyr sebz nethvat zl cbvag. DRQ. ;)

  • Ideally, I'd want all my webservers, etc. to be just as impregnable at port 80 as on port 8000, so why bother hiding it?

    This isn't about making your targets impossible to crack. It's about making them harder to crack than the guy next to you.

    The slowest-animal-in-the-herd principle.

    Let's say you and I both run service A when a remote exploit is discovered. Bob the happy script kiddie gets his scanner and starts looking for said service on its default port. On your box he finds it fine, and cracks you. On mine he doesn't see it initially, so he skips me and moves on. Say Bob _really_ wanted me: he would scan all my ports and find that service, but in the meantime I see a bunch of traffic searching my network in odd places.

    The slowest animal in the herd is already down, but in the few seconds I gained I realize some clever escape, and voila, I'm free. Obscurity doesn't make you uncrackable, but it gives you an edge.

  • by Xylantiel ( 177496 ) on Monday July 23, 2001 @03:29PM (#66823)
    This article depresses me because I know that many people will believe his argument and start to do the things he suggests, to the detriment of the IT community and its users.

    It seems to me that most people pass through three stages of understanding:

    1. Security through obscurity is bad because everybody says it is and it sounds like it should be.
    2. Using obscurity to supplement security is ok as long as I have real security too. It just makes me less vulnerable to the kiddies.
    3. Doh! I was wrong, obscurity actually is bad for many reasons (discussed below).
    It's sad to see that an expert well respected by the community has only reached level 2 in understanding this concept.

    Now, why is using obscurity bad? I'll give several reasons.

    • Obscure security generally goes untested. This means that "real security" which was there wasn't actually as real as you thought. You end up getting bitten when it's important rather than when it's not!
    • Obscure security breeds insecurity through laziness: "oh I don't have to update that right away, nobody will find it before tomorrow" which turns into next week, which turns into never getting updated.
    • A secure network is one in which the attacker cannot get in even with full knowledge of the network layout. This is harder to do than using obscuration, but you end up with something that's actually secure!
    • Good security depends on you understanding your setup. Networks are generally very complicated just to serve their necessary function in the local environment. Adding useless complexity by obscuring simply makes the network more difficult to understand and thereby decreases security.
    • Related to the previous point: knowing your network's vulnerabilities and being able to accurately analyze any attempted attacks is essential. Obscurity makes it more difficult for the attacker, but it also makes it more difficult for the home team. If you can't track the attacker's path through your spaghetti of accumulated obscurations you can't secure the network against the next similar attack. (note all successful or partially successful attacks are generally a string of smaller attacks on subsystems, which must be strung together to get all the way in.)
    • All security measures are a tradeoff between usability and security. Obscuration, on the other hand, hurts usability without actually providing any security benefits. Don't think it doesn't matter if you do something in a totally unconventional way: it creates problems for your users because things are non-standard (and may not match documentation!), it hurts your employer in terms of the others that will have to maintain the system, and it hurts the community by complicating issues unnecessarily. Spend your usability trade-offs on real security. I'm sure that will be enough if you're actually secure.
    • Also, most of us aren't as ingenious as we think we are. In many cases, by doing something you think is clever you're more likely to make things worse rather than better through unforeseen consequences. I think the phrase is "keep it simple, stupid"! Unfortunately this is especially true for people in class 2 above, who by definition haven't reached class 3, where they understand this.
    I could probably go on, but I'll stop now. I think these are the most important points. It's really too bad when people without a lot of foresight get in positions of influence.
  • Ideally, I'd want all my webservers, etc. to be just as impregnable at port 80 as on port 8000, so why bother hiding it?

    Because it's very hard to reach that ideal; that's why you hide it. As said in the article, it makes it harder for the script kiddies to find. Now for the non script kiddies...

  • I was half joking, in case you couldn't tell. The problem with your argument is the fact that CGI in itself is inherently secure. The only security holes which open are a direct result of the incompetence of the programmer who writes sloppy middleware code, or sloppy web server CGI handling code. For example, Apache compiled with no modules at all will not be vulnerable to any known CGI based attacks. DDOS attacks OTOH are a threat to any platform/OS combination, and must be either averted or ridden-out, depending on the situation. There is no excuse for buffer overflows ;-) FS/open-source patches help this quite a lot (at least on x86 and ppc machines)

    Well, your fingers weave quick minarets; Speak in secret alphabets;
  • Security through unpopularity. Let's face it: if you run a web server under BeOS, the likelihood of getting hacked, compared to running IIS, is roughly the same as the ratio between BeOS users and Windows users. Cheap-ass security advice: find the most unpopular system you can (put those Commodore 64s to use!), lock it down to the best of your sysadmin abilities, and start praying!

    Well, your fingers weave quick minarets; Speak in secret alphabets;
  • It should be able to identify you AS a cadaver.
    I have always wanted to restrict permission to use su to cadavers.
  • The root password to this server is Adobe378210
    If you log in and are not authorized by Adobe, you will be in violation of DMCA statutes and join Dmitri Sklyarov in jail.

    Have a nice day.

    Login:

    --
    All your .sig are belong to us!

  • by unformed ( 225214 ) on Monday July 23, 2001 @01:13PM (#66833)
    in Applied Cryptography...

    He gave the example of:
    If you have a letter you don't want anybody to read, and you put it into a safe, and hide it in New York, that's not security; that's obscurity. You're hoping that they (your enemies) won't find the safe.

    However, if you give crackers the safe and a diagram of the lock, and they still can't figure out a way to pick it, then that's security.

    However, hiding the safe in New York in addition to securing it doesn't hurt at all. It just takes them longer to find it.
  • Most readers of /. are not so foolish as to subscribe to the fallacy that this article is endeavoring to correct. Most readers here realize the meaning of the phrase 'security through obscurity is bad' and recognize that obscurity doesn't harm your security at all (provided there is a more substantial mechanism than that obscurity), although it really doesn't add much either. It may slow your attacker down a little, and it may make him more visible, possibly, but all in all it's a waste of time except in the cases one of the previous posters mentioned regarding non-proliferation of information through or about your firewall.

    Particularly entertaining in this article was the following:
    At this point, is there any harm to hiding the name of the machine and the port number the server is running on? Really, stop and think about this. Does it hurt your site security at all? No, it really doesn't. Your good access control, in the form of strong authentication, is still present. All we've done is made the server slightly harder to find! See, so long as you understand that the server location and port number can't serve as a method of authentication, you haven't harmed your security in the slightest.
    Are there really people out there who would choose not to obscure something like the name of a server because they thought it would harm their security? You can't be serious...

    Like I said. /. is certainly not the intended audience of that article.

    --CTH

  • by MeowMeow Jones ( 233640 ) on Monday July 23, 2001 @12:51PM (#66836)
    that it's a hell of a lot easier to write a rootkit against source code than it is against a binary.

    Trolls throughout history:

  • by MeowMeow Jones ( 233640 ) on Monday July 23, 2001 @02:08PM (#66837)
    rootkits introduce their own exploits on a compromised system.

    So a big apache exploit comes out, and a half hour later there's a patch (thanks to open source) and you apply it / recompile. Then you look at the apache log files and don't see any unusual activity. So you're safe, right? Wrong. Your system was compromised and a rootkit was installed. It cleaned up all the logs. It added a backdoor to getty. It modified your MD5 checksum verification. It modified your rpm so that it points to the hacker's server, no matter what you say. It modified gcc to include a backdoor into any program that requires authentication, and to insert this code into any gcc recompiles.

    Do I really need to prove that it's easier to change:

    if (checkPassword(password)) {goCrazy();}

    to

    if (checkPassword(password) || !strcmp(password, "k00ldud3")) {goCrazy();}

    than it is to use a disassembler on an executable with no symbols to figure out what the hell is going on and insert a back door? Not only does this require a much higher level of expertise, it also requires significantly more time for the person who can do it.

    Trolls throughout history:

  • Anything that uses obscurity as part of its security scheme is trying to make up for what is essentially a design flaw. If there is nothing else that reading Slashdot has taught me, it is that companies cannot rely on keeping their encryption algorithms secret if they don't want people cracking them.

    Show me a perfect system that cannot be broken into.

    The real secure systems can let people know all that they want about them, regardless of their intentions. It really won't matter in the end, from a security standpoint, because well-designed security frameworks are undefeatable through principle, not through secrecy. To say otherwise would be just like Micro$oft denying that a security hole exists while they drag their feet in releasing a patch for said vulnerability.

    I think you miss the point. There is no such thing as 100% secure. That is a fact and you have to get used to it. A bug might get discovered tomorrow that causes your system to become vulnerable. Or a previously unknown unchecked buffer could be found by crackers before it is found by security personnel. You have to make these assumptions if you want any real security at all.

    By limiting the amount of available information, we can force the attacker to expend more time (and make more entries in our logs) in their attack. This can discourage many script kiddies who do not know enough to design their own exploits, and it can also force a knowledgeable attacker to (hopefully) expend a lot more effort on a (hopefully) unsuccessful attack.

    Would you give all potential attackers a complete list of your computers, all the software they run, and schematics for your internal network? Would you send them your ruleset on your firewall? Of course not! And keeping this information obscure is security through obscurity.

    As the O'Reilly book "Building Internet Firewalls" points out, though, running services on non-standard ports provides little obscurity and really inconveniences your users, but preventing important information from leaking out to the internet is what obscurity is all about.


  • Didn't you read the article? He explained why using another port is a good idea. You still have to secure it like you would port 80, but someone scanning default ports only would miss your server altogether. Someone scanning over lots of ports would find it, but scanning lots of ports makes much more "noise" which you can then pick up.

    I still would not use that as my example of security through obscurity. It provides little obscurity because the information is still available. I agree that it might be helpful in some limited instances, but there are better examples (like MTA trimming, DNS obscurity, etc) which are more generally useful.

    Once one guy found your service, he could tell others. Then, once you saw a portscan, you would have to change ports AND let all your users know... Again, not the best strategy. (A minimal sketch of what a port move actually amounts to follows this comment.)

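    For concreteness, here is roughly all that "moving the service to another port" amounts to: a minimal sketch in C, with 8022 as my own arbitrary choice of port, and everything else about securing the service left exactly as it was.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Minimal TCP listener bound to a non-standard port. The number
     * hides nothing from a full 65535-port scan; it only dodges
     * default-port-only scans -- and a full scan is much noisier. */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8022);  /* arbitrary non-default port */

        if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("socket/bind");
            return 1;
        }
        listen(fd, 5);
        for (;;) {                           /* trivially serve and close */
            int c = accept(fd, NULL, NULL);
            if (c >= 0) {
                write(c, "hello\r\n", 7);
                close(c);
            }
        }
    }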

  • Remember, though, security exists in an inverse relationship with functionality. You can completely secure a computer, but it won't do you much good.

    Obscurity is just a way of shifting the balance to give you more functionality at the same security level, or more often, more security at the same functionality level.


  • Have you ever tested this theory yourself? Or are you just repeating what you've read in a book?

    My firewalls have generally broken active-mode FTP and authoritative DNS lookups. Of course stateful inspection would help somewhat, but my point is that these are disabled because allowing them in with a rule-based setup would allow too much potential for security compromise.

    So yes, I have tested this. I could use netfilter to add usability, but the setup is plenty usable as it is, and I am not sure whether I would get additional security. The only completely secure system is unplugged, in a safe, with no network connection. Even then, someone could break in...


  • The O'Reilly book "Building Internet Firewalls" makes this point. Obscurity as part of a strategy is one thing; when used well, it is never the whole policy.

    Furthermore, running services on non-standard ports provides no real obscurity, but it makes administration substantially harder. In that light, it is in no way good policy.

    Good obscurity tactics involve hiding internal systems from your external DNS servers, trying to avoid letting people see what OS you are using, etc. Forcing people to do guesswork is good, but it should not inconvenience your users (that should be left to the more fundamental security measures).

    Attackers should be forced to make their attack based on limited information. By limiting the information they have, you can limit the effectiveness of the attacks and maximize their effort. But it should not be used as a standalone strategy.


  • by tim_maroney ( 239442 ) on Monday July 23, 2001 @12:55PM (#66843) Homepage
    The question that ran through my mind when reading this piece was whether it had anything to do with security by obscurity. There's nothing "obscured" about an unpublished link or a non-standard HTTP port. They are completely understandable, just a little harder to find if you haven't been told where to look.

    Security through obscurity is usually discussed in terms of hiding encryption algorithms and security protocols, which is a totally separate issue. Read this article [wideopen.com] by Simson Garfinkel on the subject, for instance.

    So to me the article seemed like a giant non sequitur.

    Tim

  • This is modded up as "funny," but the author actually has a good point. Why is it that we haven't seen a mass wave of Linux viruses? Yep, that's right - Microsoft desktop OSes are a much larger target.

    Since you can't be 100% secure, security is all about being a very small target. Running a BeOS or Commodore 64 web server would definitely do that for you.
  • by Bonker ( 243350 ) on Monday July 23, 2001 @01:59PM (#66845)
    'cause the company paid for it. What there was of it was fairly neat... how best to lock down Apache, IIS, Netscape server, etc.

    The bulk of the time in class, however, was spent discussing how to arrange things to make everything else as obscure as possible.

    We talked about things like firewall masking, where the firewall software automatically stripped the software version identification strings out of ping and logon responses. Stuff like that (see the sketch at the end of this comment).

    The ethos the Verisign teacher had was that there was only so much you could do to protect a 'celebrity' system, even if you went so far as to implement a client-certificate setup. The best way to secure a system was to make it difficult to find as well as to crack.
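
    A minimal sketch of the kind of version-string masking described above, assuming the response header is already sitting in a writable buffer; the function name and the generic replacement string are my own illustration, not any particular firewall product's behavior:

    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    /* Rewrite a "Server:" header to a generic value so a scanner
     * cannot read the software name and version out of the banner. */
    void mask_server_banner(char *line, size_t size)
    {
        const char *generic = "Server: webserver\r\n";
        if (strncasecmp(line, "Server:", 7) == 0 && strlen(generic) < size)
            strcpy(line, generic);   /* hides e.g. "Apache/1.3.20 (Unix)" */
    }

    int main(void)
    {
        char banner[64] = "Server: Apache/1.3.20 (Unix)\r\n";
        mask_server_banner(banner, sizeof banner);
        printf("%s", banner);        /* prints the generic banner */
        return 0;
    }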
  • There are two kinds of "security through obscurity", and the distinction confuses some people. (Insert funny joke about clueless managers and Microsoft-heads here)

    First there is the bad kind: you're a software vendor and you hope that nobody will notice that your "secure" software uses ROT-13 instead of RSA. You are protected by the DMCA anyway (remember: crack ROT-13, go to jail). (A sketch below shows just how little ROT-13 actually hides.)

    Then there is the second, good kind: if I'm an admin, there is no reason I shouldn't strip outgoing email of any headers that may reveal the structure of my internal network (though some people ARE arguing that you shouldn't do that).

    I don't really like this "security through obscurity" thingie. Let's draw the distinction between releasing full specs and not telling anyone more than they need to know. That is the way to build a really secure network.
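
    For the record, here is the entirety of ROT-13 in C (my own few lines, not from the article) - it is its own inverse, with no key anywhere, which is why a vendor shipping it as "encryption" has nothing to rely on but the hope that nobody looks:

    #include <stdio.h>
    #include <ctype.h>

    /* ROT-13: run it twice and you get the input back. There is no
     * key, so hiding the algorithm is the entire "security" scheme. */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF) {
            if (isalpha(c)) {
                int base = isupper(c) ? 'A' : 'a';
                c = (c - base + 13) % 26 + base;
            }
            putchar(c);
        }
        return 0;
    }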
  • perhaps if they beefed up the encryption in passwd files, shadow passwords would not be such an issue, i mean, that is what encryption is for is it not?

    No, the security of the password file has almost nothing to do with the security of the password algorithm. I am not aware of anyone breaking the DES encryption algorithm to break passwords.

    The vulnerability comes from the ability to obtain passwords by brute-force examination of the password space - a dictionary attack. Using AES would not make a difference, because the issue is the number of possible passwords. And adding a random letter or number in the middle of a password like 'pass3word' does not change the search difficulty as much as naive sysops think (rough numbers in the sketch below).

    In fact, the stupid constraints (case sensitivity, must have a symbol) used to patch up weak password infrastructures should really count as security through obfuscation at this point. The added security is measurable but woefully insufficient, and really inconvenient for the user - why can't the password algorithm check for both positions of CapsLock?

    There are much better security measures than shadow passwords; however, at this stage, relying on the encryption alone is nowhere near sufficient.
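
    To put rough numbers on the search-space point above (my own back-of-the-envelope figures, not from the article; build with cc file.c -lm):

    #include <stdio.h>
    #include <math.h>

    /* Very rough password search-space sizes. The point: a dictionary
     * word with one digit stuffed into it is still a tiny space next
     * to genuinely random characters. */
    int main(void)
    {
        double dictionary = 50000.0;            /* typical cracking wordlist */
        double word_digit = 50000.0 * 10 * 10;  /* word + 1 digit, any slot  */
        double lower8     = pow(26.0, 8);       /* 8 random lowercase chars  */
        double full8      = pow(95.0, 8);       /* 8 random printable ASCII  */

        printf("dictionary word       : %18.0f\n", dictionary);
        printf("word + embedded digit : %18.0f\n", word_digit);
        printf("random a-z, 8 chars   : %18.0f\n", lower8);
        printf("random printable, 8   : %18.0f\n", full8);
        return 0;
    }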

  • by Zeinfeld ( 263942 ) on Monday July 23, 2001 @03:46PM (#66854) Homepage
    Back about thirty years ago, most computer security depended on physical security alone; once you were in the door of the building, you could pretty much do anything you wanted with just a little determination. Password systems, to the extent they existed, were pretty rudimentary, and security was not considered as part of the O/S design - or to the extent that it was, the 'security' mainly consisted of feelgood measures for marketing purposes.

    Security Through Obscurity was one of the slogans thrown back at the complacent computer vendors and sysadmins. As with any slogan, though, the medium does not allow for a sophisticated or subtle message.

    About ten years ago the UNIX community started to wake up to security issues. Previously the UNIX approach had been pretty rudimentary; most UNIX machines were departmental or personal workstations, and security was a nice-to-have, but relatively few people depended on it actually working.

    Then the Internet grew beyond academia and suddenly security started to matter an awful lot.

    At the same time, traditional UNIX security nostrums started to be re-examined, like the lack of shadow passwords. At the time, the idea of protecting a password file with a password was considered by most UNIX advocates to represent the evil of security through obscurity. The UNIX one-way encrypted password file was held up as the exemplar; to want to protect it was to fall foul of the sin of 'security through obscurity', and obviously anyone like myself who advocated the practice had to be a fool. Today, anyone who doesn't use shadow passwords is considered foolish.

    What it boils down to is that slogans have a tendency to turn into dogma. Then someone comes along and points out that the real world is not as simple as the slogan.

  • If you're going to prevent public review, you're taking the responsibility to perform internal review such that you are certain you have reached a level of validation equivalent to that provided by public review.

    Which is fine, but it's also why safety-critical or high-availability software costs 10-100x what consumer software does. One of its features is that there's no in-the-wild beta testing, and no tolerance for released bugs. You've moved that validation into the development itself. (Everyone stop looking so amazed; software development by people who want to make money at it has always been about how many bugs you can ignore and still get market share. Microsoft's monopoly means it can profit from selling essentially non-running programs. Compare also the low-end, pretty-package, sounded-like-a-cool-product shitware you have ten of under your desk and don't see any value in returning because you're tired of standing on principle in the 30-minute line at Fry's.)

    Shrouding the details of a network configuration implies that either you have adequately tested the network against boundary conditions, load, and pathological states, or you're willing to suffer failures and have your ass handed to you by your users who wish you'd just let them look at it because they know they would have seen this coming if they had.

    --Blair
  • Man, when I saw that topic, I did a double take. It is so far from convention, it's almost flamebait! Of course it's not, but rather a thoughtful analysis of when obscurity is a tool, and in what contexts obscurity is not only permitted but the preferred option. It made me rethink my preconceptions a little.

    The key (pun intended) is to remember that no single tool will provide you security. Obscurity is the most trivial example. PGP won't provide you security either if you have a crummy password. So, you have to remember all the factors that are inhibiting security.

    We have seen examples in the past where closed source software has proven itself insecure. I remember the RNG weakness in Netscape. But even open source software can't totally save you. Open source is no more a cure-all than obscurity is absolutely forbidden. Designing good algorithms, educating your users not to circumvent "annoying" security measures, and the like remain integral to the problem of security.

  • I have seen the graphic. It was originally produced by a college student, who I do not think had any affiliation with the EFF (can't be sure). I will drop my good friends at the EFF a line and ask, though! Never thought to ask them before; guess I am not thinking!
  • Too bad you're relying on one guy's coding skills to keep it all secure.

    I dunno, no one has managed to claim the cash prize yet. Also, you aren't just relying on one man's coding skills, the programs are open source, and therefore many people can review the code for security.

    So you'd best secure it, buddy. This one is so astoundingly dense I don't know where to begin.

    Wrong again. The fact that there are millions of high bandwidth connections on much less secure systems (like all of those home broadband users who don't know that an Internet exists outside of email and the web) make it much less of a problem to me than if I was securing critical data. You are less likely to be carjacked if you park in a large parking lot.

  • At the risk of being a mere "Me too!" post I have to agree with the previous poster. Nothing very earthshattering in this story.

    Basically he's saying that it's o.k. to hide your valuables as long as you lock the door too. True enough, but I'm not sure it required x pages of commentary to explain.

    Our version of "Security through obscurity" involves keeping our internal network separate from our web server. People can hack our webserver all they want to -- it's not even in the same physical building or on the same network (physically or logically) as our internal LAN. Maybe they'll deface our website, but we can restore that quickly enough and it's mostly brochureware anyhow so no big whoop.

    We're such a "low interest" target that most real hackers wouldn't waste their time with us. Nobody's going to boast about hacking our site. The script kiddies can't do any real damage -- as above the website is easily restored and of no real consequence to our daily operations.

    Naturally we have the usual complement of firewalls, passwords, encryption and IDS to keep our important stuff safe.

    Security through obscurity is fine; as long as you're not relying only upon the obscurity to provide your security. I just summed up his entire article in one sentence.

    -Coach-

  • Security through Obscurity obviously works. Back in the day, people thought that it was a good idea to give kids sexual education, and tell them about things like STDs and contraception. That just inspired them to go out and have sex. Luckily, saner heads have prevailed, and now that we aren't teaching kids anything about sex, they aren't going to do it. If you don't put the ideas in their heads, they won't have sex! If your security is based on arresting anyone that knows anything about cryptography, then it's unbreakable. Q.E.D. Like, Woe
  • by kidblast ( 413235 ) on Monday July 23, 2001 @01:39PM (#66874)
    This was written in 1853 by Charles Tomlinson, and is only an excerpt of the treatise, but it shows that people recognized that 'security' through obscurity was not really security at all, well before the digital age.

    A commercial, and in some respects a social, doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and already know much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lockpicking long before locksmiths discussed it among themselves, as they have lately done. If a lock -- let it have been made in whatever country, or by whatever maker -- is not so inviolable as it has hitherto been deemed to be, surely it is in the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged, that an acquaintance with real facts will, in the end, be better for all parties.

    Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear -- milkmen knew all about it before, whether they practiced it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased.

    ...The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom. We employ these stray expressions concerning adulteration, debasement, roguery, and so forth, simply as a mode of illustrating a principle -- the advantage of publicity. In respect to lock-making, there can scarcely be such a thing as dishonesty of intention: the inventor produces a lock which he honestly thinks will possess such and such qualities; and he declares his belief to the world. If others differ from him in opinion concerning those qualities, it is open to them to say so; and the discussion, truthfully conducted, must lead to public advantage: the discussion stimulates curiosity, and curiosity stimulates invention. Nothing but a partial and limited view of the question could lead to the opinion that harm can result: if there be harm, it will be much more than counterbalanced by good.
  • Biometric information is not secure unless you trust the client sending you the biometric information. The packet representing your fingerprint, or retina scan or whatever, is vulnerable to man-in-the-middle attacks.

    Biometrics can help - think of them as a very long password that can't be forgotten. But they can be stolen.
  • One of the things I've noticed in general terms is: the more common a security device is, the more people know how to defeat it. This is true of any form of security, both cyber and physical.

    There are many examples of this in the real world. Several methods have been developed to open Masterlock-brand padlocks, not because they're inferior but because they're popular. The same goes for standard deadbolts; they can be opened in seconds with a vibration pick. And in the electronic world, it's the most popular mail servers and web servers that get hacked. The point is that any thief is going to research the most common methods of protection and the ways around them.

    This validates the idea of security through obscurity in my mind. Once you've got some good, fundamental, popular security set up, throw something different into the mix: something no one is going to expect, something that is not necessarily as effective as it is unique.

  • Most security is like when I was younger and owned a bicycle. I would lock it up so the casual ding-dong wouldn't walk off with it. Most people didn't walk around with bolt cutters to rip off my bike. Why? Most people are honest. Now, there are some who, IF the opportunity arose, would swipe my bike (i.e. if I left it unlocked).

    How does this relate? It's simple. Most ding-dongs (aka script kiddies) are just snorking someone else's keys and seeing if the key fits the lock. If it does, they've got a bike (or data). Now, the ones that scare me are the ones that take a piece of metal, twist it around, pick the lock, then lock it back up, walk off, and wait for a better time... The lock is like the obscurity: it adds one more thing MOST people don't bother with (i.e. bringing along a bolt cutter). It's the ones that do bother that you should worry about.

  • This depends on the kind of security you are talking about. People that decry security through obscurity are usually talking about trusting an obscure algorithm or method. They recommend that you instead act as though everyone knows what algorithms and methods you use, and that the only thing you trust to be obscure is your password (or your public key, or whatever).

    The whole point of this is that most real-world systems need to be distributed to a bunch of people, not all of whom can be trusted. Especially if you are developing a security system that's intended to be released to the public at large (a commercial software product, or an open source system), you should expect that eventually the script kiddies will get their hands on the algorithm that has been distributed. And once people figure out the algorithm, through social engineering or whatever, you're screwed, and everyone's security disappears. At that point, getting the security back means getting a whole new algorithm together, writing new code, and more or less redoing everything from scratch.

    If you have a security system whereby the security depends only on the password or the key, you don't have to worry about social engineering so much. If someone gets their hands on the password, it's not such a big deal to come up with a new one. In fact, you can distribute your security system far and wide, and everyone can use different passwords or keys with almost zero extra effort. If anyone's security is breached, they can come up with a new password or key just as easily.

    That's the big advantage of ditching security through obscurity: 1) if you do have a security breach, it's easy to fix, and 2) lots and lots of people can use the same algorithm (which was pro'lly hard to come up with) by doing a minimal amount of work (coming up with a password or key), and they won't all be able to breach each other's security. (A toy sketch follows below.)
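
    A toy sketch of that shape of design: the algorithm below is completely public (and deliberately weak - a real system would use a vetted cipher), all of the secrecy lives in the key argument, and rotating a burned key costs nothing:

    #include <stdio.h>
    #include <string.h>

    /* Toy XOR "cipher": the algorithm is public and not worth hiding;
     * the key is the only secret. NOT real cryptography. */
    static void xor_crypt(unsigned char *buf, size_t len, const char *key)
    {
        size_t klen = strlen(key);
        for (size_t i = 0; i < len; i++)
            buf[i] ^= (unsigned char)key[i % klen];
    }

    int main(void)
    {
        unsigned char msg[] = "attack at dawn";
        size_t len = strlen((char *)msg);
        const char *key = "k00ldud3";   /* replaceable without touching code */

        xor_crypt(msg, len, key);       /* "encrypt" */
        xor_crypt(msg, len, key);       /* the same call decrypts */
        printf("%s\n", msg);            /* prints: attack at dawn */
        return 0;
    }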

  • by DNS-and-BIND ( 461968 ) on Monday July 23, 2001 @12:56PM (#66886) Homepage
    Hm, I had always thought that "security through obscurity" meant "hiding among the herd". You know, the concept that "we won't be hacked, we're just some homey university tucked away on some dusty corner of the internet, and hence, we don't need to do anything about security". This attitude was once rather widespread, but I don't think it's what the article is talking about. Rather, the article talks about intentionally obfuscating services on your network, which in my experience is far harder for the sysadmins and users to deal with than it is for the system crackers.

  • by asdfdf ( 463985 ) on Monday July 23, 2001 @12:56PM (#66887)
    I think the trouble is not having security through obscurity, but gaining a false sense of security through obscurity.

    It is the thinking that your server is secure and untouchable that is the problem.
