
23-Year-Old X11 Server Security Vulnerability Discovered

An anonymous reader writes "The recent report of X11/X.Org security being in bad shape rings even more true today. The X.Org Foundation announced that they've found an X11 security issue dating back to 1991. The issue is a possible stack buffer overflow that could lead to privilege escalation to root, and it affects all versions of the X Server back to X11R5. After sitting in the code base for 23 years, the vulnerability was finally uncovered by the automated cppcheck static analysis utility." There's a scanf used when loading BDF fonts that can overflow via a carefully crafted font. Watch out for those obsolete early-90s bitmap fonts.
  • Many eyes... (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @11:17AM (#45897705)

    ...looking elsewhere.

    • Re:Many eyes... (Score:3, Insightful)

      by i kan reed (749298) on Wednesday January 08, 2014 @11:18AM (#45897725) Homepage Journal

      The real trick of "with enough eyes, all bugs are shallow" is that the number of eyes needed grows exponentially with the number of lines of code, and open source projects don't actually hit it.

      • Re:Many eyes... (Score:5, Insightful)

        by grub (11606) <slashdot@grub.net> on Wednesday January 08, 2014 @11:28AM (#45897829) Homepage Journal
        "Many eyes" is bogus, "the right eyes" is more appropriate.
        • by i kan reed (749298) on Wednesday January 08, 2014 @11:48AM (#45898043) Homepage Journal

          Also bologna. There's no such thing as bug-spotting superpowers. The most reliable way to detect bugs is testing, of multiple stripes: unit testing, regression testing, all the testing (and I'm a developer, so I detest QA).

          • Re:Many eyes... (Score:5, Informative)

            by garyebickford (222422) <gar37bicNO@SPAMgmail.com> on Wednesday January 08, 2014 @12:51PM (#45898703)

            Actually, it was shown back in the late 1970s that it is essentially impossible for 'black box' testing to discover more than about 30% of the bugs in a sufficiently large code base. This follows from the NP-complete problem of following all possible variations of the branches using all possible combinations of input, both valid and invalid. It's fairly easy to write a one-page program that cannot effectively be completely tested. It was also shown that, given good programming practice, roughly 70% of the bugs are built into the design (before a line of code has been written). Then, finally, a significant percentage of bugs are of the sort where it's a judgement call whether it's a bug or a feature.
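
            (A minimal illustrative sketch of the path explosion; the function and its branches are made up:)

            #include <stdio.h>

            /* Four independent branches already yield 2^4 = 16 distinct paths,
               and exhaustive black-box testing over two 32-bit inputs is 2^64
               cases. A page of such code is effectively untestable that way. */
            int classify(unsigned a, unsigned b)
            {
                int r = 0;
                if (a & 1u)                 r |= 1;  /* parity of a */
                if (b & 1u)                 r |= 2;  /* parity of b */
                if (a > b)                  r |= 4;  /* ordering */
                if ((a ^ b) & 0x80000000u)  r |= 8;  /* top-bit mismatch */
                return r;
            }

            int main(void)
            {
                printf("%d\n", classify(3, 5));
                return 0;
            }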

            Source: I used to run a Software Quality Assurance Workshop for my then-company, and did the research. A few programming practices have changed, and the repertoire of automated tools has greatly increased in both quantity and sophistication, but average program size and the list of asynchronous externalities has ballooned by two or three orders of magnitude, so there we are.

          • by NotQuiteReal (608241) on Wednesday January 08, 2014 @12:57PM (#45898773) Journal
            "Good" test data is often an issue... in this case, I am sure your test suite includes some "carefully crafted fonts", right?
        • by Warbothong (905464) on Wednesday January 08, 2014 @12:38PM (#45898541) Homepage

          "Many eyes" is bogus, "the right eyes" is more appropriate.

          In this case "the right eyes" are robotic.

      • by hairyfeet (841228) <bassbeast1968@gm ... com minus distro> on Wednesday January 08, 2014 @12:31PM (#45898485) Journal

        In reality "many eyes" is a myth because for "many eyes" to work you'd need 1.- Eyes willing to look at the ENTIRE code, since no code is used in a vacuum, 2.- those eyes have to have the years of experience in low level coding so as to be able to even spot the bug, and 3.- Those eyes have to be willing to do keep checking because new releases keep coming and with them new bugs.

        Anyone can do basic math and see how "many eyes" simply cannot work, and I'd bet my last buck that if you looked at the logs you'd find the majority of code? Not being looked at by anybody but the guys actually writing the thing. Being FOSS really only gives you ONE major advantage, and that is that nobody can just pull the plug; if you need an old version, you can DIY or pay somebody to do it for you. But security wise? Nope, sorry, because OSes are some of the most complex software on the planet, and even Torvalds can't tell you with 100% certainty what goes on and what is called when you launch a piece of software; it's just too complex, with too many interactions.

        • by garyebickford (222422) <gar37bicNO@SPAMgmail.com> on Wednesday January 08, 2014 @12:58PM (#45898789)

          Having never been a significant C coder (I skipped that phase), I'll argue that by my observation the vast majority of problems would be eliminated if C programs were incapable of buffer overflows. This is less simple than it seems. It would require not only some language features, but library changes, and would slow things down (imperceptibly?).

          There is no reason an application developer should ever encounter a segfault in a modern language. Many of today's languages are essentially immune to this problem, except when there is an error ... wait for it ... in the underlying C implementation of the compiler or runtime environment! :D

          • by Obfuscant (592200) on Wednesday January 08, 2014 @01:24PM (#45899085)

            It would require not only some language features, but library changes, and would slow things down (imperceptibly?).

            Having dealt with some fascinating FORTRAN code that ran perfectly under one compiler and failed with horrible segfaults under another, I can approve of languages that include bounds checking at execution time -- as long as it can be disabled when desired.

            The specific example is a bit of FORTRAN that was processing input parameters from a file, parsing lines of text for colon delimited parameter/value pairs. Ran fine under one compiler (gfortran, as I recall), but died every time when compiled with PGI. The programmer had ignored the case of COMMENTS in the text where no colon was found, and was trying to copy the parameter name from "start of string" through "colon-1" to another variable. No colon, the index is 0, so copying 1 through -1 is, well, a problem. One FORTRAN library caught the invalid parameter and silently ignored the operation, the other passed it on to memcpy directly.
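
            (A hedged C analogue of that bug, purely illustrative; parse_line and the guard are hypothetical:)

            #include <stdio.h>
            #include <string.h>

            /* Parse "name:value"; comment lines contain no colon at all. */
            void parse_line(const char *line)
            {
                char name[64];
                const char *colon = strchr(line, ':');

                /* The missing guard, the analogue of not checking index():
                   if (colon == NULL) return; */
                size_t len = (size_t)(colon - line);  /* garbage if colon is NULL */
                if (len < sizeof name) {
                    memcpy(name, line, len);
                    name[len] = '\0';
                    printf("param: %s\n", name);
                }
            }

            int main(void)
            {
                parse_line("width: 80");      /* fine */
                /* parse_line("# a comment") would hit the unguarded path */
                return 0;
            }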

            Now, in this case, there was no real speed penalty in checking the input parameters by the library, and it really was the programmer's fault for not checking the return from index. But as a general library function, it could be a serious speed penalty especially when a good programmer already includes a test, and certainly if this is in code that is already a bottleneck.

            There is no reason an application developer should ever encounter a segfault in a modern language.

            That I disagree with. A segfault is just another error message showing that something is not being done properly. It would have saved me an ENORMOUS amount of time had the original programmer been forced to write proper code before distributing it to the world, instead of me having to debug why it was failing. I mean, it was my mistake for assuming that someone could parse text properly, but that assumption was made because the code worked even when it was obvious it shouldn't. I mean, it doesn't fail using compiler X so it must be valid; let's look somewhere else for the error.

            What SHOULD be a feature of every compiler/runtime library is a switch that says "no runtime bounds checks, please". That would allow compiling code to run as fast as possible (as is required for modelers, e.g.) in production. But then another switch to say "check and report everything" for the initial test runs so that bugs can be found and eliminated more easily. The option of silently ignoring really stupid input parameters to a library call should go away. Crash and burn, or bitch and moan, ok, but "I'll ignore your stupidity so you never learn how to be a better programmer and can show the world how bad you are", no.

            • by garyebickford (222422) <gar37bicNO@SPAMgmail.com> on Wednesday January 08, 2014 @02:47PM (#45900135)

              Long ago I worked in a Pascal dialect (for the Perq workstation) that included several systems programming extensions. One was the ability to grab a block of memory as raw data, then work within that block to create and manipulate named variables as usual with full protection by the compiler. I don't recall the other extensions.

              I still think that for the vast majority of the code in most applications, the performance impact of runtime checking is minimal. So it's feasible to isolate the very few components where this is not the case, squeeze them down to the absolute minimum size and complexity, so that those pieces can be thoroughly vetted and maybe even 'proved'.

              Running in 'strict' mode is something I do during the entire dev and test cycle, then *usually* turn down for release - sometimes it has been beneficial to run internal-use programs (cron jobs) in mostly-strict, verbose logging mode to assist in debugging two years later when nobody knows how it works any more.

              I'm thinking that a smart compiler or maybe a runtime profiler might be able to figure out where runtime bounds checks are appropriate and where they are not, to a great extent. So maybe the default would be checks, with an option to turn off for particular variables (at run time).

              A related question comes from the principle of web programming, "Be strict with output, lenient with input": a web browser should be as correct as possible with everything it puts out, and try to figure out what it receives to do something sensible with it. I'm not sure this is still considered the 'right way', as it tended to encourage (or at least not discourage) a lot of bad things.

          • by Rhacman (1528815) on Wednesday January 08, 2014 @03:17PM (#45900423)
            C is a very low level programming language. If you ever look at the assembly listing for a C program you will generally find that each line of code maps to a relatively short sequence of assembly instructions. This virtue of C is what makes it so attractive for its original (and continued) use as a tool for writing operating systems, OS drivers, embedded systems, or anything where the developer needs or desires fine control of exactly what operations will be performed. Adding bounds checks lessens that control and for many applications where C is an appropriate language choice, would have a very real performance impact.

            That said, many C compilers, debuggers, and code analysis tools (such as cppcheck as mentioned in the summary) offer features to detect memory access violations (and other types of bugs) during development and testing but without adding permanent run-time checking to release builds.
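
            For reference, a typical invocation looks something like this (the file path is taken from the advisory quoted further down; treat the exact flags as an assumption):

            cppcheck --enable=warning lib/libXfont/src/bitmap/bdfread.c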
            • by garyebickford (222422) <gar37bicNO@SPAMgmail.com> on Wednesday January 08, 2014 @04:21PM (#45900989)

              Yes. I've basically argued that C should not be used for application programming in general - device drivers, kernels, maybe some other high performance OS tasks, and the occasional small high performance functions in larger programs.

              I can further argue that the extent to which even those domains need to be in a low-level language like C at present is really a testament to the limitations of compilers - IMHO it is high time to apply AI and machine learning techniques to compiler design and code translation. Watson could do some interesting things with language processing and a good knowledge base. In some cases the top-level symbolic program design could go right to custom silicon.

              It's significant that the 'infamous' APL, a very highly abstracted, mostly-functional interpreted language that treats pretty much everything as arrays, was often faster at things like matrix inversion than most compiled implementations in other languages. This is because the interpreter did almost no work, and the underlying assembly code for a given function could be tuned to the very restricted domain it operated in. I believe there was even a microcoded APL interpreter, which would basically make an APL virtual machine.

        • by ewibble (1655195) on Wednesday January 08, 2014 @01:16PM (#45898981)

          Many eyes does work, in so much as it helps a bit. Same with static analysis: it helps, but it's not perfect, and just because you have run your tool over your code does not mean you are safe.

          FLOSS gives you the following:
          1. an independent programmer may have looked at it.
          2. nobody can pull the plug.
          3. If it doesn't do what you want you can add it. My favorite.
          4. When a bug does occur they don't generally try to hide the fact.

          FLOSS is in no way a guarantee of adequate code; any idiot can start a project. But from what I have seen of closed source code, that is definitely no better.

          E.g. utility function to move a file:

          char buff[256];
          sprintf(buff, "mv %s %s", src, dst); /* no length check: src + dst can overflow buff */
          system(buff);                        /* and the names go straight to a shell */

          Problems: buffer overruns, and moving a file named "; rm -rf /" can be problematic. And it was not just a one-off, either.
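
          (For contrast, a minimal sketch of a safer version; move_file is a hypothetical name, and rename(2) only works within one filesystem:)

          #include <errno.h>
          #include <stdio.h>
          #include <string.h>

          int move_file(const char *src, const char *dst)
          {
              /* rename() takes the paths directly: no shell, no quoting,
                 no fixed-size command buffer to overflow. */
              if (rename(src, dst) != 0) {
                  fprintf(stderr, "move %s -> %s: %s\n", src, dst, strerror(errno));
                  return -1;
              }
              return 0;
          }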

          If you are looking for guarantees about code quality just because you are FLOSS you are going to be disappointed.

        • by Tough Love (215404) on Wednesday January 08, 2014 @02:08PM (#45899661)

          All you said is that sometimes we don't have enough eyes on the code. You failed to substantiate your thesis that "many eyes" is a myth. This particular bug was discovered precisely because we got some new eyes looking at the code, buffed up with a dose of technological superpower. The X11 project has traditionally been a rather small project with significant barriers to entry, like CVS commit access jealously owned by a small club. It's somewhat better after being pried loose from the Open Group's stultifying domination, but it's still a small project relative to its importance. More eyeballs are always better.

    • by MtHuurne (602934) on Wednesday January 08, 2014 @12:36PM (#45898531) Homepage

      Just because a lot of people use the compiled product, doesn't mean a lot of people read the source code. One of the X developers had a presentation slide that read "three people on this earth understand X input", followed by a slide "really wish I wasn't one of them" (video [youtube.com]).

      It does really help though to have multiple developers prod at your code. Compiling it with different compilers and for different CPUs and operating systems will unearth bugs. Using it in different scenarios will trigger bugs. Running different static code checkers will find bugs (like the one from TFA). And having people read the code and ask "why do you do that there, it seems weird" will often point to bugs.

      So many eyes certainly help code quality, but a lot of code doesn't get all that many eyes.

  • by uglyduckling (103926) on Wednesday January 08, 2014 @11:17AM (#45897711) Homepage
    Amazing how an automated tool can spot something like this after so many years.
    • Re:scary (Score:3, Insightful)

      by Anonymous Coward on Wednesday January 08, 2014 @11:29AM (#45897835)

      Given that you need to be using obsolete 90s bitmap fonts for this to be an issue, and that X11/X.org is never run as root, I'm not sure that "scary" is the word for this (there's a reason it hasn't come up before in the 23 years since it was introduced).

      Nonetheless, I'll be upgrading my X.org package just for thoroughness.

      • Re:scary (Score:5, Insightful)

        by buchner.johannes (1139593) on Wednesday January 08, 2014 @11:43AM (#45897987) Homepage Journal

        Given that you need to be using obsolete 90s bitmap fonts for this to be an issue, and that X11/X.org is never run as root, I'm not sure that "scary" is the word for this (there's a reason it hasn't come up before in the 23 years since it was introduced).

        Correct in principle, except for two remarks:

        • X runs as root, and always has. Just like getty.
        • If you craft a new bitmap font, running "xset fp+" as a user has the potential to gain you root privileges.

        So yes, not "scary". Just a critical security bug.

      • by hawkinspeter (831501) on Wednesday January 08, 2014 @11:54AM (#45898093)
        I'm running on Ubuntu and X is run as root. I'm just glad that the internet servers I set up don't run X.
    • by Bill, Shooter of Bul (629286) on Wednesday January 08, 2014 @11:39AM (#45897949) Journal

      Not amazed at all. Tools are much better at detecting these kinds of bugs than humans, with our limited stack space. And as time goes on, we build better tools. I'm not really surprised that humans aren't spending their time poring over the intricacies of an old font-loading section.

      Especially not surprised that people aren't looking for local privilege escalation vulnerabilities.

      Also not surprised, as X's security model has been known to be flawed for years.

      http://it.slashdot.org/story/13/12/31/2127243/x11xorg-security-in-bad-shape [slashdot.org]

    • by Unordained (262962) <unordained_slash ... @pseudotheos.com> on Wednesday January 08, 2014 @03:45PM (#45900653) Homepage

      Amazing that when they run this kind of automated tool on a project of this importance and breadth, this is ... the only vulnerability it found?

      This doesn't invalidate "many eyes" at all (as some are claiming here) -- the fact that a bunch of reviewers didn't find this one bug is unfortunate, but if "many eyes" had really failed, I would have expected automation to find dozens or hundreds of bugs.

  • by Anonymous Coward on Wednesday January 08, 2014 @11:20AM (#45897747)

    When was the last time you installed a "specially crafted" bdf font from anywhere?

    There are *much* worse actual security problems than this one, which in practice, wasn't much of a problem in its day several decades ago, and isn't a problem now...

    What's good is that the tools keep improving, and exposing problems...

    I sure wish Slashdot's editors would actually apply their brains to submissions, rather than cluttering up slashdot with things that aren't important; there will be security reports that actually matter for people to pay attention to....

  • Dangerous function (Score:5, Informative)

    by jones_supa (887896) on Wednesday January 08, 2014 @11:24AM (#45897793)

    There's a scanf used when loading BDF fonts that can overflow using a carefully crafted font. Watch out for those obsolete early-90s bitmap fonts.

    And watch out for scanf(). There's a reason Microsoft introduced scanf_s() and friends [microsoft.com], which the official C11 standard later adopted too (as the optional Annex K bounds-checking interfaces).
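
    (A minimal sketch of the difference; the 64-byte buffer is arbitrary:)

    #include <stdio.h>

    int main(void)
    {
        char name[64];

        /* Unbounded: %s keeps reading until whitespace, however long the
           token is; anything past 63 characters overflows name[]. */
        /* scanf("%s", name); */

        /* Bounded: the field width caps the read at 63 characters + NUL. */
        if (scanf("%63s", name) == 1)
            printf("read: %s\n", name);
        return 0;
    }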

    • by Viol8 (599362) on Wednesday January 08, 2014 @01:00PM (#45898805)

      The scanf() suite of functions is pretty horrid regardless of security issues. They never do quite what you expect and have endless little quirks that frankly are just a PITA. 99% of the time it's a lot easier to roll your own parsing code than to get *scanf() kicking and screaming to do what you want, and with C++ you have streams anyway. It's a pity they weren't just put out of their (our?) misery years ago and dumped from the C standard altogether.

  • by Red_Chaos1 (95148) on Wednesday January 08, 2014 @11:27AM (#45897823)
    ...of the specifics, but can someone tell me why it's even possible for something like a fucking font to cause a security issue? I'm not a coder, it's not something I can wrap my head around. I can sometimes get the gist of what a bit of code is doing when I look at it, but that's beside the point. It just seems to me so many things that should not be able to pose a security risk somehow get manipulated into being such risks, and it just blows my mind how it's even possible.
    • by mlts (1038732) on Wednesday January 08, 2014 @11:38AM (#45897921)

      In 1991, buffer overflows were just starting to become an issue for security. Back then, a lot of X servers came with no security, so any client could attach to the screen (no xhost or MIT magic cookie authentication). Back then, the goal was to get functionality working in the first place. If you wanted a word processor, you had vi in an xterm, or fired up Xemacs. The only word processor would probably have been a variant of WordPerfect or possibly FrameMaker, and those were mainly living on the NeXT platform.

      The X11 font bug is obscure enough to not be something that an attacker would be able to easily use. It is still a hole, but it has limited use, because to use it, one would have to have access as a user (unrestricted by policies like AppArmor or SELinux), and access to the X server's font path. This is about as hard as trying to place a ~user/ls in hopes that root runs something in the current directory over /bin/ls.

    • by SirGarlon (845873) on Wednesday January 08, 2014 @11:38AM (#45897923)

      The short answer is that carelessly written code anywhere in the system can create a vulnerability. A font needs to be loaded into memory, and in this case the code that loads it makes it possible to stick portions of the font into a part of memory where it doesn't belong. So if the "font" is actually a set of data constructed by the attacker, it can include an executable program that runs when the font is loaded.

      Back in 1991, the idea that someone would ever want to do this did not enter the imagination of a typical programmer.

    • by ledow (319597) on Wednesday January 08, 2014 @11:38AM (#45897929) Homepage

      You allocate 100 bytes on the stack for a string.

      The file you are reading a string from contains a string with more than 100 bytes of text before its closing NUL (\0) character. The program reads in the 100 bytes and then, because the programmer didn't tell it to check or to stop (in this instance), it keeps going.

      This puts whatever is in the file into whatever is NEXT TO the place you were storing the string. Often this is harmless data that happens to be near the string but, because of the nature of C and of programming in general, if you don't have appropriate protections it COULD write over "the stack" (which happens to contain the memory addresses of where the code has to go next). As such, with lots of clever manipulation, an absence of checks and an absence of various security technologies, loading anything even as harmless as a text file, or font, or anything in a packet from the net could result in arbitrary code execution as the user.

      In this case, the user is root.
      In this case, the overflow occurs but it's not yet been demonstrated that you can do anything dangerous with it (i.e. execute code).
      In this case, protections like DEP and stack-checking actually block the attack and just make the program crash.

      In ALL cases, if the programmer is awake and just checks ALL input that could come from an untrusted source, the question is moot.
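
      (A minimal illustrative sketch of that unchecked pattern, not the actual libXfont code; read_string is a made-up name:)

      #include <stdio.h>

      /* Copies a NUL-terminated string from a file into a fixed
         100-byte stack buffer with no length check. */
      void read_string(FILE *fp)
      {
          char buf[100];
          int c;
          size_t i = 0;

          /* Nothing stops i from running past the end of buf: a longer
             string overwrites whatever sits next to it on the stack.
             The fix: also stop the loop while i < sizeof buf - 1. */
          while ((c = fgetc(fp)) != EOF && c != '\0')
              buf[i++] = c;
          buf[i] = '\0';

          printf("%s\n", buf);
      }

      int main(void)
      {
          read_string(stdin);
          return 0;
      }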

      • by TheLink (130905) on Wednesday January 08, 2014 @12:15PM (#45898319) Journal

        And I've long seen that as a stupid design: mixing addresses and data in the same stack. You don't have to do that.

        It's funny how Intel/AMD make CPUs with billions of transistors and yet we are still mixing data and program addresses on the same stack.

        If you have separate stacks for parameters/data and return addresses, the attacker could clobber other data with data, but the program would still be running its intended code instead of arbitrary code of the attacker's choice - so it's more likely to throw errors or crash rather than get trivially pwned.

        Keeping separate stacks might even help with CPU performance - when you know that one stack always contains return addresses it could be easier to do optimization tricks - prefetching, cache prioritizing etc.

        Of course you will still be able to exploit certain programs by overflowing and overwriting other parameters: for example, a program ends up seeing "OK" in a parameter instead of "NO" and so it does something differently. But hackers won't be able to do the other common stuff they do nowadays.

    • by SuricouRaven (1897204) on Wednesday January 08, 2014 @11:39AM (#45897939)

      In a letter: C.

      It's a language that works very close to the metal. That allows programmers to squeeze the most out of the hardware, which matters now and mattered a lot more 23 years ago. It's fast, it's lean, it'll let you run a fast-paced 3D(ish) FPS like Doom on a 486* with fifty monsters lobbing fireballs at the player. The downside is that it's very easy for a programmer to screw up: you need to be aware of exactly how everything fits together in memory and always be thinking about exceptions and failure scenarios, otherwise this happens.

      The exact problem is a buffer overflow: the font-loading code allocates n bytes for some information, on the assumption that any sane and standards-compliant font will have at most n bytes there. A maliciously crafted font can have more than n, and the code, upon reaching that limit, just carries on reading. The extra ends up somewhere it shouldn't, overwriting adjacent memory in a way that can be steered into a code execution vulnerability.

      A good part of the history of programming languages involves trying to find ways to restrict the capabilities of a language just enough to stop a programmer from making a mistake of that nature, but without restricting them so much that capabilities or performance suffer.

      *I understand it could run on a 386, but that was pushing things a bit so you'd have to run it with reduced viewing size.

    • by jones_supa (887896) on Wednesday January 08, 2014 @11:39AM (#45897943)

      Any time you load some file format there is a risk of unexpected behavior due to buffer overflows. I guess it's ultimately the von Neumann architecture that we can blame (mixing code and data in adjacent memory areas). That, and using unsafe C functions...

      Even still, we should be able to do better. I agree that it's extremely cringe-worthy that a simple font can compromise the security of the system.

    • by wildstoo (835450) on Wednesday January 08, 2014 @11:42AM (#45897973)

      Buffer overflow. [wikipedia.org]

    • by Alioth (221270) <no@spam> on Wednesday January 08, 2014 @11:44AM (#45897997) Journal

      That's alright - it won't be easy to understand if you're not a coder. In fact many coders won't understand it - unless you've done quite a lot of system level C code or possibly assembly language, many categories of these exploits will look a bit like black magic.

      But in short, many categories of what should be pure data being used to exploit a security hole are buffer overflow exploits. A system-level program written in C allocates some memory for a purpose and, due to a bad or missing length check, someone can put more data in there than fits. As the data runs off the end of this allocated space, it can end up overwriting something else. Consider a small buffer that's allocated on the stack. The stack also contains where in memory the program should return to after the subroutine it's running has ended. If you find that the code that fills this buffer has a bad or no length check, you can put data larger than the buffer in here, overwrite the routine's return address, and make this return address be somewhere in your buffer (which also contains more executable code). When the routine finishes, it jumps to the return address you put there instead of what should be there, and your exploit code gets executed instead.

      There are many defences against this at the system level these days (such as non-executable stacks, address space randomization, etc.) but ways have been found to get around some of these defences.

    • by hawkinspeter (831501) on Wednesday January 08, 2014 @11:47AM (#45898027)
      Imagine, if you will, a car that has all the latest security features conceivable (biometrics up to and including your eyeballs). Also, imagine that there is a flaw with the radio aerial that enables someone to easily unscrew it and gain access to the engine compartment. By getting to the engine compartment, you can then exploit an electrical flaw to start the car and open the doors.

      Now, why would it be even possible for an aerial flaw to allow your car to be stolen?
    • by EmperorArthur (1113223) on Wednesday January 08, 2014 @11:54AM (#45898095)

      Quick guide to binary files, mostly from my and others' work on game saves.
      Almost all of them store the size of an array right before the data; this is even true for things like null-terminated strings. What gets fun is when you have an array of structs, which then holds an array. Most of those are read in using standard for loops, but an off-by-one error is still possible. Another (admittedly stupid) possibility is using a string-read function that looks for the '\0' character while working with a fixed-size array instead of a true string object. Actually, it's really easy for a malformed binary file to make a program attempt to read in gigabytes of data, or for a program that's not perfect to interpret some random number as an array size.

      About font files:
      Remember that font files tend to be ridiculously complicated. The new ones actually run code in a special virtual machine. Given everything I've said about binary files and just how many Java/Flash/JavaScript VM flaws we've seen, it's not really surprising.

      About X:
      Hell, X11 is so complicated I wouldn't be surprised if an arbitrary function could load random fonts via a function call that no toolkit ever uses. At that point you're talking about a normal function with any of the normal error cases.

      Pick your poison. There are many possibilities for errors.

      The amazing thing is that cppcheck caught it. That means it had to be some static problem with the code.

      cppcheck says this code is fine. Try to see why I disagree:

      char f(int i, char *data)
      {
          char array[6];
          array[i] = data[i];
          return array[0];
      }

    • by bluefoxlucid (723572) on Wednesday January 08, 2014 @11:59AM (#45898145) Journal

      In generic technical terms...

      Program flow is controlled by instrumentation data on what is called the "stack". The stack grows up or down; up-growing stack attacks are somewhat more esoteric, but very doable. Down-growing stacks are readily understood, which has led many people to blame the direction of stack growth on x86 for its vulnerability to these attacks (they're wrong). We'll use down-growing stacks for our explanation here.

      Each function sets up, from right to left (high address to low): return address, stack frame pointer, stored registers, and then local function variables. Local variables, as an implementation detail, are stored on the stack. Array variables are stored as a range, so if you allocate an integer of 4 bytes and then a character array of 5 bytes, it may look like [CCCCC][IIII][SFP.][RETP]. Remember, the integer is allocated first; the character array second. In reality, %esp just has the total aligned or unaligned size of the stack variables subtracted from it (it doesn't matter from a compatibility standpoint, but it's specified in the binary standard). alloca() does the same thing, because malloc() is expensive (takes too much CPU time) and requires later free()ing the RAM, while alloca(n) just subtracts n from %esp.

      If a program loads data into a pre-allocated buffer of, say, 75 bytes, or if it calls alloca() to allocate a stack-local temporary buffer of 75 bytes, you can overwrite other stuff by writing more than 75 bytes. Above, if you wrote 17 bytes into the 5 byte character array, you would overwrite SFP and RETP. So if a program assumes an input field is under 75 bytes, or if it reads a numeric value from input and allocates that much, and then reads more than that, it can overwrite control data. This may happen if, for example, the program allocates a 75 byte buffer and then accepts that a data file says "FIELD X IS 255 BYTES LONG" and copies 255 bytes into it, or if it accepts "FIELD X IS 10 BYTES LONG" and allocates 10 bytes, then copies an ASCIIZ string (a bunch of bytes terminated by a 0 byte--the length is everything up to the 0).

      In any such case, the overflow can spill into RETP. If you specially craft it to align a repeating set of values containing an address on the stack somewhat above the RETP, then dump in a bunch of AAAAAAAAAAAAAAA characters (inc %ecx on x86, essentially nothing), then dump in a piece of program code, the function will return to the program code you just wrote into the stack. More directly, it will probably land sloppily in your NOP slide, increment an unimportant register repeatedly, and then begin running your code.

      So there you have it. A program copies a big piece of data into a little place next to instrumentation data, overwrites instrumentation data, and the program does unexpected things when the CPU tries to use that instrumentation data to direct program flow. If you're very careful about it, you can write specific instrumentation data in and add code to the program, and the program will execute your code because it's directed to return to it instead of to the previous call point.
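
      (A tiny sketch to make that adjacency visible; it assumes GCC/Clang's __builtin_frame_address and a typical down-growing stack:)

      #include <stdio.h>

      void layout(void)
      {
          char array[5];

          /* The frame address is where the saved frame pointer (SFP)
             sits, with the return address (RETP) just above it; note
             how few bytes separate it from the local array. */
          printf("array at %p, frame (SFP/RETP) at %p\n",
                 (void *)array, __builtin_frame_address(0));
      }

      int main(void)
      {
          layout();
          return 0;
      }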

    • by Derek Pomery (2028) on Wednesday January 08, 2014 @12:05PM (#45898203)

      You might also find this article interesting.
      http://hackademix.net/2010/03/24/why-noscript-blocks-web-fonts/ [hackademix.net]

      Personally, I find stuff like web fonts a bit more worrying since the content is served remotely, unlike installing this font, which you'd need root to do in the first place.

    • by Kjella (173770) on Wednesday January 08, 2014 @12:08PM (#45898233) Homepage

      Well, the first thing you should understand is that "code" and "data" are entirely human distinctions; for a computer it's all zeros and ones. Computers have an instruction pointer which points to the memory address of the next instruction to perform. If an attacker can replace the contents of that memory location, he seizes control of the system. Let's take a very basic example:

      Program:
      1. Load file into memory from $base to $base + $size
      2. Read $offset from file
      3. Read $value from file
      4. Write $value to position $offset in the file.

      That's what the code thinks it does, at least. But what if there is no bounds checking and $base + $offset > $base + $size? Now you're writing outside the file to some other place in memory, for example where the instruction pointer is. You trick the software into writing your data to a memory location where it shouldn't be, and the data gets executed as machine code. Of course this is absolutely brain-dead code that will write anything to anywhere in memory, and I haven't discussed any of the countermeasures that make this difficult, but that's the gist of it.
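
      (The same pseudocode as a hedged C sketch; apply_patch and the file layout are made up for illustration:)

      #include <stdio.h>

      int apply_patch(FILE *fp)
      {
          unsigned char image[4096];
          unsigned long offset, value;

          /* 1. Load the file image into memory. */
          if (fread(image, 1, sizeof image, fp) != sizeof image)
              return -1;

          /* 2-3. Read $offset and $value from the same file. */
          if (fscanf(fp, "%lu %lu", &offset, &value) != 2)
              return -1;

          /* 4. Write $value at $offset, minus the missing check:
             if (offset >= sizeof image) return -1; */
          image[offset] = (unsigned char)value;   /* attacker-steered write */
          return 0;
      }

      int main(void)
      {
          return apply_patch(stdin);
      }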

    • You can craft your data (it doesn't have to be a font; well, in this specific case it does, but the technique in general doesn't care what format) so that the function loading it will keep loading past the end of the chunk of memory it allocated for that data. And if you keep on going, you start overwriting other bits of data, such as the address where code execution should resume once the loader function finishes. Now you just replace that address with one pointing to your "data", and make your data actually be code.

      Voila, you now have this program running random code you included as part of a "data" file, which can do anything that program can do with its given credentials. This is a buffer overflow exploit. [wikipedia.org]

  • by Burz (138833) on Wednesday January 08, 2014 @11:34AM (#45897889) Journal

    It was designed assuming X11 (and Linux itself) had big security holes to begin with. [qubes-os.org]

    In fact, after acclimating to the Qubes desktop architecture, the whole monolithic kernel + X server arrangement looks like a raft full of holes waiting to be exploited. Both the X11 layer *and* the Linux kernel need to be demoted to providing features only, not relied upon for overall system security.

    • by Junta (36770) on Wednesday January 08, 2014 @12:15PM (#45898317)

      Basically, Qubes OS is about as likely to be affected as a modern Linux distribution. Xorg does not run with special privilege there, and thus the scope of the attack is limited to that user's things.

      While that means the underlying integrity of the system and other users is intact, it does little to comfort the vast majority of desktop users, as xkcd succinctly expresses: http://xkcd.com/1200/ [xkcd.com]

      • by Burz (138833) on Wednesday January 08, 2014 @12:35PM (#45898513) Journal

        Incorrect. An exploited Qubes X11 has control over only the apps and data assigned to the exploited Xen domain; it would remain blocked from any baremetal administrative functions.

        An exploited baremetal Linux/X11 has control over user I/O for everything done by the exploited user, so they are SOL as soon as they try to perform a system-wide admin function.

        Keeping sensitive data under different user accounts would provide virtually no protection for threat models that apply to typical desktops.

        • by Junta (36770) on Wednesday January 08, 2014 @03:06PM (#45900323)

          By exploiting Xorg, it can likely reach more *important* things like credit card numbers, bank account information, and so on and so forth. The likelihood is very high that the exploited X server is going to host an input of some great importance.

          If the user is very fastidious in sorting every single little thing into distinct AppVMs, then the attack surface can be meaningfully reduced. However such a fastidious user is unlikely to do activities that would cause bitmap fonts to be read in from an untrusted source.

          Qubes OS is a fascinating tool to help the careful be more effective in their effort, but the practical reality is that the people most afflicted by these attacks would not create a more secure environment in Qubes than a normal environment.

  • by fisted (2295862) on Wednesday January 08, 2014 @11:40AM (#45897957)

    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    NetBSD Security Advisory 2014-001

    Topic: Stack buffer overflow in libXfont

    Version: NetBSD-current: source prior to Tue, Jan 7th, 2014
    NetBSD 6.1: affected
    NetBSD 6.0 - 6.0.2: affected
    NetBSD 5.1 - 5.1.2: affected
    NetBSD 5.2: affected

    Severity: privilege escalation

    Fixed: NetBSD-current: Tue, Jan 7th, 2014
    NetBSD-6-0 branch: Tue, Jan 7th, 2014
    NetBSD-6-1 branch: Tue, Jan 7th, 2014
    NetBSD-6 branch: Tue, Jan 7th, 2014
    NetBSD-5-2 branch: Tue, Jan 7th, 2014
    NetBSD-5-1 branch: Tue, Jan 7th, 2014
    NetBSD-5 branch: Tue, Jan 7th, 2014

    Teeny versions released later than the fix date will contain the fix.

    Please note that NetBSD releases prior to 5.1 are no longer supported.
    It is recommended that all users upgrade to a supported release.

    Abstract

    A stack buffer overflow in parsing of BDF font files in libXfont was
    found that can easily be used to crash X programs using libXfont,
    and likely could be exploited to run code with the privileges of
    the X program (most notably, the X server, commonly running as root).

    This vulnerability has been assigned CVE-2013-6462

    Technical Details

    From the X.org advisory:

    Scanning of the libXfont sources with the cppcheck static analyzer
    included a report of:

    [lib/libXfont/src/bitmap/bdfread.c:341]: (warning)
    scanf without field width limits can crash with huge input data.

    Evaluation of this report by X.Org developers concluded that a BDF font
    file containing a longer than expected string could overflow the buffer
    on the stack. Testing in X servers built with Stack Protector resulted
    in an immediate crash when reading a user-provided specially crafted font.

    As libXfont is used to read user-specified font files in all X servers
    distributed by X.Org, including the Xorg server which is often run with
    root privileges or as setuid-root in order to access hardware, this bug
    may lead to an unprivileged user acquiring root privileges in some systems.

    This bug appears to have been introduced in the initial RCS version 1.1
    checked in on 1991/05/10, and is thus believed to be present in every X11
    release starting with X11R5 up to the current libXfont 1.4.6.
    (Manual inspection shows it is present in the sources from the X11R5
    tarballs, but not in those from the X11R4 tarballs.)

    Solutions and Workarounds

    Workaround: restrict access to the X server.

    Solutions: a fix is included in the following versions:

    xorg: xsrc/external/mit/libXfont/dist/src/bitmap/bdfread.c
    HEAD 1.3
    netbsd-6 1.1.1.2.2.1
    netbsd-6-1 1.1.1.2.6.1
    netbsd-6-0 1.1.1.2.4.1
    netbsd-5 1.1.1.1.2.2
    netbsd-5-2 1.1.1.1.2.1.4.1
    netbsd-5-1 1.1.1.1.2.1.2.1

    xfree: xsrc/xfree/xc/lib/font/bitmap/bdfread.c
    HEAD 1.4
    netbsd-6 1.2.8.1
    netbsd-6-1 1.2.14.1
    netbsd-6-0 1.2.10.1
    netbsd-5 1.2.2.1
    netbsd-5-2 1.2.12.1
    netbsd-5-1 1.2.6.1

    To obtain fixed binaries, fetch the appropriate xbase.tgz from a daily
    build later than the fix dates, i.e.
    http://nyftp.netbsd.org/pub/NetBSD-daily/<branch>/<date>/<arch>/binary/sets/xbase.tgz
    with a date 20

  • by thomasdz (178114) on Wednesday January 08, 2014 @11:51AM (#45898077)

    I'm running OpenBSD on my VAX. Go ahead. Try to exploit a buffer overflow on my home VAX cluster. If you can, then you deserve a prize because you've learned VAX machine code.

  • by gmuslera (3436) on Wednesday January 08, 2014 @12:29PM (#45898461) Homepage Journal
    ... by the developers. That a bug or vulnerability is found and announced at a certain moment, be it in closed or open source programs, doesn't ensure that the bad guys (working for the NSA or other places) haven't found it and been exploiting it for some time already. That the bug can be found in automated ways (in this case it was static source analysis, but it could be checking for undocumented open ports or sql injection [owasp.org]) makes it almost certain that it could have been exploited before.
  • by Danzigism (881294) on Wednesday January 08, 2014 @12:41PM (#45898577)
    I find this interesting, since most of us gave Microsoft flak for so many years because of their terrible vulnerabilities. Turns out that nearly 90% of all Windows updates are for patching security issues with the UI. That is why Microsoft is convincing admins to use Server 2012 with just Server Core and PowerShell: it simply makes the whole system more secure. Who needs more than a console anyway? If you ask me, you can get plenty of work done with vim and lynx, and entertain yourself with 0verkill. ;-)
  • by johanwanderer (1078391) on Wednesday January 08, 2014 @01:27PM (#45899115)
    It's kinda funny, but all my X servers run on Windows these days, and only once in a blue moon, so I can access those one or two stubborn applications that require X. Not that that makes it less of an issue.
