23-Year-Old X11 Server Security Vulnerability Discovered 213

An anonymous reader writes "The recent report of X11/X.Org security being in bad shape rings even more true today. The X.Org Foundation announced that it has found an X11 security issue dating back to 1991. The issue is a possible stack buffer overflow that could lead to privilege escalation to root, and it affects all versions of the X Server back to X11R5. After sitting in the codebase for 23 years, the vulnerability was finally uncovered by the automated cppcheck static-analysis utility." There's a scanf call used when loading BDF fonts that can be overflowed by a carefully crafted font. Watch out for those obsolete early-'90s bitmap fonts.
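For the curious, the bug class looks roughly like this. A minimal sketch assuming a fixed-size glyph-name buffer; illustrative only, not the actual libXfont source:

    /* Sketch of the bug class: an unbounded %s conversion into a
       fixed-size stack buffer while parsing a BDF glyph line. */
    #include <stdio.h>

    void parse_glyph_name(const char *line)
    {
        char name[100];

        /* Vulnerable: %s has no field width, so a glyph name longer
           than 99 bytes in a crafted font overruns name[]. */
        sscanf(line, "STARTCHAR %s", name);

        /* Safe variant: the field width caps the copy at 99 bytes
           plus the terminating NUL. */
        sscanf(line, "STARTCHAR %99s", name);
    }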
  • Many eyes... (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @10:17AM (#45897705)

    ...looking elsewhere.

    • Re: (Score:3, Insightful)

      by i kan reed ( 749298 )

      The real trick of "with enough eyes, all bugs are shallow" is that the number of eyes needed grows exponentially with lines of code, and open source projects don't actually reach it.

      • Re:Many eyes... (Score:5, Insightful)

        by grub ( 11606 ) <slashdot@grub.net> on Wednesday January 08, 2014 @10:28AM (#45897829) Homepage Journal
        "Many eyes" is bogus, "the right eyes" is more appropriate.
        • Also bologna. There's no such thing as bug-spotting superpowers. The most reliable way to detect bugs is testing, of multiple stripes: unit testing, regression testing, all the testing (and I'm a developer, so I detest QAs)

          • Re:Many eyes... (Score:5, Informative)

            by garyebickford ( 222422 ) <gar37bic@@@gmail...com> on Wednesday January 08, 2014 @11:51AM (#45898703)

            Actually, it was shown back in the late 1970s that it is essentially impossible for 'black box' testing to discover more than about 30% of the bugs in a sufficiently large code base. The argument rests on the combinatorial explosion of following every possible branch path under every possible combination of input, both valid and invalid. It's fairly easy to build a one-page program that cannot effectively be completely tested (a sketch follows below). It was also shown that, given good programming practice, roughly 70% of the bugs are built into the design, before a line of code has been written. Then, finally, a significant percentage of bugs are of the sort where it's a judgement call whether it's a bug or a feature.

            Source: I used to run a Software Quality Assurance Workshop for my then-company, and did the research. A few programming practices have changed, and the repertoire of automated tools has greatly increased in both quantity and sophistication, but average program size and the list of asynchronous externalities has ballooned by two or three orders of magnitude, so there we are.
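            To make the one-page-program point concrete, here is a hypothetical illustration (mine, not from the workshop): each data-dependent branch doubles the number of execution paths, so exhaustive black-box testing is out of reach almost immediately.

                /* 64 independent branches give up to 2^64 distinct paths:
                   roughly 1.8e19 input classes to cover exhaustively. */
                unsigned f(unsigned long long x)
                {
                    unsigned r = 0;
                    for (int i = 0; i < 64; i++)
                        if (x & (1ULL << i))  /* each bit is its own branch */
                            r = r * 31u + (unsigned)i;
                    return r;
                }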

          • "Good" test data is often an issue... in this case, I am sure your test suite includes some "carefully crafted fonts", right?
        • "Many eyes" is bogus, "the right eyes" is more appropriate.

          In this case "the right eyes" are robotic.

      • Comment removed based on user account deletion
        • Having never been a significant C coder (I skipped that phase), I'll argue that by my observation the vast majority of problems would be eliminated if C programs were incapable of buffer overflows. This is less simple than it seems. It would require not only some language features, but library changes, and would slow things down (imperceptibly?).

          There is no reason an application developer should ever encounter a segfault in a modern language. Many of today's languages are essentially immune to this problem

          • It would require not only some language features, but library changes, and would slow things down (imperceptibly?).

            Having dealt with some fascinating FORTRAN code that ran perfectly under one compiler and failed with horrible segfaults under another, I can approve of languages that include bounds checking at execution time -- as long as it can be disabled when desired.

            The specific example is a bit of FORTRAN that was processing input parameters from a file, parsing lines of text for colon delimited parameter/value pairs. Ran fine under one compiler (gfortran, as I recall), but died every time when compiled with PGI. T

            • Long ago I worked in a Pascal dialect (for the Perq workstation) that included several systems programming extensions. One was the ability to grab a block of memory as raw data, then work within that block to create and manipulate named variables as usual with full protection by the compiler. I don't recall the other extensions.

              I still think that for the vast majority of the code in most applications, the performance impact of runtime checking is minimal. So it's feasible to isolate the very few components

          • C is a very low level programming language. If you ever look at the assembly listing for a C program you will generally find that each line of code maps to a relatively short sequence of assembly instructions. This virtue of C is what makes it so attractive for its original (and continued) use as a tool for writing operating systems, OS drivers, embedded systems, or anything where the developer needs or desires fine control of exactly what operations will be performed. Adding bounds checks lessens that control
            • Yes. I've basically argued that C should not be used for application programming in general - it belongs in device drivers, kernels, maybe some other high-performance OS tasks, and the occasional small high-performance function in larger programs.

              I can further argue that the extent to which even those domains need to be in a low-level language like C at present is really a testament to the limitations of compilers - IMHO it is high time to apply AI and machine learning techniques to compiler design and code translation

        • Many eyes does work, in that it helps a bit. The same goes for static analysis: it helps, but it's not perfect, and just because you have run your tool over your code does not mean you are safe.

          FLOSS give you the following:
          1. an independent programmer may have looked at it.
          2. nobody can pull the plug.
          3. If it doesn't do what you want you can add it. My favorite.
          4. When a bug does occur they don't generally try to hide the fact.

          FLOSS is in no way a guarantee of adequate code; any idiot can start a project, but fr

        • All you said is that sometimes we don't have enough eyes on the code. You failed to substantiate your thesis that "many eyes" is a myth. This particular bug was discovered precisely because we got some new eyes looking at the code, buffed up with a dose of technological superpower. The X11 project has traditionally been a rather small project with significant barriers to entry, like cvs commit access jealously guarded by a small club. It's somewhat better after being pried loose from Open Group's stultifying do

    • Just because a lot of people use the compiled product, doesn't mean a lot of people read the source code. One of the X developers had a presentation slide that read "three people on this earth understand X input", followed by a slide "really wish I wasn't one of them" (video [youtube.com]).

      It does really help though to have multiple developers prod at your code. Compiling it with different compilers and for different CPUs and operating systems will unearth bugs. Using it in different scenarios will trigger bugs. Running

  • Amazing how an automated tool can spot something like this after so many years.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Given that you need to be using obsolete 90s bitmap fonts for this to be an issue, and that X11/X.org is never run as root, I'm not sure that "scary" is the word for this (there's a reason it hasn't come up before in the 23 years since it was introduced).

      Nonetheless, I'll be upgrading my X.org package just for thoroughness.

      • Re:scary (Score:5, Insightful)

        by buchner.johannes ( 1139593 ) on Wednesday January 08, 2014 @10:43AM (#45897987) Homepage Journal

        Given that you need to be using obsolete 90s bitmap fonts for this to be an issue, and that X11/X.org is never run as root, I'm not sure that "scary" is the word for this (there's a reason it hasn't come up before in the 23 years since it was introduced).

        Correct in principle, except for two remarks:

        • X runs as root, and always has. Just like getty.
        • If you craft a new bitmap font, running "xset fp+" as a user has the potential to gain you root privileges.

        So yes, not "scary". Just a critical security bug.

      • I'm running on Ubuntu and X is run as root. I'm just glad that the internet servers I set up don't run X.
    • Not amazed at all. Tools are much better at detecting these kinds of bugs than humans, with our limited stack space. And as time goes on, we build better tools. I'm not really surprised that humans aren't spending their time poring over the intricacies of an old font-loading section.

      Especially not surprised that people aren't looking for local privilege escalation vulnerabilities.

      Also not surprised as X's security model has been known to be flawed for years.

      http://it.slashdot.org/story/13/12/31/2127 [slashdot.org]

      • Re:scary (Score:5, Insightful)

        by PPH ( 736903 ) on Wednesday January 08, 2014 @11:09AM (#45898243)

        Right. And this is why it's so important to have the source code available. Some argue, "Who actually looks at this stuff?" Well, here's an example of someone who did. Not in the classical sense of some aspie code geek reading it by hand, but just feeding it to some automated tools and seeing what pops out.

        • We don't know who found the bug. If it was someone from the development team then this is equivalent to a closed source situation. Not an argument for open = better.

      • Also automated systems will find _different_ types of bugs than humans. Just like computer chess players use different strategies than human players. I anticipate that the next step will be automated systems for learning new automated bug-finding methods that humans may never have considered.

    • Amazing that when they run this kind of automated tool on a project of this importance and breadth, this is ... the only vulnerability it found?

      This doesn't invalidate "many eyes" at all (as some are claiming here) -- the fact that a bunch of reviewers didn't find this one bug is unfortunate, but if "many eyes" had really failed, I would have expected automation to find dozens or hundreds of bugs.

  • by Anonymous Coward on Wednesday January 08, 2014 @10:20AM (#45897747)

    When was the last time you installed a "specially crafted" bdf font from anywhere?

    There are *much* worse actual security problems than this one, which in practice, wasn't much of a problem in its day several decades ago, and isn't a problem now...

    What's good is that the tools keep improving, and exposing problems...

    I sure wish Slashdot's editors would actually apply their brains to submissions, rather than cluttering up slashdot with things that aren't important; there will be security reports that actually matter for people to pay attention to....

    • by NoNonAlphaCharsHere ( 2201864 ) on Wednesday January 08, 2014 @10:26AM (#45897811)
      Granted, not a lot of people are going to scurry off and "carefully craft" a font in an obsolete format for a new 0-day 'sploit. Actually, it's the "23 years old" and "discovered by a (new) automated test" parts that are interesting. Possibly even slashworthy.
    • by grnbrg ( 140964 )

      When was the last time you installed a "specially crafted" bdf font from anywhere?

      You don't have to. Anyone with a writeable ${HOME}/.fonts can.

      This could be really big.

    • by Moskit ( 32486 )

      > When was the last time you installed a "specially crafted" bdf font from anywhere?

      I got a very nice BDF font recently from a guy who said he can't say where he works or doesn't work. It installed very nicely, and it is signed "specially crafted" indeed! Font file also has a "Made by ANToN of eSsAy" attribution. Nice handle!

    • by digsbo ( 1292334 )
      I've loaded special bitmap fonts several times, because I'm nuts about the old 6x13 bitmap font that was the default on the old SGI Irix systems I used in the '90s (no preferable font exists at my current DPI, and I've even carried it into Visual Studio). Granted, it wasn't BDF, but there are some folks out there who make niche fonts available.
    • Last week I installed the custom BDF font contained in my HP 1670D logic analyzer which is needed to support a remote client display. There are people that still rely on the "obsolete" facilities of X11.

  • Dangerous function (Score:5, Informative)

    by jones_supa ( 887896 ) on Wednesday January 08, 2014 @10:24AM (#45897793)

    There's a scanf call used when loading BDF fonts that can be overflowed by a carefully crafted font. Watch out for those obsolete early-'90s bitmap fonts.

    And watch out for scanf(). There's a reason Microsoft introduced scanf_s() and friends [microsoft.com], which the official C11 standard later adopted as well, in the optional Annex K.
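    For illustration, a hedged sketch of the difference (Annex K is optional: MSVC ships it, glibc does not):

        #define __STDC_WANT_LIB_EXT1__ 1  /* opt in to Annex K where provided */
        #include <stdio.h>

        int main(void)
        {
            char buf[16];

            scanf("%s", buf);    /* unbounded read: the classic overflow  */
            scanf("%15s", buf);  /* portable fix: an explicit field width */
        #ifdef __STDC_LIB_EXT1__
            scanf_s("%s", buf, (rsize_t)sizeof buf); /* Annex K: size arg */
        #endif
            return 0;
        }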

    • by Viol8 ( 599362 )

      The scanf() suite of functions is pretty horrid regardless of security issues. They never do quite what you expect and have endless little quirks that frankly are just a PITA. 99% of the time it's a lot easier to roll your own parsing code than to get *scanf() kicking and screaming to do what you want, and with C++ you have streams anyway. It's a pity they weren't just put out of their (our?) misery years ago and dumped from the C standard altogether.

  • ...of the specifics, but can someone tell me why it's even possible for something like a fucking font to cause a security issue? I'm not a coder, it's not something I can wrap my head around. I can sometimes get the gist of what a bit of code is doing when I look at it, but that's beside the point. It just seems to me so many things that should not be able to pose a security risk somehow get manipulated into being such risks, and it just blows my mind how it's even possible.
    • by mlts ( 1038732 )

      In 1991, buffer overflows were just becoming an issue when it came to security. Back then, a lot of X servers came with no security, so any client could attach to the screen (no xhost or MIT magic cookie authentication). Back then, the goal was to get functionality working in the first place. If you wanted a word processor, you had vi in an xterm, or you fired up Xemacs. The only word processor would probably have been a variant of WordPerfect or possibly FrameMaker, and those were mainly living on the

    • The short answer is that carelessly written code anywhere in the system can create a vulnerability. A font needs to be loaded into memory, and in this case the code that loads it makes it possible to stick portions of the font into a part of memory where it doesn't belong. So if the "font" is actually a set of data constructed by the attacker, it can include an executable program that runs when the font is loaded.

      Back in 1991, the idea that someone would ever want to do this did not enter the imagination of

    • by ledow ( 319597 )

      You allocate 100 bytes on the stack for a string.

      The file you are reading a string from contains a string with more than 100 bytes of text before its closing NUL (\0) character. The program reads in the 100 bytes and then, because the programmer didn't tell it to check or to stop (in this instance), it keeps going.

      This puts whatever is in the file into whatever is NEXT TO the place you were storing the string. Often this is harmless data that happens to be near the string but, because of the nature of
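      In code, the scenario above looks something like this (an illustrative sketch, not from X.Org):

        #include <stdio.h>

        void read_name(FILE *fp)
        {
            char name[100];
            int  i = 0, c;

            /* No bounds check: a file with more than 99 bytes before
               the NUL keeps writing past name[] into adjacent stack
               memory, eventually the saved return address. */
            while ((c = getc(fp)) != EOF && c != '\0')
                name[i++] = (char)c;
            name[i] = '\0';

            /* The fix is one extra condition:
               while (i < (int)sizeof name - 1 && ...) */
        }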

      • by TheLink ( 130905 )

        And I've long seen that as a stupid design: mixing addresses and data in the same stack. You don't have to do that.

        It's funny how Intel/AMD make CPUs with billions of transistors and yet we are still mixing data and program addresses on the same stack.

        If you have separate stacks for parameters/data and return addresses, the attacker could clobber other data with data, but the program would still be running its intended code instead of arbitrary code of the attacker's choice - so it's more likely to throw errors

    • In a letter: C.

      It's a language that works very close to the metal. That allows programmers to squeeze the most out of the hardware - which matters now, and mattered a lot more 23 years ago. It's fast, it's lean, it'll let you run a fast-paced 3D(ish) FPS like Doom on a 486* with fifty monsters lobbing fireballs at the player. The downside is that it's very easy for a programmer to screw up - you need to be aware of exactly how everything fits together in memory and always be thinking about exception

      • by tibit ( 1762298 )

        I don't think it's the problem with the language proper, just with the hopeless standard C library. Nobody is forced to use it naked, though. Either roll your own wrapper, or use something already made like Dave Hanson's code from C Interfaces and Implementations [google.com].

    • Any time you load some file format, there is a risk of unexpected behavior due to buffer overflows. I guess it's ultimately the von Neumann computer architecture that we can blame (mixing code and data in adjacent memory areas). That, and using unsafe C functions...

      Even still, we should be able to do better. I agree that it's extremely cringe-worthy that a simple font can compromise the security of the system.

    • Buffer overflow. [wikipedia.org]

    • by Alioth ( 221270 )

      That's alright - it won't be easy to understand if you're not a coder. In fact many coders won't understand it - unless you've done quite a lot of system level C code or possibly assembly language, many categories of these exploits will look a bit like black magic.

      But in short, many categories of what should be pure data being used to exploit a security hole are things like buffer overflow exploits. A system-level program written in C allocates some memory for a purpose, and due to a bad or missing length check

    • Imagine, if you will, a car that has all the latest security features conceivable (biometrics up to and including your eyeballs). Also, imagine that there is a flaw with the radio aerial that enables someone to easily unscrew it and gain access to the engine compartment. By getting to the engine compartment, you can then exploit an electrical flaw to start the car and open the doors.

      Now, why would it be even possible for an aerial flaw to allow your car to be stolen?
    • Quick guide to binary files, mostly from my and others' work on game saves.
      Almost all of them store the size of an array right before the data. This is even true for things like null-terminated strings. What gets fun is when you have an array of structs, which then holds an array. Most of those are read in using standard for loops, but an off-by-one error is still possible. Another (admittedly stupid) possibility is using a string read function that looks for the '\0' character while working with a fixed-size buffer
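      A sketch of that pattern (a hypothetical save-file reader; the names are made up):

        #include <stdint.h>
        #include <stdio.h>

        int read_record(FILE *fp)
        {
            uint32_t len;
            char buf[256];

            if (fread(&len, sizeof len, 1, fp) != 1)
                return -1;

            /* The length prefix comes from the file, so validate it.
               Drop this check and a crafted len overflows buf[]; write
               it as "len > sizeof buf" and you get the off-by-one. */
            if (len >= sizeof buf)
                return -1;

            if (fread(buf, 1, len, fp) != len)
                return -1;
            buf[len] = '\0';  /* safe: len <= 255 here */
            return 0;
        }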

    • In generic technical terms...

      Program flow is controlled by instrumentation data on what is called the "stack". The stack grows up or down; up-growing stack attacks are somewhat more esoteric, but very doable. Down-growing stacks are readily understood, which has led to many people blaming the direction of stack growth on x86 for its vulnerability to these attacks (they're wrong). We'll use down-growing stacks for our explanation here.

      Each function sets up, from right to left (high address to low),

    • You might also find this article interesting.
      http://hackademix.net/2010/03/24/why-noscript-blocks-web-fonts/ [hackademix.net]

      Personally, I find stuff like web fonts a bit more worrying since the content is served remotely, unlike installing this font, which you'd need root to do in the first place.

    • by Kjella ( 173770 )

      Well, the first thing you should understand is that "code" and "data" are entirely human distinctions; for a computer they're all zeros and ones. Computers have an instruction pointer which points to the memory address of the next instruction to perform. If an attacker can replace the contents of that memory location, he seizes control of the system. Let's take a very basic example:

      Program:
      1. Load file into memory from $base to $base + $size
      2. Read $offset from file
      3. Read $value from file
      4. Write $value to $base + $offset
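      Rendered as C, that pseudocode is roughly the following (an illustrative sketch; the absent bounds check is the whole bug):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        void apply_patch(char *base, size_t size, FILE *fp)
        {
            uint32_t offset, value;

            if (fread(&offset, sizeof offset, 1, fp) != 1) return;
            if (fread(&value,  sizeof value,  1, fp) != 1) return;

            /* Missing: if (offset > size - sizeof value) return;
               Without it, the file decides where in memory to write. */
            (void)size;
            memcpy(base + offset, &value, sizeof value);
        }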

      • Well, the first thing you should understand is that "code" and "data" are entirely human distinctions, for a computer they're all zeros and ones.

        Some computers do understand such a distinction at the hardware level. They are slightly more hassle, so we mostly don't use that kind of functionality. That is looking like a mistake.

      • That's only true for the von Neumann architecture, not for a (pure) Harvard architecture.

        Of course, the Harvard architecture has some issues, like not being able to store code on the same device as data, which make it impractical for normal use. And mitigating those issues recreates the possibility of the security holes that von Neumann machines have.
    • by segin ( 883667 )

      You can craft your data (it doesn't have to be a font, well, in this specific case it does, but the technique in general doesn't care what format) so that the function loading data will keep loading data past the end of the chunk of memory it allocated for that data. And if you keep on going, you start overwriting other bits of data, such as the address where code execution should resume once the loader function finishes. Now you just replace that address with one pointing to your "data", and make your data

  • It was designed assuming X11 (and Linux itself) had big security holes to begin with. [qubes-os.org]

    In fact, after acclimating to the Qubes desktop architecture, the whole monolithic kernel + X server arrangement looks like a raft full of holes waiting to be exploited. Both the X11 layer *and* the Linux kernel need to be demoted to providing features only, not relied upon for overall system security.

    • by Junta ( 36770 )

      Basically, Qubes OS is as likely to be affected as a modern Linux distribution. Xorg does not run with special privilege, and thus the scope of the attack is limited to that user's things.

      While that means the underlying integrity of the system and other users is intact, it does little to comfort the vast majority of desktop users, as xkcd succinctly expresses: http://xkcd.com/1200/ [xkcd.com]

      • by Burz ( 138833 )

        Incorrect. An exploited Qubes X11 has control over only the apps and data assigned to the exploited Xen domain; it would remain blocked from any baremetal administrative functions.

        An exploited baremetal Linux/X11 has control over user I/O for everything done by the exploited user, so they are SOL as soon as they try to perform a system-wide admin function.

        Keeping sensitive data under different user accounts would provide virtually no protection for threat models that apply to typical desktops.

        • By exploiting Xorg, it can likely get at more *important* things like credit card numbers, bank account information, and so on and so forth. The likelihood is very high that the exploited X server is going to host input of some great importance.

          If the user is very fastidious in sorting every single little thing into distinct AppVMs, then the attack surface can be meaningfully reduced. However such a fastidious user is unlikely to do activities that would cause bitmap fonts to be read in from

  • -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA1

    NetBSD Security Advisory 2014-001

    Topic: Stack buffer overflow in libXfont

    Version: NetBSD-current: source prior to Tue, January 7th, 2014
    NetBSD 6.1: affected
    NetBSD 6.0 - 6.0.2: affected
    NetBSD 5.1 - 5.1.2: affected

  • by thomasdz ( 178114 ) on Wednesday January 08, 2014 @10:51AM (#45898077)

    I'm running OpenBSD on my VAX. Go ahead. Try to exploit a buffer overflow on my home VAX cluster. If you can, then you deserve a prize because you've learned VAX machine code.

  • ... by the developers. That a bug or vulnerability is found and announced at a certain moment, be it in closed or open source programs, doesn't ensure that the bad guys (working for the NSA or other places) haven't found it and been exploiting it for some time already. That the bug can be found in automated ways (in this case it was static source analysis, but it could be checking for undocumented open ports or SQL injection [owasp.org]) makes it almost certain that it could have been exploited before.
  • I find this interesting since most of us gave Microsoft flak for so many years because of their terrible vulnerabilities. Turns out that nearly 90% of all Windows updates patch security issues in the UI. That is why Microsoft is convincing admins to use Server 2012 with just Server Core and PowerShell: it simply makes the whole system more secure. Who needs more than a console anyway? If you ask me, you can get plenty of work done with vim, lynx, and entertain yourself with 0verkill. ;-)
    • by jandrese ( 485 )
      It didn't help that for years and years the console was nigh useless on your typical Windows box. Microsoft has made big strides in improving it, but it's still clearly a redheaded stepchild in Redmond.
  • It's kinda funny, but all my X servers run on Windows these days, and they only run once in a blue moon, so I can access the one or two stubborn applications that require X. Not that it makes it less of an issue.
