X Consortium Announces X11R6.5.1

cthulhubob writes "X11R6.5.1 is available for download from X.Org. This update has over 200 enhancements, including a large revision to XPrint, the unified X printing service. Press release is available online." It was announced back on the 15th, but it's now availible for download - time to clog some bandwidth pipes!
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    In the BSD case, the 4.x numbering scheme started because AT&T objected to the name 5BSD, on the grounds that it would be confused with UNIX System V. In truth, they were probably (and correctly) afraid that BSD would have moved on to 6BSD before UNIX System VI came into being (it never did), making BSD appear to be a 'higher' version than AT&T UNIX.

    At any rate, the AT&T demands were accepted, so 5BSD became 4.1BSD, what would have been 6BSD became 4.2BSD and so on. There's nothing after 4.4BSD because it was the last new version of BSD released before the CSRG disbanded (with the Lite and Lite2 revisions addressing copyright issues with AT&T).
  • Alpha transparency would be really nice in the case where you have a window hanging out in the background of your display that doesn't change much, or has stuff that you don't need to devote your whole attention to, like a log file or biff or something. I would like to be able to set my focussed window to be 25% transparent, so I can still work with it, and still easily see through the front window to the back one in case something interesting happens there.

    Then, a window would really be like a window, not like an opaque piece of paper.
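    The blending such a translucent window needs is the standard "over" operation. As an illustration only (plain Python, not any X or toolkit API of the era; names are mine), the per-pixel arithmetic looks like this:

```python
def composite_over(front, back, opacity):
    """Blend a front-window pixel onto a back-window pixel.

    opacity is the front window's opacity in [0, 1]; the 25%-transparent
    window described above has opacity 0.75. Pixels are (R, G, B) tuples
    with 0-255 channels.
    """
    return tuple(opacity * f + (1 - opacity) * b
                 for f, b in zip(front, back))

# A solid red focused window at 25% transparency over a white log window:
blended = composite_over((255, 0, 0), (255, 255, 255), 0.75)
```

    With opacity 0.75, three quarters of each channel comes from the front window and a quarter shows through from the back, which is exactly the see-through-but-workable effect described.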

  • The BSD version numbers were legally required to stay at version 4.something. I think AT&T made this requirement in the early 80's when they started selling System V (because bigger version numbers are always better :-). I don't have a link to back this information up though. If somebody could provide that, I'd appreciate it.

  • Yes, there really were 10 complete versions before X11. There haven't been significant changes to the core since X11 Release 1, so the major version hasn't changed in a while. The changes have mostly been in the extensions (eg, XPrint is the big change for R6.5.1).
  • Quite often because the specs released to XFree86 are incomplete. An example would be the NVIDIA 2D driver. The official NVIDIA 2D driver uses DMA and AGP and is 80% faster than the XFree86 NVIDIA 2D driver which doesn't use DMA or AGP.

    There are some cards where XFree86 has been given full specifications and so the 2D and 3D speed is faster than the corresponding Windows driver. I'm not kidding. Check out the Matrox cards.

    Also, don't underestimate Microsoft. They have some of the world's most talented coders and they work on this fulltime. I wouldn't be surprised if their rasteriser is light years ahead of everyone else's.
  • how is this relevant to X?
  • Ok, what's the difference between this new X11R6.5.1 and the X11 4.0 that was just released earlier this year? I'm not as interested in licensing differences as I am in features. I'm quite confused now. What do I get from a Download at and what do I get at a download from
  • I'm still holding out for X11R6.5.6.32L27.2.34a++

    *That*'s going to be the version to last...

  • Wow.

    Takes me half an hour.

    And that includes the time taken to eat the pizza.
  • How can you not know what X is? I would think anyone who read this site often enough to have an account would know how important X is to the Unix community. Yes, this is a software release announcement, but it's not just a new release of some unknown console game. X is important enough to be announced here.
  • I think it's a little more involved than just adding a few more lines of code and releasing. The sample implementation they release doesn't have to include all the video drivers that people would need, it doesn't have to be optimized (which would probably take a lot of time), and they don't have to support it. They probably realize that all the above stuff is being done by XFree86 (who would probably still do it even if X.Org did it themselves... licensing issues and all), so why bother duplicating the effort that they might not be able to accomplish? I'm not sure how many people they employ down at the Open Group, but I'd rather they spend their time working on what they can add for X11R6.6/X11R7/X12 rather than writing drivers and replying to bug reports.
  • As the previous two people who replied didn't bother to read what you wrote and tried to slam you for complaining, I'll step in and tell you what X is.

    Under Unix, the display for a GUI is handled by a user level process. This program is called the 'X server'. It runs on the small box on your desk, and so it seems unintuitive for it to be a server, but that's what it is. It serves up access to your display to the various programs that want to use it.

    One major defining feature of X is that it uses a stream protocol that can go over a network. This means the program that displays stuff doesn't have to be running on the same computer that displays it.

    Another defining feature is that X is all about 'mechanism not policy'. It provides a way for programs to put up windows on your screen and things but doesn't say much about how they should act or look. Some people like this, some people don't. I fall in the first camp. It means that there isn't a 'standard desktop environment' for Unix, but it also means that we're free to switch desktop environments without breaking all of our programs.
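    The network transparency described above hinges on the DISPLAY variable, whose host:display.screen form tells a client which server to contact. A rough sketch of how such a string breaks down (simplified — it ignores rarer forms like host/unix:0, and the function name is mine):

```python
def parse_display(display):
    """Split an X DISPLAY string into (host, display number, screen).

    An empty host conventionally means a local connection; a named host
    means the client talks to that machine's X server over the network.
    """
    host, _, rest = display.partition(":")
    num, _, screen = rest.partition(".")
    return host, int(num), int(screen) if screen else 0

# Display locally vs. on another machine's first display, second screen:
local = parse_display(":0")            # ("", 0, 0)
remote = parse_display("deskbox:0.1")  # ("deskbox", 0, 1)
```

    The point is that the program drawing the windows only ever sees this string; whether its output lands on the box on your desk or one across the ocean is decided entirely by the host part.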

  • Now that's a pretty nifty project!

    I wish more focus would be put on projects like this rather than desktop environments at this point, as these advanced graphics extensions would benefit every project greatly, including the desktops.

    I know I'd love to have my X fonts antialiased, antialiased/alpha'd icons, and maybe a translucent GTK theme... ;D


  • Hmm... sorry to bring this up, but I couldn't resist. What you should have said is "We don't have to spell correctly"

  • If you actually examine the RELNOTES.TXT file on the ftp site, apparently you can build an Xserver for XFree86 so I think I'm just going to download the source and try just that.

    AFAIK, XFree86 takes the code that the X Consortium has developed and changes it in such a way as to make it x86 native. I'm not sure if that's the way it is.... is it?

    Either way it's nice to know that a new version has been released and it's exciting even if it doesn't directly affect XFree86, yet.

  • Why don't you tell us what those differences are ;)
  • Oh, and get some good lawyers....
  • Who knows, maybe someday, while you're sitting on your ass bemoaning yet another typo

    Like you know anything about me...

  • You have just been trolled.

    He just wanted to get a rise out of people by saying something completely stupid and inflammatory. Something like that. I've been to the site many times looking for news or interesting stuff. It just never happens. They don't fix typos and as trivial as that is, it fits in with the overall impression the site gives - X is dead, it's all over. Compare this with the hotbed of activity at sites like XFree, UtahGLX etc. Then of course there was the attempt to take X11R6.4 proprietary, everyone should laugh at them for that. A friend of mine said it was a way to stop all the freeloading of those free software types who never had the decency to cough up for the Motif license all X users were supposed to need. Needless to say he and I don't see eye-to-eye on these issues.

    Of course since I've only been working full time on X apps for 6 years maybe I don't understand the true subtlety and greatness of's stealth approach, unlike all you X programming demigods hanging out here flaming on slashdot (heh).

  • And I wish more focus would be put on *important* projects.

    Who gives a fuck about transparent windows and menus? Frigging useless eye candy for arts graduates.
  • I think he knows what X is, guys, I sure do, but I have absolutely no idea why X11R6.5.1 is important to *me*.

    You hit the nail on the head. A couple of the other posts suggest that I don't know what X is. Of course I know what X is! (I'd have a hell of a time using Linux if I didn't!) But does "X11R6.5.1" mean the same thing as just plain "X"? And what about "XFree86"? Just a couple technicalities that your average, casual Linux user (i.e. myself) might not know by default.

  • What about SMP processors?

    Couldn't a kernel be designed to split the processes of server and client software between 2 or more processors? For example, one processor handles all the server requests and another processor handles all the client requests within the same system?

    If we are SO client/server oriented, then is it not worth having a system designed so that it embraces that design philosophy down to the hardware level?

    This is already done as far as having separate computers act as servers to separate computers acting as clients. But now that a lot of software on a SINGLE computer interacts in a client/server way on a single CPU... perhaps it's time to have more than a single CPU and/or data bus lines?

  • Hey, you can get netatalk-1.23.35-beta-2-asun34... (I'm probably exaggerating, but that's the latest release, I believe.)

    -grendel drago
  • You have just been trolled.

    He just wanted to get a rise out of people by saying something completely stupid and inflammatory.
  • I refuse to reply to this message because you didn't put two line breaks after my quote, and you quoted using bold.

    Oh, wait a minute. You were serious?
  • Ok, can somebody educate me as to where this actually fits in with XFree86? Is this just a release of X libraries? Or is this a non-free version of X that is competing with XFree86? Where does it fit in? Can I download it, recompile it, and then stick it into my XFree86 install and magically gain new features?
  • Linux: The OTHER OS that's bloated as Windows.
  • Those releases are the reference implementation, which is taken almost "as is" by most vendors.

    Only the hardware specific parts of the X server (the rest of X is not hardware dependent) must be adapted to a particular platform (this is, for the PC architecture, what XFree86 does). The rest is 99% the plain release from

    Also for many HW platforms, support is in the release. I used to get their releases and recompile it myself on Sun, AIX, SGI etc for years.
  • For the record, this is one of those stories that separate the Newbies from the Hackers.

    Does anyone have a better Slashdot example of where so many people don't know what is going on?

  • Putting video _drivers_ in kernel mode is not evil. Unfortunately, it is something that Linus himself did not want (See the GGI flamefest) so video drivers have been implemented in X. This is evil too. X should not need to run as root.

    Now this is an interesting point. Putting the video drivers into the kernel (ala GGI/KGI) is a stability gain, but it unfortunately reduces the overall performance.

    The reason why is obvious. Not only do you still have the overhead of user space context switches between X clients and the X server, you now also have a kernel context switch for every syscall that the X server makes to the kernel driver. Previously the X server ran as root so it could just diddle on the hardware.

    And there's another problem. Modern video cards are not simple framebuffers. They have all sorts of 2D accelerated operations built into their high speed GPUs. In many cases these GPUs are extremely sensitive to the commands you send: it is trivial to lockup a GPU so it won't receive further commands. So you have two choices:

    • User space acceleration library with shared locks amongst all clients to coordinate access.
    • Kernel space acceleration library which then abstracts the acceleration features.

    The first method is slower than just putting the acceleration features into the X server and running the X server as root. The second method is what Linus is deathly afraid of: hundreds of video card drivers, each with 10-20 kilobytes of acceleration code, all very specific to a video card series, each one of them API incompatible with all the other video card drivers.

    Now a good solution is fbcon. The kernel does know enough to initialise the video card and change video modes but refuses to deal with acceleration features. This way it can handle all virtual console changes, even between graphics and non-graphics consoles. This ensures the fbcon driver is small. You also avoid all the common lockups. The X server then lets the kernel handle mode switches, but the X server takes over the GPU and bangs on it like crazy.

    It is unlikely GGI/KGI will ever be officially adopted. The fbcon drivers do 99% of what people wanted but with 1% of the code and complexity. It is a difficult problem on UNIX because of the distinction between user/kernel space and because of memory protection between applications. Lesser platforms can cheat and get faster results. To solve this problem neatly on UNIX is hard, and it is taking time, but Linux/XFree86 are slowly but surely getting there.
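    The first of the two choices above — a user-space acceleration library where clients share a lock — can be sketched abstractly. This toy model (all names are mine, no real driver code) only demonstrates the invariant the lock buys: commands from different clients never interleave on the GPU:

```python
import threading

gpu_lock = threading.Lock()   # shared by every client of the card
command_log = []              # stand-in for the GPU's command stream

def submit_commands(client, cmds):
    # Hold the lock for the whole batch: slower than letting one root
    # X server bang on the hardware directly, but no client can wedge
    # the GPU by interleaving its commands with another's sequence.
    with gpu_lock:
        for c in cmds:
            command_log.append((client, c))
```

    Whichever client wins the lock first, each batch lands contiguously; the cost is that every drawing client now contends for one lock, which is exactly the performance objection raised above.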

  • I was under the impression that this would be a fully functional Xlib (probably using BSD sockets). I don't know for sure because I've never actually downloaded their 'sample implementations'. Of course they couldn't make an Xserver (except maybe an Xnest type deal). The question of *why* you would want to install it is interesting, considering your X server won't support any of the new extensions, but I'm skeptical as to just how much it would break.
  • Most people don't need to pipe windows through the net.

    Maybe so, maybe not. But what about those of us that do? Networking is the biggest strength of X as far as I'm concerned. I couldn't do my job without it. I support an application running on a server in the USA from here in the UK. More mundanely, it lets me run apps on a server in our machine room, and display them on my desktop machine. I do this all day, every day. You may be correct in saying that most home users don't need X to have networking capabilities, but corporate users certainly do...

  • X11R6.5.1 is important. It will help us to read slashdot in bold new ways, where non-X users have not gone before. It's light years ahead of other GUIs, allowing for quite some time to remotely run X applications from across the universe directly on your computer screen.

    Better technology to better advance technology. That's what it's all about! Slashdot puts the N in Nerds!
  • I was talking about the Windows95 GDI.

    And I was talking about the person who said:

    Then why is it so bloody slow?

    I can't believe that the Windows GUI speed is simply due to its integration into the kernel.

    who may or may not have been talking about the Windows 9x GDI.

  • For example, the Windows GDI is notoriously slow. It is largely 16 bit code that needs to thunk in every call to it. However, MS managed to get it to a decent speed by rewriting parts of it in ASM and putting it into the kernel.

    Was the person who spoke of the Windows drawing code having been put into the kernel speaking of the Windows OT drawing code, or the Windows NT drawing code? If the latter (i.e., the move, from NT 3.x to NT 4.x, of the drawing code from, err, umm, a server process to which the gdi32.dll library sent messages to kernel code to which the gdi32.dll library made system calls), then that code isn't 16-bit code, as far as I know - the Windows OT GDI code may, as I have heard, be 16-bit, but I rather doubt the Windows NT code is.

  • 3. Client/server architectures for graphics will always be slow. The fundamental issues are: (a) A client/server architecture requires several context switches for each call to the graphics system.

    Not true of X as originally envisioned. X is supposed to buffer possibly thousands of calls into a single context switch. This can reduce the context switches considerably below the one-per-call required by a kernel implementation.

    The biggest problem with X is the huge number of calls requiring synchronization because the program has to get a response before doing the next call. For instance to draw in red, the program has to send the "allocate a color cell with red" call, wait for the response with the number of the color cell, and then use that number. This introduces a synchronization that can slow it down by several orders of magnitude. There is no reason the interface could not be "allocate a color cell with red and I will call it N from now on" and that can immediately be followed by a "use N as the color" call. Some intelligent design like this would solve a lot of X's problems.

    Buffers do have a few problems:

    Although very fast at throughput, they have latency problems, as nothing is drawn until the buffer is filled and sent. Trying to solve the latency loses the advantage of buffers in the first place. But I think latency problems will show up in the code anyway, so I would prefer that the graphics not try to solve this at all, but concentrate on fast throughput. Also check interactive net games, which have been fighting real latency problems for years, for better solutions.

    The other problem is the overhead of filling the buffer and then parsing it. But this is a completely false assumption. The savings of being able to send a pre-filled buffer (ie a canned sequence) with a single call, and the amazing simplicity of sending possibly millions of graphics calls to a coprocessor, greatly outweigh any overhead of buffering.
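    The round-trip argument above can be made concrete with a toy model (this is an illustration, not real Xlib; all names and the 1000-call figure are mine). Counting a "round trip" every time the client must flush and wait, the synchronous allocate-then-use pattern pays one per drawing operation, while the client-named-cell design batches everything into a single flush:

```python
class Connection:
    """Toy buffered connection: requests queue up locally and cross to
    the server only on flush; each flush-and-wait models one round trip
    (two context switches)."""
    def __init__(self):
        self.pending = []
        self.round_trips = 0

    def send(self, req):        # buffered request, no reply expected
        self.pending.append(req)

    def flush(self):            # one context switch for the whole batch
        if self.pending:
            self.pending = []
            self.round_trips += 1

    def send_sync(self, req):   # request whose reply we must wait for
        self.send(req)
        self.flush()

# Old style: ask the server for a colour cell, wait for its number,
# then draw with it -- a round trip per operation.
old = Connection()
for i in range(1000):
    old.send_sync(("alloc-color", "red"))
    old.send(("draw", i))
old.flush()

# Proposed style: the client names the cell itself ("call it N"),
# so nothing needs a reply and everything rides in one batch.
new = Connection()
new.send(("alloc-color-as", "red", "N"))
for i in range(1000):
    new.send(("draw", "N", i))
new.flush()
```

    The old pattern ends up paying 1001 round trips to the new pattern's one, which is the several-orders-of-magnitude gap described above.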

  • Yea, yea. I am not an idiot, despite your apparent belief. Obviously any interface saves a context switch if it is in the kernel. In fact the best way to reduce context switches is to put everything including the user program in the kernel. Whoa! I've just invented CP/M!

    I think everybody else here equates "kernel implementation" with "lots of different little calls to the kernel" while "not in kernel" typically means "buffered". You may be confusing this with NT where "not in the kernel" meant "many little calls that do 2 context switches" (or as I have argued, with X, which due to bad design has managed to reduce a buffered implementation to a many-call implementation)

    Obviously a buffered implementation can be put in the kernel, thus saving a context switch. But this is a trivial savings compared to the savings of millions of context switches that the buffer itself provides. Most of your suggested techniques for speeding up a kernel implementation amount to implementing buffer operations in a user-level library.

    But a buffer is not what NT does, and is not what any of the proponents of a "kernel implementation" are thinking of.

  • by Tack ( 4642 )

    Why does the X consortium produce software that nobody uses? I really don't understand the concept of actually coding a sample implementation that nobody uses.

    The purpose of this release is a reference implementation of the new specification. It has no support for the myriad of video cards and so is rather useless for most desktops. But this isn't the point. They're not going to spend their time tweaking video drivers when the real purpose of a reference implementation is a proof-of-concept of the specification.

    Releasing a specification without a reference implementation is just a bad idea. I'm willing to bet they revised the specification many times in the process of coding the reference implementation because they found parts of the spec that just didn't make sense or were just plain broken in practice.

    Herein lies the whole point of a "sample implementation that nobody uses." It proves the specification is sound in practice. (Or at least, much more likely to be sound.) And incidentally, OMG won't adopt any spec unless it's been implemented at least once for this very reason. This is not wasted effort. It is sound software engineering.


  • You're missing the point. This is like having a story saying that someone is suing some public official, but never mentioning for what reason.

    "Foo version X.Y.Z has been released" isn't nearly as useful or informative as "Foo version X.Y.Z has been released, adding this feature and that bug fix".

    If all we wanted to know was that a new version of X has been released, you can fit it into maybe 3-4 words.

    - Jeff A. Campbell
    - VelociNews
  • I can't speak for the current X11R6 tree, as I've not seen it recently (having not needed pure "X" in years), but in the past, releases DID include a working X server, but for specific hardware used by the members of the X Consortium that funded the work. In particular, the X11R6 distribution included an X server for Solaris and Ultrix boxes at the time, and likely now supports Tru64 (or whatever they're calling what was once OSF-1 at the time) and Solaris.

    These X servers were hardly optimized, but they did correctly implement the server side of the X protocols and all extensions provided by the X consortium.

    As for "sample implementations"? It's rare for a vendor to make changes in Xlib or Xt. The only "sample" is the X server. Xlib connects to the server by several means now, depending on the DISPLAY variable. Sockets aren't the only means provided by Xlib. In fact, before "Direct X" meant something to Microsoft or to some XML dealer, it was the original name for an implementation of the C Xlib library that overrode the server and drew directly to the screen, provided the DISPLAY variable allowed that.

  • A kernel rebuild on my machine takes about 10 minutes. (PII 350, 128MB RAM)

    Installing the NVIDIA drivers takes about 5.

    And what are you using to do NAT on NT? That has a big impact on how long it takes...

    And as for ALSA... not really an issue if Mandrake detects your soundcard out of the box (like it does for most of them these days).

    (BTW. Using the stock kernel doesn't really have all that much impact at all on modern PCs.)
  • Maybe you'd like to show us a non-kludge-riddled antialiasing algorithm that doesn't need an alpha channel?

    Alpha blending lets you make transparent terms, menus, etc. It also allows you to blend the pixels at the edge of text characters with the window below them (a process commonly known as antialiasing)

    As for your other concern, have you installed a recent distro on fairly modern hardware? Mandrake 7.1 installed Xf86 automagically for me (on my 3dfx Voodoo3 and NEC monitor)
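    The antialiasing-needs-alpha point above can be sketched: a rasteriser estimates how much of each pixel a glyph edge covers, then uses that fraction as the alpha in an ordinary blend. A toy supersampling sketch (not any real rasteriser's algorithm; names are mine):

```python
def coverage(px, py, inside, samples=4):
    """Fraction of the 1x1 pixel at (px, py) inside a shape, estimated
    on a samples-by-samples grid of sub-pixel sample points."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if inside(x, y):
                hits += 1
    return hits / samples ** 2

def blend(fg, bg, alpha):
    """Treat coverage as alpha: the same 'over' blend transparency uses."""
    return alpha * fg + (1 - alpha) * bg

# A glyph edge running vertically through the middle of a pixel covers
# half of it, so white text on black gets a mid-grey edge pixel:
a = coverage(0, 0, lambda x, y: x < 0.5)
edge = blend(255, 0, a)
```

    This is why antialiasing without an alpha channel ends up kludge-riddled: the coverage fraction *is* an alpha value, and blending with it is exactly the transparency operation.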
  • it's a complete implementation, with little or no hardware support.

    vendors of X software (xfree86, sun, ibm, etc) add support for their hardware and customise/extend X to suit their environments.

    in the linux world, xfree does the hardware support end, while the distributions tailor X to work however they have laid their system out.
  • I absolutely refuse to pay any attention to anything X.Org has to say until they fix the typo on this page

    Fortunately for the rest of us, most techies spend a great deal more energy writing and fixing code than checking their grammar, spelling, and adherence to linguistic dogma.

    While you continue to bitch and moan about our spelling, lax grammar, or bad prose, the rest of us will go on producing software that will continue to be the envy of the Closed Source world.

    Who knows, maybe someday, while you're sitting on your ass bemoaning yet another typo, we'll revise the written English language itself into something a little more coherent, purhaps yusing thuh fonetik alfabet thuh wae it wuhs intended in thuh furst plaes, fonetiklee.

    Then it will be you who can't spell.
  • It's because as software gets older and more popular, there are fewer changes requiring a complete rearchitecting of the system. This is because such changes become harder both because the software is bigger and more complex, and because such changes usually introduce compatibility problems which a larger user base won't tolerate. It's a natural thing IMHO.

  • Ya know, I used to believe that. Then the vendors who had anything to do with X DID get together and dictated policy.

    And we got CDE (the Committee-Designed Environment).

    No, better to let groups like GNOME, or KDE dictate policy now.

    In retrospect, yeah, we needed style dictates, but in 1989, not 2000. We NEEDED a heavy hand to say "All applications shall have a menu, and EXIT shall be under the FIRST menu item." That randomness still haunts us (xv, WordPerfect, FrameMaker, xterm, etc EACH have different ways to exit). The big change is that the Vendors are now on the sidelines. Sun has pissed DEC out of existence, SGI is wallowing in a ditch, IBM has begun to figure out what happened. HP, well, they still have HP-UX.

    In the meantime, the Open Sores guys have taken the reins from them and started to DO something to counter Windows. Frankly, I trust them more than [vendor of choice here].

    gnome/kde efforts have dealt with a lot of this quite nicely, without the brutal overhead of CDE type things. Fine. And about time. Let the best interface win.

  • I've put a mirror up for users in Australia and New Zealand as the sites appear to be very

    From AARNet's Mirror Project:

  • Possibly, but I think the same thing goes for quite a few titles in the 'For Dummies' range. If you can have an E-bay for Dummies........
  • availible? The sound of avian speech?
  • This may be a really stupid question, but if I compile and install this over top of my Xfree86 4.01 installation, how badly will I have ruined my system?
  • you can recompile your code with the new Xlib stuff right away... of course the features of the new stuff will not show up if it requires too many changes to the X server... in most cases the new stuff works even with the old X servers... just make sure you upgrade your X libraries and headers.
  • I think he knows what X is, guys, I sure do, but I have absolutely no idea why X11R6.5.1 is important to *me*. It would have been nice if the article reported on what the software changed, why it was deemed to be worthy of an announce on SlashDot instead of just freshment, why I should care enough to download it, that kinda thing.

  • by styopa ( 58097 )
    Although I think someone is working on the Y-windowing system. I think that the first release should be Y_0 (Y sub 0), which in science is pronounced "why not."
  • Huh? What psychology? Development for Be does NOT suck that bad. BeOS IS good enough to attract good programmers. What the hell are you talking about? My point is instead of writing yet another GUI toolkit for X, go help fill in the gaps in the BeOS software line.
  • Wow, good for you. What kind of machine are you running where it takes less than half an hour to compile X 4.0? How do you configure and recompile the kernel in less than half an hour? (Or do you actually use the stock kernel! Did you ever look at how much bigger it is!) How long does it take you to fuss with the NVIDIA drivers? How did you get ALSA installed in 30 minutes? Sure I can install Linux in less than half an hour, but how long does it take to CONFIGURE the thing as a USABLE desktop system. Windows NT, from inserting the disk, to NAT server takes an hour or so, including time to tune everything. Linux takes a LOT more than that.
  • XF86 is far from being a problem. However, the fact that I have to edit the (largely undocumented from an ALSA point of view) /etc/modules.conf file to install my soundcard, or give the bloody thing an IRQ (it's plug and play hardware for god's sake!) is what pisses me off. Or the fact that the kernel breaks the NVIDIA drivers every few days. Or the fact that it isn't documented whether or not you should turn on ISAPNP in ALSA if you've already got it in the kernel. How about the fact that KDE2 not only doesn't have Slack packages (the latest ones are 1.91) but has a compile system where I've got to cd into a dozen directories, and wait half an hour for each one to compile. THAT'S what pisses me off.

    BTW> Slackware kicks ass. Getting networking and NAT configured in a few minutes was awesome. I never did like SysV scripts.
  • Your assumptions are wrong? Why the hell would I compare a tuned desktop to an untuned one? (Why would I use Slackware if I had an untuned system?) The only things running in the background are the things I can't kill. I don't even run atd. As for running two desktops, that is to represent the fact that you HAVE to run both or else not be able to run all the applications. (BTW, the NT machine actually has more services running, though I've got NAT on both, NT is acting as an ftp server for my network, and also has some rpc services that I don't start in Linux)
    As for NT not using memory, qualify THAT! And your concept of memory use is twisted. The SYSTEM should NOT use memory just because it's there. The system should leave as much memory as possible for the APPLICATIONS. I could care less if GNOME didn't exist, I'm running the applications not the DE.
  • Actually, these days local stuff is done through sockets. And X uses shared memory as well.
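    Concretely, which transport a client picks is decided from DISPLAY by convention: a named host means TCP to port 6000 plus the display number, while an empty host means a Unix-domain socket under /tmp/.X11-unix. A small sketch of that decision (the function name is mine; the shared-memory transport mentioned above is a separate extension not modelled here):

```python
def x_transport(display):
    """Pick the transport an Xlib-style client would use for DISPLAY."""
    host, _, rest = display.partition(":")
    num = int(rest.partition(".")[0])
    if host:
        # Remote server: TCP, one well-known port per display number.
        return ("tcp", host, 6000 + num)
    # Local server: Unix-domain socket named after the display number.
    return ("unix", "/tmp/.X11-unix/X%d" % num)
```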
  • I don't really understand the way X is released. Is the thing the X group releases a sample implementation or what? Is it actually a usable product?
  • Why? BeOS is an orphaned product. Be has shifted focus to the Internet Appliance space. BeOS is just for diehard Be fans.
    BeOS is NOT an orphaned product. Let's see, you've got the upcoming network environment, the new accelerated OpenGL implementation, Opera 4.0, and Java2. Oh yea, Be has TOTALLY abandoned BeOS. (And none of this is vaporware, OpenGL and the networking environment (BONE) are deep in beta, and JavaSE is already running on BeIA.) Also, the shift is designed so any improvements to BeIA can be quickly rolled into BeOS. Seeing as Be has just made some deals with companies like Compaq, I'm thinking they are far from dead.

    What should happen is we should take the good parts, and not so much source but the ideas, of BeOS and the Be API and incorporate them in the toolkit(s) for X and the kernel.
    Oh yea, add yet another toolkit to X and make it even MORE bloated!

    The only big advantage BeOS had over Linux IMHO was the fact that it came with a journalling FS out of the box.
    Sure that's the only advantage. The 3 millisecond latencies (BTW that's not hype. Read the BeNews article about the hardware company that shifted from using a proprietary OS to BeOS for their mixing hardware), great handling of video, easy API, and fast GUI are no big thing. Not to mention the utter "zen" of the user-interface. When I see something as cool as Cortex on another OS I'll be impressed.

    These days all my servers run off of ReiserFS.
    ReiserFS STILL doesn't have database capabilities (it's planned.) BFS has had database capabilities for years.

    I just downloaded the CVS tree from SGI XFS server, going to give it a spin on a spare box next week. Now if someone would produce a pervasively multithreading toolkit for X I'd be all set.
    Great, yet another X toolkit! Really, do you think some new APIs and another toolkit are going to defeat the 20 million lines of code and 30 years of UNIX baggage (not all of it bad, though) that make up the average Linux system? Face it, Linux may be a great system, but in terms of sheer speed, it still doesn't compare to BeOS. I don't use BeOS because I'm a fanatic. I have both Windows and Linux (Slackware 7.1, a man's distro) on my system. I don't use BeOS just for the hell of it, I find myself being more productive in it. I like the API, the user interface, and find the applications on the platform to be very original, if a little lacking in features. I like the tight integration between the CLI and the GUI. I like the fact that the workflow is so fast, and that apps are so easy to install. For example, I just downloaded Jikes for BeOS. The installer automatically configured BeIDE for it, and all I had to do to uninstall it was delete the folder.

    So fine, criticize BeOS for what it lacks. I'm totally okay with that; in fact, I'll point some stuff out for you right now.
    A) It doesn't expose hardware acceleration comparable to what DirectX does.
    B) It doesn't have as nice of a joystick API as DirectInput.
    C) It lacks a decent web browser.
    D) Replicants are weak compared to systems like OLE or OpenDOC.
    E) It lacks an object model.
    F) Navigating multiple browser windows is awkward.
    Complain all you want about valid problems. However, don't belittle it by saying that Linux is just a toolkit and an API away from bettering it.
  • I'm talking FUNCTIONAL distribution. The Linux kernel is a work of art, and that is more or less what that 1.44 MB distro is. The core part of BeOS (which includes all the servers, the windowing system, tracker, etc.) is about 16.5 megs (700K kernel, 3.5 megs servers, 2 megs tracker, 10 megs for libraries, everything from OpenGL to C++ and C libraries). If you can find a functional Linux distro that has the GUI, a desktop environment, runs any available Linux app, and fits in 17MB, I'd like to see it. As for my comment about bloat, it's true. My system, Slackware 7.1 with GNOME 1.2 and KDE 2 Beta3, takes up more memory at startup than WindowsNT 4.0.
  • WTF are you talking about? I AM talking about WindowsNT. Right now I'm running NT4; I don't even have Win9x installed. And it takes only one reboot on NT. As for setting up a Linux box, I've set up RedHat, Mandrake, and Slackware (my distro of choice) on my system. I've configured both Linux and NT into a functional desktop OS (meaning I installed NVIDIA graphics drivers and ALSA drivers on Linux, and installed both GNOME and KDE.) I can tell you right now, NT4 was set up in hours rather than the three days Linux took me. (And I'm not a Linux newbie, I've still got Slackware 3.5 CDs.)
  • I was talking about the Windows95 GDI.
  • 1. The Windows 9x GDI contains a lot of assembly code, including some 16-bit code which runs in
    virtual 8086 mode. In this mode, there is no distinction between user and kernel mode (or ring 3 and
    ring 0 in Intel-speak), because this functionality didn't exist until the 386, and the Windows 9x GDI
    has never been a client/server architecture, so talk of 'moving GDI into the kernel' on Windows 9x is
    complete and utter nonsense.
    I've been duly chastised. However, I do have to point out that during Windows 95's development, more of the GDI (which is almost all 16-bit) was rewritten in assembly. This was a major selling point of Win95 against NT. As for being in the kernel, the Windows 95 architecture is so confusing it might as well be. This is how I understand it, correct me if I'm wrong. Most of Windows consists of a set of DLLs (including the GDI) that are mapped into the address space of the application. I consider anything in these DLLs to be more or less in the kernel. However, a call into the GDI causes it to switch to the Win16 VM which runs it in V8086 mode. My question is this. Isn't the Win16 VM running in protected mode? As I recall, the real mode version of Win 3.1 really didn't work very well.

    2. The NT GDI was designed as a client/server architecture, and contains no 16-bit code whatsoever
    (or thunking, except when 16-bit applications are run, and their 16-bit calls are thunked to 32-bit
    Win32 APIs). On pre-4.0 versions of NT, when an application made a graphics API call, this would
    invoke an LPC (local procedure call) into the CSRSS (Client/Server Runtime Subsystems) process,
    which would then carry out the graphical work. With NT 4.0, the GDI was restructured, and more of
    it was moved into kernel mode (the video drivers, of course, have always run in kernel mode, since
    they require access to the hardware). The impact of this change is often exaggerated, ignoring the
    simple fact that a CSRSS crash would bring down a pre-4.0 version of NT anyway. Similarly, an X
    server crash on UNIX brings down all of the applications which were being run in the X session, and
    often the system as well. In other words, the effect is largely the same. The exception would be
    UNIX servers which are also used as interactive workstations, in which case an X crash might not
    bring down all of the services which were running. Of course, using a system both interactively, and
    as a critical server, is an extremely bad administrative decision.
    I was never talking about NT. However, the moving of the GDI did have a large impact. Graphics performance improved quite a bit.

    3. Client/server architectures for graphics will always be slow. The fundamental issues are:
    (a) A client/server architecture requires several context switches for each call to the graphics system
    (at least client -> kernel -> server -> kernel -> client, often more, since the server frequently has to
    communicate with drivers running in kernel mode), whereas a kernel-mode architecture requires
    only two (client -> KM server -> client), and a kernel-mode server can communicate directly with
    kernel-mode drivers without any context switching.
    Not entirely true. For example, the BeOS uses a buffered graphics API. Graphics calls are batched and sent when the buffer is full. So the result is a context switch into the kernel to send the messages. When the graphics server is next scheduled, it will carry out those messages. BeOS uses a dual-mode graphics driver API. The majority of driver functions run in a user-space module loaded by the graphics server (called an accelerant.) The only time the server has to switch into kernel mode is to handle shared resources and interrupts. Everything else (including primitive acceleration) can be done through the user-mode module. In practice, this method is pretty damn fast.

    Finally, the Windows GDI is not 'notoriously slow'. In fact, the excellent graphics performance
    offered by Windows in the early/mid 1990s, based on the hardware acceleration of GDI routines, is
    one of the things that attracted me to the platform (coming from UNIX and Macintosh, which still
    used simple frame-buffer architectures). MacOS and XFree86 now take advantage of this GDI
    acceleration to some extent (since many of their primitive routines are similar to, or the same as,
    the GDI routines which the hardware implements), but Windows probably still has an edge in this
    respect, since it's what the accelerators were and are designed for.
    The GDI IS notoriously slow. Coming from a game-programming POV, you'll notice that the use of the GDI is banned for everything except rendering text into bitmaps for later blitting. In fact, I've done some tests between the GDI and the BeOS graphics system and the BeOS graphics system tends to win.
  • Alpha transparency is nothing new. BeOS has it, Enlightenment has it, Windows 2000 has it, MacOS X has it, NeXT had the capability to do it, and all modern 3D graphics cards accelerate it in hardware. FSAA is something totally different. It would be a BAD idea to do full-screen antialiasing on the desktop. That would mean the whole desktop would be a little blurry. That's good in a game since it smooths things out, but is NOT good for anything else. As for anti-aliased text, I don't know what's holding X back. Everyone and their mother has anti-aliased text these days. (/. looks great anti-aliased!) As for configuration problems, I think it has to do with the fact that most cards are designed for Windows and Windows tends to have more accurate plug and play.
  • Actually it's X11R6.4 (what XFree86 uses.)
  • by be-fan ( 61476 )
    Why does the X consortium produce software that nobody uses? I really don't understand the concept of actually coding a sample implementation that nobody uses. Coding a sample implementation is not that much easier than coding one releasable one. So why do it? Why not just do a little more work on it and release it as a usable product? It would speed stuff up too. Instead of waiting for XFree86 to roll in the changes (which, probably won't happen for several months) a new implementation would be usable when it is released.
  • Actually, for a desktop OS the integration between the CLI and the GUI is fantastic. If you're using it as a desktop system, you can do nearly everything you do in Linux. As for multi-user, it really isn't necessary in a desktop OS. As for ftpd, check /boot/home/config/settings/network. If you want graphical configuration, grab the X server and go for it! You've got a lot of standard config stuff in /etc, or mostly text-readable ones in /boot/home/config/settings.
  • BTW> Can you script GUI apps through the bash prompt under Linux? Check out "hey."
  • A) The memory usage I'm talking about is real memory usage. I discounted file-buffers in both cases.

    B) The Linux+KDE Beta +GNOME=WinNT point IS valid. I think you agree that the base components are equal in both cases. The only services running on both are NAT, and I made sure to compile a custom, modular kernel with only the necessary items. (The kernel weighs in at 550K) I even made sure to not load PPP and SLIP since I'm using DSL not dialup.

    KDE2 Beta is needed because it is currently the only environment that can use embedding and component technologies throughout the system. Since these services are built into WindowsNT (in the form of OLE and COM) they are necessary. Comparing NT to a lighter weight environment like FVWM or even plain Enlightenment wouldn't be fair, as NT would have many more features. Since KDE 1.2 doesn't offer KParts, and GNOME 1.2's component services aren't totally complete, KDE2 Beta was the only choice. Given the fact that KDE2 is in very late beta, I think it is an appropriate choice. (Actually, KDE is needed for another reason. KDevelop won't run without it.) Now GNOME. GNOME is necessary because there are several important applications that require GNOME. In NT you can neglect OWL, Qt, and Cygwin because no important applications use them. You can run 99% of all apps without them. The situation, however, is different in Linux. Since GNOME has more than 50% of all the important "DE aware" applications, it has to be a part of the comparison. You might think it is unfair because of the duplicated code, but those are the realities of two incompatible DEs. (BTW I didn't count Mozilla in the mix. I can't stand Active Desktop. That gives Linux an advantage because IE takes less memory than Mozilla or Netscape.) Under these circumstances, Linux takes up more memory than WindowsNT 4.0.
  • I'm not using Mandrake. I'm using Slackware. Sound is not really a big deal using OSS either, but why settle for a sub-standard sound system? Of course, there is a trade-off. If I use Mandrake, I have more problems networking since it doesn't detect my two ethernet cards correctly. Of course, the NVIDIA drivers SHOULD take 5 minutes, but they don't. First of all, it's not really a good idea to install the older Mesa RPMs over XFree86 4.0. However, Mesa is necessary because XFree86 4.0 by default doesn't come with libGLUT and libGLU. So I've got to get the Mesa 3.3 RPMs (it seems that Mesa 3.3 only exists in RPM and source), turn them into GZs, and install them. Then, I've got to go through the ridiculous motion of deleting the XFree86 GL libraries. Then, I can't install the kernel driver because kernel 2.4-test6 and up broke the NVIDIA drivers. Sure I tweak my system, but I tweak NT as well. It never takes this long. If you want to live with a stock system, then okay, it doesn't take that long to install. But why live with a stock system? Installing isn't terribly hard if you're willing to live with less than perfect results. Configuring anything non-trivial is the hard part.

    BTW> The stock kernel on RedHat 6.1 is NOTICEABLY slower than a custom kernel.
  • Actually you're missing an obvious point. Client/server relationships make it much easier to make asynchronous calls. Fill a buffer, send it off to the graphics card, and get on with your processing. Since most calls are hardware accelerated, client/server architectures make it much easier to exploit the parallel processing characteristics of modern systems. Also, in cases like OpenGL, having a client/server model allows the server to reorganize the input data. Since a state change often lowers performance more than the additional processing, a client/server model turns out to be faster.
  • If a diverse number of applications require it, yes. What you don't seem to get is that if there are 5 different toolkits that each implement major, but redundant services, and the available body of applications uses each of the toolkits equally, then you HAVE to have them installed. For example, it is pretty hard to avoid having KDE or GNOME installed because there are several great applications (KOffice or KDevelop) that require it. I'm sure you agree that several redundant toolkits lead to bloat, and if people actually take advantage of those toolkits, it no longer becomes a matter of what YOU want, but what the application programmer decided you need.
  • Well, my Riva TNT is detected as a Riva128 by that same distro. It doesn't detect my Intellimouse PS/2 correctly. The point is that Windows almost always detects hardware correctly while X doesn't.
  • Did they really pull off REAL alpha-blending a la MacOS X? The transparent terminal thing is possible in everything from Enlightenment to BeOS, but the effect goes away the minute the window below changes. If they pulled it off well, that's great.
  • Is it in the standard kernel? Obviously if it hasn't been integrated into the kernel, there are issues that prevent it from being integrated, no?
  • Nobody ever suggested using it for serious database chores. However, having basic database capabilities in the OS makes all sorts of cool things possible. The cataloguing of MP3s, the organization of "people" records, etc. That's the only type of database that it makes sense to integrate into the file system of a general purpose OS. As for the old filesystem, back then the bfs WAS a database, but it was changed due to performance concerns.
  • It was actually released by X.Org, a new organization created by the Unix vendors to further the X standard after the Open Group took over. The actual central CVS repository is being maintained by Metro Link, with changes being submitted by all the members, including XFree86. [] has more details.

    As for the original X developers, Jim Gettys has been mentioned recently on /. for his work with Compaq's handheld computers, and Bob Scheifler is working for Sun on Jini technology.
  • Not surprisingly, there's still no built-in alpha channel support. Many Linux desktop users have been beefing about this for a while now, and I'm wondering if it will ever make its way into X.

    What about 3 years from now? Will we all be using Berlin? Will there ever be an X12? X11R7? Hell, why not start over and call it Y1?

    Just some thoughts.
  • My system, Slackware 7.1 with GNOME 1.2 and KDE 2 Beta3 takes up more memory at startup than WindowsNT 4.0
    disclaimer: I hate NT, I don't believe OS/Free software methods correlate to lower resource requirements, I do believe OS/Free software methods will win (long long term)
    Your NT box after booting runs nothing other than its desktop (assumption).
    Your Linux box after booting runs a bunch of background servers (assumption) and a desktop AND a BETA desktop.
    It would be a huge achievement for the KDE team (and Slack and GNOME to a lesser degree) if your Linux box consumed more memory at startup. Also, don't forget that Linux/*nix regards memory as a valuable resource to be filled (make sure we're using it 'cause we have it), where Windows regards memory as a precious resource not to use (make sure we don't waste any 'cause it's so valuable).
    IMHO, either don't make comments like that OR qualify them (with a link even) to describe what it is you really found. I would love to see good figures, whoever wins that battle.
  • Maybe you'd like to show us a non-kludge-riddled antialiasing algorithm that doesn't need an alpha channel?

    This is how I used to do pseudo-AA text on an old paint program on a Macintosh computer:

    1. Scale graphic to three times its normal size.
    2. Draw text and lines.
    3. Scale graphic back down.

    GNOME vs. KDE: the game! []
  • Anonymous coward, prepare to be innovated.
  • Just out of curiosity, Hemos:

    time to clog some bandwidth pipes!

    You download it... and what exactly do you intend to do with it? I mean, this is the sample implementation, and even if some stuff there is not present even on the XFree86 4.0 tree, unless you intend to merge the changes between R6.4 and R6.5 with XFree86 overnight... well, you get the idea...

  • by Grendel Drago ( 41496 ) on Thursday August 24, 2000 @10:54AM (#830298) Homepage
    ... is why version increments keep getting smaller on venerable standards. For instance, if you look at the early days of UNIX, SVR3 was soon followed by SVR4, and BSD went from 4.2 to 4.3 to 4.4 in a reasonable amount of time. Likewise, we went from X11R5 to X11R6. But now, we're stuck in X11R6.5.1.blah.

    Is this TeX syndrome, where the version number asymptotically approaches some ideal number? X is already past 2*pi, but I'm sure there's a constant they're working toward...

    -grendel drago
  • by be-fan ( 61476 ) on Thursday August 24, 2000 @09:34AM (#830299)
    From UNIX Unleashed...

    "X WindowsThe first commercial release of X Windows was X10.4 in 1986, and was the basis for some commercial applications. The next release was X11R1 in 1987, followed by X11R2 in 1988. Version 11 was a complete windowing package that outperformed X10 in its speed, flexibility of features, and styles for multiple screens. X11 and later versions have become the de facto standard GUI for UNIX systems and are, therefore, the focus of this chapter."

    Thus, X1-9 were apparently in-house releases (just like the first several UNIX releases.)
  • by be-fan ( 61476 ) on Thursday August 24, 2000 @09:47AM (#830300)
    The reason it's so slow is because it really wasn't designed with current desktop configurations in mind

    A) It really wasn't built to take good advantage of powerful client machines. XFree86 has really helped in this regard with XFree86 4.0, but the architecture is set in stone and there is only so much they can do.

    B) It wasn't designed to take good advantage of hardware acceleration. Again, XFree has really helped with the rewrite of XAA, but they can only do so much.

    C) There are many protocol limitations. For example, the reason they are having so many problems with anti-aliased text is that X only sees a font as a monochrome bitmap. Also, TrueType fonts are a bit of a hack on X, and general font support is poor. These are things about the protocol that just have to be worked around.

    D) It seems that the API is pretty inefficient. As somebody pointed out to me a while ago on /., you really have to do a lot of things in X that shouldn't be necessary. (What I really want to see, though, is a windowing system that totally ditches the concept of a palette. Screw the 256-color users, let them put up with automatic dithering.)

    However, the client/server model really doesn't seem to be THAT big a problem. It is probably largely due to poor design decisions. For example, the Windows GDI is notoriously slow. It is largely 16-bit code that needs to thunk in every call to it. However, MS managed to get it to a decent speed by rewriting parts of it in ASM and putting it into the kernel. The GDI is actually just a DLL that is loaded by the client application. There isn't a server there. Despite these hacks, the GDI is still slow. (Though not slower than X.) The BeOS API, however, uses messaging and a client/server model. Ask anyone, they'll tell you that it's the fastest GUI around.
  • by JordanH ( 75307 ) on Thursday August 24, 2000 @09:02AM (#830301) Homepage Journal
    I remember when the X Consortium went out of business a few years ago. The stewardship of X got folded into The Open Group at that time. All (or most all) of the original X developers moved on. Where are those guys now?

    Seemed a little odd at the time that the work on Broadway was just finishing up (or was done) and the X Consortium went out of business.

    Of course, looking at the splash Broadway has made, it's not surprising.

    Gosh, is anyone using Broadway out there? It seems like a good idea. Extend your X apps to browsers and still have the native X application. From what I've heard, it's slow, hard to use and immature as a technology.

    Anyway, back on topic here. Who is doing this work for The Open Group and why? Is this being driven by the Unix vendors needs for new features?

    -Jordan Henderson

  • XFree86 4.0 was released earlier this year, not X11 4.0. XFree86 4.0 was based on the X11R6.4 sample implementation. This is a new release of the core X code, to which XFree86 adds support for the various video cards, and other additional features.

    Other than X hackers, most users are best waiting for their particular X vendor (XFree86 for most Linux/*BSDs, Xig for some, Sun/IBM/Compaq for users of their Unixes) to incorporate the
    changes into their release.

    As for license differences, the licenses are basically the same, with just the copyright owners differing.
  • by gfxguy ( 98788 ) on Thursday August 24, 2000 @08:30AM (#830303)
    XFree86 4.0.1 is XFree86's implementation of X11R6.something.

    The X Consortium defines the standards, makes the sample implementation, and then XFree86 rolls it into their implementation (at some point). Or, of course, maybe Xi Graphics, or one of the other commercial companies, will also create their commercial versions.

    If you look in your directory hierarchy, under /usr, you'll see that, even though you have XFree86 V4.0.1 installed, the directory is called X11R6.

  • by Benley ( 102665 ) on Thursday August 24, 2000 @08:21AM (#830304) Journal
    Arg! As I'm sure many many many people will confuse this, what's just been released is NOT a version of Xfree86! It's a sample implementation of X11R6.5.1, not a usable X server for your linux box.

    It's unfortunate that so many people are unclear on the difference between X and Xf86, and even what X really does.

  • by ceswiedler ( 165311 ) <> on Thursday August 24, 2000 @08:56AM (#830305)
    Why is it called X11? I understand what X is, but I find it hard to believe that there have been 10 complete versions before this, if each is ALSO numbered with a major/minor number (6.5.1). Has it always been called X11? Is there a reason?
  • by Seanholio ( 198338 ) on Thursday August 24, 2000 @08:34AM (#830306)
    In cases like this, it would be good for someone knowledgeable to illustrate the differences, rather than simply expressing displeasure at the ignorance and audacity of those who aren't in the know, but willing to try.

    I can see where it's easy to confuse X Consortium releases with XFree86 releases, since most books, HOWTO's, FAQ's, etc., don't touch this topic at all.

    I have only the vaguest ideas of what those entities do, but I'd love to know more.
  • by be-fan ( 61476 ) on Thursday August 24, 2000 @09:16AM (#830307)
    You know, the best thing for X would really be to dictate a bit more policy. The whole concept of X is to provide the low-level services that higher-level window managers need. Thus, X can provide a common foundation, while window managers provide the actual user interface. However, this concept has faded in recent times. Now, you have things like GNOME and KDE implementing things that really should be in X. Things like printing services, imaging systems, and object models, that aren't really part of the user interface, but part of the lower-level "services" layer that X provides. The benefits of integrating more of this into X are obvious. Instead of having the two competing desktop environments that you have now, you would have a common base of X Windows applications that would work in any window manager. In the process, there really isn't any freedom lost. Are the two desktop environments really that different? Aside from the look (which belongs in the window manager anyway) the two environments pretty much provide the same services in more or less the same way. Sure, you now have one object model, one imaging system, etc. Of course, you only have one graphics system for X, you only have one X input API. You can't choose the input subsystem for X, so why should you be able to choose the object model? For that matter, why should you care? At some layer of the system, you have to standardize something or all hell breaks loose. SOMEBODY has to dictate policy or else you end up with a system THAT HAS NO POLICY. In return for a little freedom for the developer, think of what you gain. The user gains the choice to use whatever desktop environment they want. Developers gain the freedom to not have to worry if they are cutting off people by using the wrong DE. Commercial vendors gain the freedom to write applications to a desktop environment instead of just statically linking Motif.
  • by zorgon ( 66258 ) on Thursday August 24, 2000 @08:30AM (#830308) Homepage Journal
    Well no duh. Why do you think they call it "X" -- damn hard to spell that wrong.

    WWJD -- What Would Jimi Do?

  • by nathanh ( 1214 ) on Thursday August 24, 2000 @09:44AM (#830309) Homepage
    The networking code between the X client and the X server in XFree86 is a UNIX domain socket. This is possibly the fastest IPC method in Linux. The UNIX domain socket requires only one redundant copy, and Linus Torvalds himself has optimised the almighty crap out of it.

    Experts in XFree86 have already tried using other transports to see if things improve. This has in the past included a shared memory segment between the clients and the server. The surprise result was a reduction in speed. It seems that Linux has such a good implementation of UNIX domain sockets that doing it all by hand is an overall loss.

    Removing the transport altogether is impossible. This is not an X consideration. No matter what windowing system you have, at one stage you need to pass messages between the clients and the X server, because they are not the same binary and they do not run in the same address space.

    So ignore the pipe. The pipe isn't the problem. A real problem is context switching. Because the X server and the X client both run as user space processes the kernel must alternate the execution between client and server. This increases the latency of operations and time is wasted doing the context switching.

    One solution that can result in a speedup is to put the X server into kernel space. This saves you one redundant copy and two redundant context switches. It also means your system is now as stable as Microsoft Windows.

    The compromise solution is to put some highly timing critical code into the kernel but keep most of it in user space where it belongs. This is the technique that the DRI has used. It means the client can render directly to the hardware while still maintaining a balance between security and stability and clean design.

    SUMMARY: The real performance killer of X is not the pipe. Changing the transport has already been tried and has already failed.
  • by Jeffrey Baker ( 6191 ) on Thursday August 24, 2000 @08:06AM (#830310)
    A lot of the things Linux users have been beefing about are starting to come together. Check out the new version of FreeType [], which includes 8-bit anti-aliasing and many nifty rendering features for many different types of fonts. See also the new experimental rendering engine for XFree86 here. [] Check out those translucent TWM windows. MMMM.
  • by AT ( 21754 ) on Thursday August 24, 2000 @08:28AM (#830311)
    XFree86 takes the code that the X Consortium has developed and changes it in such a way as to make it x86 native

    That's what they did originally. Lately, they've started applying useful patches to the clients and libraries from outside sources that may or may not ever get into TOG's X(tm). For example, XFree includes the xterm patches from here [], added the essential XPM library, and beefed up Xaw to make it almost usable. Check out the release notes [] for more details.

    Even more recently, they've started to tackle the key features that hold X back, like font handling and transparency. Check the mailing list archive [] for the most recent developments.
  • by IntelliTubbie ( 29947 ) on Thursday August 24, 2000 @09:09AM (#830312)
    I'm not going to be one of those people who gripes that Slashdot is not Freshmeat, and therefore shouldn't announce software releases. (Hey, if it's News for Nerds, it's fair game.) However, could we try to have some explanations of exactly what this software is, what it does, and why this release is significant? Posters and editors should realize that this isn't the same audience as Freshmeat (i.e. not everyone compulsively keeps track of and updates every item of software in their system). And, I'm embarrassed to say, "X11R6.5.1" doesn't mean much to me. C'mon guys: if we wanted plain vanilla software announcements, we'd read Freshmeat (and many of us do). So please, don't just announce the news -- report it.

  • by Benley ( 102665 ) on Thursday August 24, 2000 @08:18AM (#830313) Journal
    Plain and simple, this does not go on top of Xfree86. In fact, this release has nothing to do with Xfree86 at this point, until the Xfree86 people merge the changes into the xf86 4.0 tree and declare a new release.

    What we have here is a sample implementation, and not something that you want to use on your workstation. This will become useful once various releases of the X window system incorporate it, and then moreso when applications and toolkits are written to work with it.

  • by jimfulton ( 204820 ) on Thursday August 24, 2000 @03:10PM (#830314)
    There weren't really ten full releases prior to X11R1; however, there were 10 incompatible revs of the protocol. Most of the early versions were primarily used within MIT (Athena and LCS) and friendly commercial R&D labs. Here's some of the pre-history, based on cryptic notes and blurry memory:

    X1 - summer 1984 - the first version, based on a substantial rearchitecting of the UNIX port of the W Window System (originally developed for the V Kernel).
    X3 - fall 1984 - used internally at MIT as the initial basis of various plotting packages for coursework.
    X6 - spring 1985 - first version licensed by MIT to various companies (including Cognition, MASSCOMP, and Digital) for use in commercial products. It cost $100, and if you wanted you could stop off at the (very small) licensing office to pick up your own magtape.
    X8/X9 - fall 1985 - added color (X8 lasted all of about a week; X9 was quickly released to fix a protocol alignment problem that impacted ports on the IBM PC/RT). Many organizations began developing ports (including a version for the Lexidata 9000 display card for VAXen that was used at the Autofact tradeshow in late 1985 to show a prototype of the first 3rd-party application: a mechanical engineering design system).
    X.V10R1 - spring 1986 - first version released by MIT that did not require signing a license agreement. Also the first version to have a DOS X server developed.
    X.V10R[234] - fall 1986 & spring 1987 - an explosion of ports done on a variety of platforms.
    X.V11R1 - Sep 15, 1987 - major overhaul done in collaboration with folks from Digital, Sun, IBM, and other companies. Formed the basis of the core protocol used today. Companies and organizations releasing X-based products used this release as a starting point for incorporating into their own distributions.
    X.V11R2 - March 1, 1988 - first version released under the auspices of the newly-formed MIT X Consortium.

    The MIT X Consortium continued to put out releases of X11 for a number of years. Then in the mid-90s, it was spun off into a separate not-for-profit organization (simply the X Consortium). As has been noted, that eventually folded into various organizations that became X.ORG. The rest is history. :) Jim Fulton

If I had only known, I would have been a locksmith. -- Albert Einstein