GNOME GUI

Gnome 2.0 Alpha 1 Released 315

Dave H writes "The first pre-release of the GNOME 2 platform is now available! You can grab it from FTP.gnome.org. It is of course a technology preview; note that it can't be installed alongside GNOME 1.x." There's some more information posted on LinuxToday.
This discussion has been archived. No new comments can be posted.

Gnome 2.0 Alpha 1 Released

Comments Filter:
  • by wiredog ( 43288 ) on Wednesday October 03, 2001 @02:31PM (#2384555) Journal
    WARNING: This release does not include anything of use to end
    users.

    That could be put on half or more of the stuff on my box.

    • Which raises the question: why is Slashdot posting ALPHA releases? All this will lead to is people commenting a couple of months from now, "Yeah, I tried Gnome 2 and it sucked."
    • According to The Register's article [theregister.co.uk] on the subject, the warning ("this release does not include anything of use to end users") "makes it consistent with all the previous versions of GNOME Desktop we've used."

      I don't know what's wrong with the writer's (a Mr. Andrew Orlowski) brain, but the article is full to the brim with stupid and mean comments in the same vein:

      "working with GNOME software has always been fun, if ultimately fruitless"
      "[Gnome's] great gift to the world has been to spur development of the older, more established rival KDE"
      "A visiting Martian would surely conclude that the GNOME Project has served its purpose"
      "[KDE is] probably two years ahead now"

      Makes me wonder what he's trying to accomplish. What's the purpose for a big, widely recognized site such as The Register to sell ad space with mean, stupid and uninformed statements such as these?
  • by Lussarn ( 105276 ) on Wednesday October 03, 2001 @02:31PM (#2384559)
    Does anybody know about backward compatibility?

    I know a couple of widgets from GTK 1.2 are deprecated; CList is one of them. But will GNOME 2 also include GTK 1.2, or only GTK 2.0?

    And does "deprecated" in the GTK 2.0 case mean "not there" or "could disappear in the future"?

  • No new toys. :( (Score:1, Redundant)

    by acm ( 107375 )
    This release does not include anything of use to end users. It is a technology preview release of the development platform only. It is also not yet fully parallel installable with GNOME 1.

    Damn, KDE users are getting all sorts of new toys to play with; I was hoping Gnome was gonna give me some too. :)

    acm

  • by Anonymous Coward
    I didn't think GNOME2 would ever see the light of day. It should be an interesting race between KDE3 and GNOME2.
    • Re:Wow (Score:1, Funny)

      by Anonymous Coward
      interesting race between KDE3 and GNOME2


      where's the race? KDE appears to be ahead 3 to 2.

      I'm much more interested in seeing if RedHat (at 7.2) can catch Mandrake (up over 8 now)

  • by greenfly ( 40953 ) on Wednesday October 03, 2001 @02:32PM (#2384567)
    From what I can gather from reading the comments on the Linux Today article, the main things that have changed are the underlying libraries, nothing that would really change the look. So apparently a screenshot of this wouldn't really look any different from a screenshot of GNOME 1.x.

  • ftp mirrors (Score:5, Informative)

    by richie2000 ( 159732 ) <rickard.olsson@gmail.com> on Wednesday October 03, 2001 @02:33PM (#2384583) Homepage Journal
    you can grab it from FTP.gnome.org

    Guess again. :-)

    http://www.gnome.org/mirrors/ftpmirrors.php3

    ftp://ftp.twoguys.org/GNOME
    ftp://ftp3.sourceforge.net/pub/mirrors/gnome
    ftp://ftp.rpmfind.net/linux/gnome.org/
    ftp://ftp.sourceforge.net/pub/mirrors/gnome/
    ftp://ftp.cse.buffalo.edu/pub/Gnome
    ftp://ftp.yggdrasil.com/mirrors/site/ftp.gnome.org/pub/GNOME/
    ftp://ftp.sunet.se/pub/X11/GNOME/pre-gnome2/releases/gnome-2.0-lib-alpha1/

    Go fish! :-)

    • I guess you didn't read the article, did you?

      As soon as the mirrors update, you can get the release from:

      ftp://ftp.gnome.org/pub/gnome/pre-gnome2/releases/gnome-2.0-lib-alpha1
  • by Anonymous Coward on Wednesday October 03, 2001 @02:39PM (#2384632)
    It will come out this Friday:

    Date: Tue, 2 Oct 2001 17:22:16 +0200
    From: Dirk Mueller

    I delay alpha1 release until Friday to give us more time to fix and verify the recent regressions in KIO and khtml.

    Also, there will be a KDE 2.2.2 release soon; check http://developer.kde.org/development-versions/kde-2.2.2-release-plan.html
  • GNOME, a thought (Score:2, Interesting)

    by jd ( 1658 )
    GNOME follows a nice concept, but suffers from one fatal flaw: X. X is a good system, in theory. It does many things other OSes' GUIs only dream of. But it's over-designed and over-complex. So much so that AFAIK, of all the extensions, window managers, desktop environments, etc., out there, ROX is the only one to use X's own drag-and-drop system.


    There are alternative GUIs out there for Linux & Unix - Berlin, for example - but they're either not compatible with X applications and/or the X protocol, or they're not mature enough to be usable.


    Most Unix manufacturers go the other way. The sample X implementation may be broken, in many ways, but it's still a good place to start. So they write their own version of X, either from scratch, or using the sample X tapes as a starting point. This certainly produces a faster implementation, but it still doesn't tackle the complexity issue, and none of these are Open Source or Free Software.


    IMHO, what's needed is a GUI that'll do for X what RISC architectures did for processors. Produce a MUCH simpler underlying architecture, using layers to provide more and more complex functionality.


    How does this relate to GNOME, since that's where I started? Easy. Either GNOME or KDE is in a key position to write this "layered X", since they are projects sufficiently wide in scope to understand where bottlenecks and bugs creep in. Nobody else really has that kind of breadth of information.


    Wouldn't it be better to pile effort into Berlin? There are too many problems with the approach taken. CORBA is known for horrible overheads, for example, and the CORBA implementation used is, AFAIK, not the same as the one used by either GNOME or KDE, which means a combined effort will require extensive rewriting.

    • by adadun ( 267785 )
      IMHO, what's needed is a GUI that'll do for X what RISC architectures did for processors. Produce a MUCH simpler underlying architecture, using layers to provide more and more complex functionality.

      But isn't this exactly what X is? The X server is just a very dumb program that only knows how to draw lines, boxes, circles, and fonts. Everything else (i.e., the complexity) is layered on top of this through toolkits and window managers.

      A GNOME program uses the simple GTK toolkit to provide the GUI. GTK uses Xlib which uses X. The complexity is layered.

      Furthermore, neither the application nor the toolkit needs to worry about how the window is managed; this is taken care of by the window manager program. The window manager interacts with the user and moves, resizes, and iconifies windows. Layered complexity once again.
      • But isn't this exactly what X is? The X server is just a very dumb program that only knows how to draw lines, boxes, circles, and fonts. Everything else (i.e., the complexity) is layered on top of this through toolkits and window managers.

        Maybe so, but then the question is "Why the heck is X so _huge_?" I mean, come on, if you're going to write hundreds of thousands of lines of code, then they should do something more than provide something so minimal that you need to write another hundred thousand lines of code to get a halfway decent interface.
        • Re:GNOME, a thought (Score:2, Interesting)

          by rgmoore ( 133276 )

          The idea that X is huge is greatly exaggerated. X itself isn't that large, but the total package looks much bigger than what you actually use because of the need for a zillion drivers. Yes, X could have a greatly simplified system that took much less code, but it would come at the expense of not being able to take advantage of the features in advanced graphics cards.

        • Re:GNOME, a thought (Score:5, Informative)

          by Panaflex ( 13191 ) <{moc.oohay} {ta} {ognidlaivivnoc}> on Wednesday October 03, 2001 @04:46PM (#2385469)
          I'm a newbie by the standards of the X community, but a lot of past work is devoted to old nasty things. I've been lightly studying it for a few years, and have provided alpha ports for Voodoo chips in the past.

          X was written from a frame buffer perspective, and had acceleration hacked in over time, until Mark Vojkovich developed a standard for it (XAA, IIRC). Attempts to go towards a rendering pipeline are embodied in the excellent work in Xrender.

          The drivers are all fairly minimal bits of code; most of them rely on other modules to initiate standard display settings, etc.

          A lot of the "cruft" in X is related to the I18N schtick that got hacked into R5, I think. More cruft comes from PEX (the long-dead competing standard to OpenGL), the horrible toolkit helper implementation known as Xt, the keyboard and colormaps (scary), and the seldom-used XPrint and Xnest servers as well.

          More cruft comes in with several implementations of the frame buffer code (fb, cfb, cfb16, cfb24, cfb32, mfb); XAA kind of added a layer below these original "drivers."

          Also, there is a huge amount of interface code from X to toolkits such as gtk/qt. This code is mostly hidden in the X11 libs. Do a stack trace when drawing a button in GTK with X11 debugging on; it is truly horrid (13 deep to draw a clipped line), and doesn't show the server side of the mess.

          Also, X has a very synchronous rectangle management core. The server keeps a list of all viewable rectangles and updates the whole list after every rectangle update. (Slow window movement, anyone?)

          The biggest problem with X is simply the fact that toolkits have been relegated to client apps, instead of being loadable into the X server.

          Oftentimes core X developers argue that this is dangerous, and even say that client-side apps are faster; they are fixed in their minds that X is the only way to go. A huge chunk of code goes to all the abstraction (known as mutilation-by-code in my book) and platform independence.

          By no means should we throw away all that knowledge, but it should be second tier to providing native interfaces, IMHO. Larger processor caches and faster asynchronous graphics chips somewhat nullify this argument these days, but the fact remains that X would be a lot faster without it.

          In fact, you're starting to see X as simply a pixmap display device in the end. All the toolkits are basically just blasting pixmaps into the server, because X can't handle much of the advanced graphics now anyhow.

          Yet sitting down to a Windows box is proof positive that X is slow. I'd say that a good rewrite would do X a world of good. Let applications communicate in terms of toolkit messages ("add widget tree" instead of "get gc, 8 drawlines, 3 fills, get font, set font, get colormap, set colormap, draw text").

          Of course, this could *maybe* be done with an X extension, but there are a few limitations on what X extensions can perform without going and adding more hacks into the X server.

          All in all, X11 is a fine piece of work. The work done in the past 2 years is fantastic to say the least. All the linux companies and the freetype, mesa, and DRI developers really deserve a major pat on the back. I really enjoy the engineering talent and ingenuity displayed by the XFree team.

          Cleaning up X, or rewriting it would be a major step in the right direction.

          A funny thing about Windows is that they have the opposite problem. Applications are oftentimes tied _too_ closely to the GDI, and often break between versions. No doubt, a few graphics-intensive applications from Win 3.1 would break on Win2k.

          Pan
          • You sound like you know one or two things, but calling XNest "seldom used" the way you do doesn't inspire confidence.

            Xnest is great; it's perhaps one of the best things about X. If you don't know why, you don't know X.
          • Yet sitting down to a windows box is proof positive that X is slow.

            Repeat after me:

            X is not slow!
            X is not slow!
            X is not slow!

            It is the toolkits that are built on top of X that are not tremendously fast, in particular GTK+ and Qt (GTK+ seems somewhat worse than Qt in this respect, but neither is exemplary).

            Proof:
            Open up an application that uses one of the older, simpler toolkits such as Xt. A simple xterm perhaps, or xman, or xpaint. Enlightenment is also blazing fast. Play. See that X is in fact very, very fast indeed.

            Now why is this? Why do the modern GUI toolkits appear to be slow?

            Well, I think it comes down to optimization and architectural work. Both Qt and GTK+ are big libraries that attempt to do a great deal of work. But, for instance, neither of them use threads by default. Both use a technique known as an event loop to simulate threaded behaviour, but this is not ideal in terms of speed or efficiency.

            Why do they not use threads? Because of cross-platform compatibility issues. Until very recently, FreeBSD's pthread implementation was thoroughly broken, and FreeBSD is a major target for both GTK+ and Qt. So, although Qt, for instance, has had its own thread API and the option of being threaded internally for some time (since qt 2), this has been switched off by default on all *nix platforms until FreeBSD got their act together.

            Threading of the toolkits and the desktops and apps built around them will probably be the most significant single optimization to come, but there is other optimization work to be done too. Give it a little time, it will happen.

            I'm sure I need not point out that the toolkits that sit atop the Windows GDI are, for the most part, pervasively multi-threaded, and this is where much of their perceived speed comes from.

            But please do not blame X for the failings of the toolkits built on top of it. My (admittedly subjective) impression is that when blasting pure Xlib at X, it is at least as fast as raw GDI calls in Windows (see Xscreensaver vs. Windows screensavers for evidence of this).

              You must have missed my comment about larger caches and asynchronous graphics chips.

              Yeah, you can now fit the main X event loop and small applications into a processor's secondary cache. The major applications don't benefit from this, but more from faster busses and graphics chips. (Drawing is now a minor part of the time spent in X, due to 2D acceleration.)

              Also, keithp's reworking of the main event loop a couple of years ago was amazing.

              My point was that architecturally, X encourages massive abstraction for client toolkits. Who would want to be tied to the color or font models X presents?

              Older toolkits were designed for 68k processors - you're saying the equivalent of "open up Windows 3.1 on a PIII." Enlightenment uses enormous amounts of pixmap copies - you are seeing X's good optimizations in SHM and the protocol. Raster actually spends a good bit of time running test cases for optimization.

              I will blame X for the failings of toolkits. The choice to relegate toolkits to the client side is a failure that was recognized years ago by most graphics programmers. NeWS was a decent attempt to fix this, but went too far in aims and goals.

              I think that the fear of losing the few commercial applications that X has keeps X11 going as is. (Open source apps could easily be ported, slowly making more use of server-side toolkits.)

              I don't want to deride X too much - it is a _very_ successful and usable windowing system. I just believe that it's time for X12. ;-)

              Anyhow, one of these days I plan on putting my money where my mouth is. X is so modular now that it is probably very doable. A lot more of the modules have good commentary and docs than ever before.

              Pan
    • Sounds like http://www.directfb.org/ may be what you're after. It doesn't have all of X's features by any means, but you might get there eventually.

      However, my vote goes for Berlin, using the GGI project's stuff. The project is concentrated on getting it right. Here are the reasons I believe it is better to support the Berlin project:

      1) Better design. They are focused on doing it right. So many systems are focused on getting it done fast, and so few seem to worry about high quality. Yes, Berlin is slow in coming. But when it is ready, whenever that may be, it will be truly awesome.

      2) CORBA is not necessarily a bad thing. It depends on how it is used. Yes, a CORBA call is relatively expensive, but for things like graphics over the network (where such things are likely to matter the most) the number of calls is sufficiently small that, compared to the X method of blasting bits across the network, things should actually improve. Also, remember that machines will continue to get faster. Overhead will be worthwhile for more flexibility and power. And when the machines are there, Berlin will not have to be rewritten to take advantage of them.

      Yes, a lot of applications would have to be rewritten. But the potential benefits, and the fact that an X compatibility layer is not out of the question (since both systems are open, that's a big plus), make the future transition tolerable. Apple rewrote their graphical desktop and released OS X. We can do the same. Only we won't have to run an entire classic environment. It can work. And when it does, Berlin will begin to redefine the desktop computer experience.
      • Overhead will be worthwhile for more flexiblity and power. And when the machines are there, Berlin will not have to be rewritten to take advantage of them.

        It's that idea that makes Linux GUIs suck performance-wise. Power is rarely worth the tradeoff in speed and efficiency, since very little software ever exposes more power. Don't get fooled into equating features with power, BTW. Power is being able to quickly do the work you need to do without the system getting in the way. Most of the stuff that these desktop environment developers think is power (network transparency, CORBA, etc.) is really just mental masturbation and has little significance on the desktop.
        • To answer your sig: "That thud you just heard was all the former BeOS users throwing their PC's out the window..."

          If you want to talk about efficiency in GUIs, BeOS has an awesome graphical system. (I used R3 through R5.) Too bad the company fscked up so bad. Oh well. We all know the command line is where the real power is. (GUIs are nice on laptops though. It just seems more appropriate to run a GUI.) Sorry for the off-topic post.

          • So does QNX. Yeah, it is too bad that Be died. What really would have been cool would have been a mixture of the Linux kernel (which has gotten pretty nice these days) and the BeOS GUI. BeOS has problems with VM and networking; Linux has problems with the GUI. A match made in heaven. Well, here's hoping that QNX open-sources Photon...
      • Any idea how DirectFB relates to KGI [kgi-project.org] (and/or GGI [ggi-project.org])?

        It seems similar -- GGI uses KGI, where I suppose DirectFB uses the framebuffer. The advantage being, I suppose, that the framebuffer is included in the main kernel where KGI has always been a patch.

        But the problem with the framebuffer is that it is so darn slow. Perhaps reasonable in hardware that doesn't have any graphics acceleration (like on a handheld), but not useful on normal computers. I don't know if there is any real effort to ever make the framebuffer any faster -- the very name seems to imply non-accelerated simplicity.

        I think the path away from X involves factoring the pieces better -- maybe that can even save X, as Xlib isn't really the problem, it's all the other half-assed crap that goes with X.

    • Re:GNOME, a thought (Score:5, Informative)

      by Havoc Pennington ( 87913 ) on Wednesday October 03, 2001 @03:08PM (#2384843)
      X is very simple, for a windowing system, it's not complex at all. Plus no one has to see that stuff,
      it's always hidden behind toolkits.

      X doesn't have a drag-and-drop system, so I don't see how ROX could use it. DND is built on top as a custom protocol (Xdnd) shared by GTK, Qt, etc.
      I would guess that ROX just uses Xdnd, isn't it GTK-based?

      Berlin is far more complex than X.

      Porting GNOME/KDE to Berlin would be infeasible, but said infeasibility would have nothing to do with different CORBA implementations.

      Most UNIX vendors do not reimplement X, they are basically using the open source implementation with some minor tweaks. The open source implementation (primarily maintained by XFree these days) is generally more robust than the proprietary ones.
      • X is very simple, for a windowing system, it's not complex at all. Plus no one has to see that stuff,
        it's always hidden behind toolkits.


        I think the major flaw with X is not its excessive resource usage, complexity or speed, but the fact that it has no standard toolkit.

        While a lot of linux kids see the ability to use any toolkit (or even implement their own) as a good thing,
        I see it as a huge hindrance to usability.
        A user has to learn the different behaviours of GTK, Qt, Motif, Athena
        and virtually countless others, all with their own looks, hotkeys and ways of doing things.
        Aside from the "feel" the "look" of X will always be discordant, further slowing the already
        confused or annoyed user down in a quagmire of gradients and chrome.

        IMO, if linux (or any UNIX aside from OSX) is going to have any chance at the desktop market,
        X either has to standardize and enforce a single toolkit, or be replaced by something more modern.

        C-X C-S
    • Hmm... how about "Display SVG" - like DisplayPostscript...

    • Hee hee...

      X-Windows: ...A mistake carried out to perfection.
      X-Windows: ...Dissatisfaction guaranteed.
      X-Windows: ...Don't get frustrated without it.
      X-Windows: ...Even your dog won't like it.
      X-Windows: ...Flaky and built to stay that way.
      X-Windows: ...Complex nonsolutions to simple nonproblems.
      X-Windows: ...Flawed beyond belief.
      X-Windows: ...Form follows malfunction.
      X-Windows: ...Garbage at your fingertips.
      X-Windows: ...Ignorance is our most important resource.
      X-Windows: ...It could be worse, but it'll take time.
      X-Windows: ...It could happen to you.
      X-Windows: ...Japan's secret weapon.
      X-Windows: ...Let it get in *your* way.
      X-Windows: ...Live the nightmare.
      X-Windows: ...More than enough rope.
      X-Windows: ...Never had it, never will.
      X-Windows: ...No hardware is safe.
      X-Windows: ...Power tools for power fools.
      X-Windows: ...Putting new limits on productivity.
      X-Windows: ...Simplicity made complex.
      X-Windows: ...The cutting edge of obsolescence.
      X-Windows: ...The art of incompetence.
      X-Windows: ...The defacto substandard.
      X-Windows: ...The first fully modular software disaster.
      X-Windows: ...The joke that kills.
      X-Windows: ...The problem for your problem.
      X-Windows: ...There's got to be a better way.
      X-Windows: ...Warn your friends about it.
      X-Windows: ...You'd better sit down.
      X-Windows: ...You'll envy the dead.

      Copied from this page [catalog.com].
    • The project you're thinking of is Rasterman's own Evas canvas project and the E17 that sits on top of it. http://www.enlightenment.org Yes, it will come back. :)
  • New ORB. (Score:5, Insightful)

    by sarkeizen ( 106737 ) on Wednesday October 03, 2001 @02:45PM (#2384689) Journal
    Anyone know if there's intent to implement some kind of simplified IPC, similar to DCOP? I'm a CORBA developer, and even I think that CORBA presents a fair amount of work to perform some relatively simple things.

    BTW: Great job on the multilingual support! As someone who likes to have his desktop in traditional Chinese, this is a big deal for me.
    • I'm not sure GNOME needs a new ORB at its heart. ORBit is easily sleek enough for general desktop use.

      How much simpler can you get than a three-line Python random.org CORBA client?
  • GNOME Stability (Score:2, Insightful)

    by Arandir ( 19206 )
    I just ran across a GNOME problem not just ten minutes ago. I want to build Dia because argouml is insufficient and Rose sucks.

    Dia is under GNOME/stable. gdk-pixbuf is under GNOME/unstable. Anyone see the problem here? Who in their right mind can call Dia "stable" when it relies on an "unstable" library?
  • Another thought... (Score:2, Interesting)

    by jd ( 1658 )
    Now that GNOME 2 is approaching a point where the API (at least) is stable, it would be really interesting if a hardware manufacturer could take GNOME and produce a graphics card with GNOME support.


    "But hardware != software", I hear some cry. Well, sorry to break it to you, but software is simply a simulation of hardware. There is nothing that you can do in software that you can't do in hardware. Faster.


    Picture this - a graphics card that has a pure hardware implementation of XFree86 4.1, Gnome 2, and (just for the hell of it) KDE 2.2 as well. Nothing runs on the computer; the graphics are done entirely in silicon. This would free up much of the computer's RAM, unload many of the heavier cycle-devourers, and produce one of the fastest GUIs on the planet.


    "It wouldn't be free, though!"


    Free as in free beer? No, it wouldn't, but if you want free beer, you're probably in the wrong place, anyway. You want the beer tent.


    Free as in free speech? Why not? The hardware would need to follow GNOME, X and optionally KDE. X is the only non-free component of that. By having a re-implementation of it, you could make the hardware version totally free and totally unencumbered.

    • Yuck! (Score:2, Interesting)

      by drodver ( 410899 )
      Problem is, software has bugs, and the tolerance for hardware bugs is extremely low. What happens when a bug screws over said video card? Reboot! How long before that gets old? Oh, about 3 times before it goes out the window. Also, implementing these large, complex programs in hardware would be a nightmare, which means it would be expensive due to engineering costs.
      • by jd ( 1658 )
        What it would mean is that your engineers sit down and Get It Right. (Which is what programmers should do, but can afford not to.)


        Implementing in hardware shouldn't be too bad. Since software equates to hardware, you should be able to simply treat the software as a "macro" of the hardware definition. This would give you a version 0.0.0, which your engineers can then run through VLSI emulators to turn into a 1.0.0 product.

        • Re:Yuck! (Score:2, Insightful)

          by David Greene ( 463 )
          What it would mean is that your engineers sit down and Get It Right.

          Given the difficulties we have getting software to work correctly, do you honestly think hardware would be easier? Or even just as easy? Today's hardware only works because the specs are orders of magnitude simpler than even a mildly complex software system.

          Implementing in hardware shouldn't be too bad. Since software equates to hardware, you should be able to simply treat the software as a "macro" of the hardware definition. This would give you a version 0.0.0, which your engineers can then run through VLSI emulators to turn into a 1.0.0 product.

          So you want to use an HDL for this along with a synthesis tool? For synthesis to work, one has to either design a fairly simple piece of hardware or write relatively low-level HDL. In the worst case the designer will essentially write out the netlist. Not to mention the inefficiencies introduced by synthesis. Full-custom design is usually much more efficient, but also much harder to do.

    • by David Greene ( 463 ) on Wednesday October 03, 2001 @03:44PM (#2385078)
      Insightful? I've never questioned the drug-using habits of moderators before, but there's a first time for everything. :)
      "But hardware != software", I hear some cry. Well, sorry to break it to you, but software is simply a simulation of hardware. There is nothing that you can do in software that you can't do in hardware. Faster.

      While it's true that hardware and software are essentially the same thing (a favorite rant of mine, BTW), it's not true that hardware is necessarily "better" than software, even in the speed department.

      Picture this - a graphics card that has a pure hardware implementation of XFree86 4.1, Gnome 2, and (just for the hell of it) KDE 2.2 as well. Nothing on the computer, the graphics is done entirely in silicon.

      If we look at this proposal from a perspective of practicality, it clearly falls down. Hardware is incredibly difficult to debug and change. That is the beauty of software. The fact that complex computer architectures are implemented in terms of software (microcode) only points to this flexibility.

      To address your speed claims, I point you to HP's Dynamo project. Dynamo is a dynamic translator for PA-RISC binaries. It is a software system that translates PA-RISC instructions to PA-RISC instructions at run-time. That doesn't seem to make much sense until you realize that the translation includes optimizations that can only be done at run-time. Binaries actually run faster under Dynamo than in native execution mode. By putting in a layer of software, HP was able to increase system speed.

      One cannot do this in hardware because metal and silicon are fixed and FPGAs are too slow. Yes, people are researching reconfigurable hardware, but that is for very specialized applications like DSPs, applications that are already used to boost graphics performance today.

      A final observation: hardware gets much of its speed from parallelism. A ripple-carry adder runs much more slowly than a carry-lookahead adder. While certainly running at the speed of light (yeah, yeah, give or take) helps, parallelism (pipelining, O-O-O execution) is what got us the machine speeds we see today.

      Parallelism is really, really hard to extract at the instruction level. Theoretically, it's there, but damned if I know how to get at it. Certainly lots of graphics routines have loads of parallelism. But guess what? We already have hardware to exploit it!

      This would free up much of the computer's RAM, unload much of the heavier cycle devourers, and produce one of the fastest GUIs on the planet.

      Modern GUI's really don't need to be much faster than they are now. We all like high framerates in our pretty games, but those are very specific applications. In fact, good hardware solutions already exist for them. I don't see RAM consumption as a problem, considering that X runs just fine on the iPAQ with room to spare. I have no idea what software you are running, but the CPU usage of graphics code is not even close to the largest consumer of cycles on my machine.

      We already have good graphics hardware. Moving the X/GNOME/KDE control into hardware would gain almost nothing.

    • What you are proposing does not make sense. X is simply for drawing primitives, and the whole point of an optimized X server is that it calls the hardware optimized functions of your graphics card to draw those primitives. I assume when you say that GNOME and KDE should be implemented in hardware you mean Gtk and Qt, since the majority of GNOME and KDE are non-graphical libraries.
      • No, I mean GNOME and KDE. The non-graphics parts still need accelerating, and in the end, a computation is a computation, regardless of what you want to use that end result for.


        By having everything (or as near to everything as physically possible) on silicon, you turn what is basically a serial stream of operations into one ultra-gigantic parallel process.


        When you reach this point, there is no need for an "optimized" X server, as there's really no need for the computer to have any GUI code on it at all. All you'd do is generate X calls, and have the hardware take over from there.

    • > Picture this - a graphics card that has a pure hardware implementation of XFree86 4.1, Gnome 2, and (just for the hell of it) KDE 2.2 as well.

      Okay, in the year 2025, someone comes out with a multi-trillion transistor chip dedicated to emulating 25-year-old software, slower than any contemporaneous chip. (25 years is probably optimistic for coding this in "pure" hardware.) I'm sure everyone will be thrilled.

      Do you understand the difference between CISC and RISC? The whole point of RISC is that dumping more and more stuff on the hardware isn't always the way to speed things up. RISC does a few things fast. Modern CISC chips, like the Pentium, are largely a CISC-to-RISC interpreter wrapped around a RISC core.
    • TIGA graphics cards used to do this :) but they were not exactly cheap :)

    • Ok, there are other replies that indicate some moderators might have been on crack, but let's take this simple suggestion seriously for a second and see where it goes.

      If we were to put gnome-terminal and everything that it requires on a card, that card would hardly be a "graphics" card any longer. This would be a general-purpose device with access to (and thus assumptions about) all sorts of OS details. It would be managing its own IPC, creating devices in the filesystem, sockets on the network... it would be a BEAST.

      Ok, let's just back off a second. *What* was the actual proposal? Well, someone wanted to speed up Gnome and remove some of the "bloat" by putting it in hardware.

      A smaller subset of that is not only relatively easy, but quite desirable. An implementation of GDK (the abstraction layer that Gtk+ uses to talk to X or Windows or console graphics) in hardware would go a long way toward eliminating the need for an X server entirely (the other pieces would be an OpenGL interface that Mesa could talk to, a set of window management primitives and a screen saver interface). This would be much more reasonable than putting an X server in hardware, since it would provide a higher-level interface; at the same time you could still upgrade to the latest Gtk+ lib (assuming that its GDK supported the card's interfaces), and your version of Gnome would be totally independent of the card (except for the screen saver and window manager).

      Such a device would give you much better Gnome performance, reduced footprint, a lack of need for the X server (in limited desktop environments where existing X applications were not needed) and an API that even Qt could be ported to (yes, Qt could be implemented on top of GDK).

      • The DirectFB project [directfb.org] is already doing this. They've got a setup to do entity rendering directly into the hardware framebuffer. Look around the site and see for yourself. It looks pretty impressive and does pretty much exactly what you're saying if I'm reading you correctly. GTK+ runs right now on top of DirectFB as well as directly on the system frame buffer. Gotta love GDK!
    • Actually, what you're thinking of is already here: E17. A hardware-accelerated desktop. Runs fast, looks pretty. Mostly only the graphics routines need acceleration, since the other stuff doesn't take much processing. (If it does, you're using bad algorithms, and bad algorithms suck whether software- or hardware-rendered!) I only wish all apps could somehow take advantage of E17's features. Raster should really think about maybe forking GNOME or something to create an E17 DE.

  • I'm excited that GNOME 2.0 has finally debuted, but what else has debuted along with it?

    Does/Will it have built-in anti-aliasing? Is it considerably faster than 1.4? What is the main concern the GNOME development team is taking into consideration in regard to 2.0? Does anyone have any further information on it? The LinuxToday article doesn't really answer any of the questions a lot of people are asking.
    • From everything I've read, the biggest user visible change in GNOME 2.0 will be that they are using Gtk+ 2.0. Gtk 2.0 includes anti-aliasing and lots of other fun features (look at the section about Pango at www.gtk.org).

      Otherwise, I don't know if anyone knows how speed will or will not improve, since the core libraries are only just now getting their APIs completely frozen. Apps will need to be fixed to use the new APIs; then we'll see how it performs (and developers will be able to tune accordingly).
  • There're some redundancies in the story.

    Find it at you can grab it ...There's some more information information

    Get rid of the "Find it at" and the second "information". Fix those and I'll vote it +1,FP!

  • They were talking about enhanced transparency support in GTK 2.0. Does anyone know if that got in there? I've got a great idea for a smoked-glass GTK theme that was impossible to implement in GTK 1.2.
  • I got trained on XP today at my company (I do tech support for an ISP)... they were showing us the new "Luna" interface. I accidentally asked what window managers you can choose from, and whether there was more than one desktop environment you could run... DUH, this is MS, it's their way or the highway.

    Go ahead and try to put windows into different layers on MS (always on top?). Anyone who says that MS is easy to use just doesn't understand what's missing.
  • It's cool to see this starting to come to fruition, but there are problems that we need to keep in mind.

    Most things in Linux have an incredibly short product cycle. While this means good things get to the public faster, it also discourages some developers. When you have a different libc or a different toolkit API coming out every six months, it's hard to convince some people it's worth developing for. If you developed against Windows 95, for example, your program still runs today, even without recompilation. Where were Linux systems back then? Everything about typical Linux systems has changed since, from standard GUI toolkits (GTK and Qt? Don't think so...) and desktop environments (probably the best you could do was CDE) to such fundamentals as the standard C library. Change is good, but in the world of Linux, change is often made with little to no regard for running the programs of five minutes ago. Binary compatibility is flaky, and even the APIs have changed drastically. These large projects need to give more thought to compatibility, rather than forcing people with GTK 1.2 apps to rewrite for 2.0 or be left behind.
