Gnome 2.0 Alpha 1 Released
Dave H writes "The first pre-release of the GNOME 2 platform is now available!
You can grab it from ftp.gnome.org.
It is of course a technology preview; note that it can't be installed alongside GNOME 1.x." There's some more information posted on LinuxToday.
Love the warning (Score:5, Funny)
users.
That could be put on half or more of the stuff on my box.
Re:Love the warning (Score:3, Insightful)
Re:Love the warning (Score:2)
I don't know what's wrong with the writer's (a Mr. Andrew Orlowski) brain, but the article is full to the brim with stupid and mean comments in the same league:
"working with GNOME software has always been fun, if ultimately fruitless"
"[Gnome's] great gift to the world has been to spur development of the older, more established rival KDE"
"A visiting Martian would surely conclude that the GNOME Project has served its purpose"
"[KDE is] probably two years ahead now"
Makes me wonder what he's trying to accomplish. Why would a big, widely recognized site such as The Register sell ad space alongside mean, stupid and uninformed statements like these?
Backward compatibility. (Score:3, Interesting)
I know a couple of widgets from gtk1.2 are deprecated; CList is one of them. But will GNOME 2 also include gtk1.2, or only gtk2.0?
And does deprecated in the gtk2.0 case mean "not there" or "could disappear in the future"?
Re:Backward compatibility. (Score:1)
Re:Backward compatibility. (Score:5, Informative)
Deprecated means "will disappear in some future version, when not many people are using it anymore"
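For readers unfamiliar with the term, here's a tiny sketch of what deprecation looks like in practice. This is an illustrative Python analogue with made-up names, not GTK's actual mechanism (GTK 2 instead hides deprecated C widgets like GtkCList behind compile-time guards such as GTK_DISABLE_DEPRECATED), but the contract is the same: the old call still works, it just warns so you can migrate before it disappears.

```python
# Hypothetical sketch of "deprecated but still present" (not real GTK API).
import warnings

def clist_append(rows, row):
    """Old-style API: still functional, but scheduled for removal."""
    warnings.warn("clist_append is deprecated; use the tree-view API",
                  DeprecationWarning, stacklevel=2)
    rows.append(row)
    return rows

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    rows = clist_append([], ["hello"])

assert rows == [["hello"]]                       # the call still works
assert caught[0].category is DeprecationWarning  # but it warns
```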
Re:Backward compatibility. (Score:2)
No new toys. :( (Score:1, Redundant)
Damn, KDE users are getting all sorts of new toys to play with, was hoping Gnome was gonna give me some too. :)
acm
Wow (Score:1)
Re:Wow (Score:1, Funny)
where's the race? KDE appears to be ahead 3 to 2.
I'm much more interested in seeing if RedHat (at 7.2) can catch Mandrake (up over 8 now)
Look is apparently the same (Score:4, Informative)
Re:Look is apparently the same (Score:3, Insightful)
ftp mirrors (Score:5, Informative)
Guess again. :-)
http://www.gnome.org/mirrors/ftpmirrors.php3
ftp://ftp.twoguys.org/GNOMEg /pub/GNOME/s es/gnome-2.0-lib-alpha1/
ftp://ftp3.sourceforge.net/pub/mirrors/gnome
ftp://ftp.rpmfind.net/linux/gnome.org/
ftp://ftp.sourceforge.net/pub/mirrors/gnome/
ftp://ftp.cse.buffalo.edu/pub/Gnome
ftp://ftp.yggdrasil.com/mirrors/site/ftp.gnome.or
ftp://ftp.sunet.se/pub/X11/GNOME/pre-gnome2/relea
Go fish! :-)
Re:ftp mirrors (Score:1)
As soon as the mirrors update, you can get the release from:
ftp://ftp.gnome.org/pub/gnome/pre-gnome2/releas
after you check out gnome2a1, check out kde3a1 (Score:4, Informative)
Date: Tue, 2 Oct 2001 17:22:16 +0200
From: Dirk Mueller
I delay alpha1 release until Friday to give us more time to fix and verify the recent regressions in KIO and khtml.
Also, there will be a kde 2.2.2 release soon, check http://developer.kde.org/development-versions/kde
Re:after you check out gnome2a1, check out kde3a1 (Score:2, Flamebait)
I love catching these kind of moderators in metamod. I hardly ever give anyone fair for modding someone down unless the person was obviously a troll. I usually leave it be, and if it's anyway iffy on the moderation I'll mark it unfair. Don't like this? Then fucking stop modding people down! Let them post at 1, just don't mod em up.
It pisses me off when people vote down a perfectly valid comment that runs along the same lines as the story, just because it's not exactly on topic. Which is why I'm posting this as AC, because I'd probably instantly be moderated down to -1 offtopic (which I am, but hey). Normally I wouldn't care, but that affects my normal account and what I can post at.
GNOME, a thought (Score:2, Interesting)
There are alternative GUIs out there, for Linux & Unix - Berlin for example - but they're either not compatible with X applications and/or the X protocol, or they're not mature enough to be usable.
Most Unix manufacturers go the other way. The sample X implementation may be broken, in many ways, but it's still a good place to start. So they write their own version of X, either from scratch, or using the sample X tapes as a starting point. This certainly produces a faster implementation, but it still doesn't tackle the complexity issue, and none of these are Open Source or Free Software.
IMHO, what's needed is a GUI that'll do for X what RISC architectures did for processors. Produce a MUCH simpler underlying architecture, using layers to provide more and more complex functionality.
How does this relate to GNOME, since that's where I started? Easy. Either GNOME or KDE is in a key position to write this "layered X", since they are projects sufficiently wide in scope to understand where bottlenecks and bugs creep in. Nobody else really has that kind of breadth of information.
Wouldn't it be better to pile effort into Berlin? There are too many problems with the approach taken. CORBA is known for horrible overheads, for example, and the CORBA implementation used is, AFAIK, not the same as the one used by either GNOME or KDE, which means a combined effort will require extensive rewriting.
This would make a nice op-ed at K5 (Score:1, Redundant)
Re:GNOME, a thought (Score:2, Insightful)
But isn't this exactly what X is? The X server is just a very dumb program that only knows how to draw lines, boxes, circles, and fonts. Everything else (i.e., the complexity) is layered on top of this through toolkits and window managers.
A GNOME program uses the simple GTK toolkit to provide the GUI. GTK uses Xlib which uses X. The complexity is layered.
Furthermore, neither the application nor the toolkit needs to worry about how the window is managed; this is taken care of by the window manager program. The window manager interacts with the user and moves, resizes, and iconifies windows. Layered complexity once again.
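The layering described above can be caricatured in a few lines of Python. All names here are invented for illustration - this is not Xlib or GTK - but the shape is the point: the "server" knows only primitives, and the widget composes them without the server ever learning what a button is.

```python
# Toy sketch of layered GUI complexity (hypothetical names, not real X/GTK).

class DumbServer:
    """Knows only primitives, like the X server drawing lines and boxes."""
    def __init__(self):
        self.ops = []
    def draw_box(self, x, y, w, h):
        self.ops.append(("box", x, y, w, h))
    def draw_text(self, x, y, s):
        self.ops.append(("text", x, y, s))

class Button:
    """A 'toolkit' widget: composes primitives; the server stays dumb."""
    def __init__(self, label):
        self.label = label
    def render(self, server, x, y):
        server.draw_box(x, y, 80, 24)
        server.draw_text(x + 8, y + 16, self.label)

srv = DumbServer()
Button("OK").render(srv, 10, 10)
assert srv.ops == [("box", 10, 10, 80, 24), ("text", 18, 26, "OK")]
```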
Re:GNOME, a thought (Score:2)
Maybe so, but then the question is "Why the heck is X so _huge_?" I mean, come on, if you're going to write hundreds of thousands of lines of code, then they should do something more than provide something so minimal that you need to write another hundred thousand lines of code to get a halfway decent interface.
Re:GNOME, a thought (Score:2, Interesting)
The idea that X is huge is greatly exaggerated. X itself isn't that large, but the total package looks much bigger than what you actually use because of the need for a zillion drivers. Yes, X could have a greatly simplified system that took much less code, but it would come at the expense of not being able to take advantage of the features in advanced graphics cards.
Re:GNOME, a thought (Score:5, Informative)
X was written from a frame buffer perspective, and had acceleration hacked in over time, until Mark Vojkovich developed a standard for it (XAA, IIRC). Attempts to go towards a rendering pipeline are embodied in the excellent work in Xrender.
The drivers are all fairly minimal bits of code.. most of them rely on other modules to initiate standard display setting, etc.
A lot of the "cruft" in X is related to the I18N schtick that got hacked into R5, I think. More cruft comes from PEX (the long-dead competing standard to OpenGL), the horrible toolkit helper implementation known as Xt, the keyboard and colormaps (scary), and the seldom-used XPrint and Xnest servers as well.
More cruft comes in with several implementations of frame buffer code (fb, cfb, cfb16, cfb24, cfb32, mfb); XAA kind of added a layer below these original "drivers."
Also, there is a huge amount of interface code from X to toolkits such as gtk/qt. This code is mostly hidden in the X11 libs. Do a stack trace when drawing a button in GTK with X11 debugging on.. it is truly horrid (13 deep to draw a clipped line), and doesn't show the server side of the mess.
Also, X has a very synchronous rectangle management core. The server keeps a list of all viewable rectangles and updates the whole list after every rectangle update. (Slow window movement, anyone?)
The biggest problem with X is simply the fact that toolkits have been relegated to client apps, instead of being loadable into the X server.
Oftentimes core X developers argue that this is dangerous, even saying that client-side apps are faster; they are fixed in their minds that X is the only way to go. A huge chunk of code goes to all the abstraction (known as mutilation-by-code in my book) and platform independence.
By no means should we throw away all that knowledge, but it should be second tier to providing native interfaces IMHO. Larger processor caches and faster asynchronous graphics chips somewhat nullify this argument these days, but the fact remains that X would be a lot faster without it.
In fact you're starting to see X as simply a pixmap display device in the end. All the toolkits are basically just blasting pixmaps into the server, because X can't handle much of the advanced graphics now anyhow.
Yet sitting down to a windows box is proof positive that X is slow. I'd say that a good rewrite would do X a world of good. Let applications communicate in terms of toolkit messages (add widget tree instead of get gc, 8 drawlines, 3 fills, and get font, set font, get colormap, set colormap, draw text).
Of course this could be *maybe* be done with an X extension, but there are a few limitations of what X extensions can perform without going and adding more hacks into the X server.
All in all, X11 is a fine piece of work. The work done in the past 2 years is fantastic to say the least. All the linux companies and the freetype, mesa, and DRI developers really deserve a major pat on the back. I really enjoy the engineering talent and ingenuity displayed by the XFree team.
Cleaning up X, or rewriting it would be a major step in the right direction.
A funny thing about Windows is that they have the opposite problem. Applications are often tied _too_ closely to the GDI, and often break between versions. No doubt, a few graphics-intensive applications from win31 would break on win2k.
Pan
Re:GNOME, a thought (Score:2)
Xnest is great, it's perhaps one of the best things about X. If you don't know why, you don't know X.
Re:GNOME, a thought (Score:2)
Pan
X is not slow (Score:2)
Yet sitting down to a windows box is proof positive that X is slow.
Repeat after me:
X is not slow!
X is not slow!
X is not slow!
It is the toolkits that are built on top of X that are not tremendously fast, in particular GTK+ and Qt (GTK+ seems somewhat worse than Qt in this respect, but neither is exemplary).
Proof:
Open up an application that uses one of the older, simpler toolkits such as Xt. A simple xterm perhaps, or xman, or xpaint. Enlightenment is also blazing fast. Play. See that X is in fact very, very fast indeed.
Now why is this? Why do the modern GUI toolkits appear to be slow?
Well, I think it comes down to optimization and architectural work. Both Qt and GTK+ are big libraries that attempt to do a great deal of work. But, for instance, neither of them use threads by default. Both use a technique known as an event loop to simulate threaded behaviour, but this is not ideal in terms of speed or efficiency.
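The event-loop technique just described can be sketched in a few lines. This is an illustrative toy, not the real GTK+ or Qt main loop: a single thread drains a queue of events and runs each callback in turn, which "simulates" concurrency by interleaving - and which is exactly why one slow handler stalls the whole GUI.

```python
# Minimal single-threaded event loop sketch (illustrative, not GTK+/Qt API).
from collections import deque

class EventLoop:
    def __init__(self):
        self.queue = deque()
        self.log = []
    def post(self, name, handler):
        self.queue.append((name, handler))
    def run(self):
        while self.queue:                  # one thread drains the queue
            name, handler = self.queue.popleft()
            self.log.append(name)
            handler(self)                  # a slow handler blocks everything

loop = EventLoop()
# An expose handler that queues follow-up work, plus an unrelated keypress:
loop.post("expose", lambda l: l.post("draw", lambda l: None))
loop.post("keypress", lambda l: None)
loop.run()
assert loop.log == ["expose", "keypress", "draw"]
```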
Why do they not use threads? Because of cross-platform compatibility issues. Until very recently, FreeBSD's pthread implementation was thoroughly broken, and FreeBSD is a major target for both GTK+ and Qt. So, although Qt, for instance, has had its own thread API and the option of being threaded internally for some time (since qt 2), this has been switched off by default on all *nix platforms until FreeBSD got their act together.
Threading of the toolkits and the desktops and apps built around them will probably be the most significant single optimization to come, but there is other optimization work to be done too. Give it a little time, it will happen.
I'm sure I need not point out that the toolkits that sit atop the Windows GDI are, for the most part, pervasively multi-threaded, and this is where much of their perceived speed comes from.
But please do not blame X for the failings of the toolkits built on top of it. My (admittedly subjective) impression is that when blasting pure Xlib at X, it is at least as fast as raw GDI calls in Windows (see Xscreensaver vs. Windows screensavers for evidence of this).
Re:X is not slow (Score:2)
Yea, you can now fit the main X event loop and small applications into a processor's secondary cache. The major applications don't benefit from this, but more from faster busses and graphics chips. (Drawing is now a minor part of the time spent in X due to 2D acceleration.)
Also, keithp's reworking of the main event loop a couple of years ago was amazing.
My point was that architecturally X encourages massive abstraction for client toolkits. Who would want to be tied to the color or font models X presents?
Older toolkits were designed for 68k processors - you're saying the equivalent of opening up Windows 3.1 on a PIII. Enlightenment uses enormous amounts of pixmap copies - you are seeing X's good optimizations in SHM and protocol. Raster actually spends a good bit of time running test cases for optimization.
I will blame X for the failings of toolkits. The choice to relegate toolkits to the client side is a failure that was realized years ago by most graphics programmers. NeWS was a decent attempt to fix this, but went too far in aims and goals.
I think that the fear of losing the few commercial applications that X has keeps X11 going as is. (Open source apps could easily be ported, slowly making more use of server-side toolkits.)
I don't want to deride X too much - it is a _very_ successful and usable windowing system. I just believe that it's time for X12.
Anyhow, one of these days I plan on putting my money where my mouth is. X is so modular now that it is probably very doable. A lot more of the modules have good commentary and docs than ever before.
Pan
Re:GNOME, a thought (Score:2)
No, it's more than that. The drivers are a relatively small part of the X codebase.
Re:GNOME, a thought (Score:2)
Think about that for a second. I have a *Lisp* system that takes less disk space than 12M. That is a huge, huge amount of code. Sure, it is less than the drivers, but considering that X does very little, it is positively *enormous*.
Re:GNOME, a thought (Score:2, Insightful)
Ok, but first of all this is stuff used by X clients, not the X server. These are things like libX11 and libXt, essential APIs for X clients. Then you have the fact that a lot of these APIs have been pretty much deprecated for modern X programs. Gnome doesn't use X Intrinsics or the Athena widgets; if you're running mostly gnome programs, libXt and libXaw are almost never loaded. Then there are things sitting around in there like libPEX - don't even try to say you've used a PEX-based program in the past 4 years. All in all, you're probably only going to use libX11, libICE, libSM, libXm and, if you're using pretty antialiased fonts, libXft and libXrender. That's about four megs.
Re:GNOME, a thought (Score:2)
However, my vote goes for Berlin, using the GGI project's stuff. The project is concentrated on getting it right. Here are the reasons I believe it is better to support the Berlin project:
1) Better design. They are focused on doing it right. So many systems are focused on getting it done fast, and so few seem to worry about high quality. Yes, Berlin is slow in coming. But when it is ready, whenever that may be, it will be truly awesome.
2) Corba is not necessarily a bad thing. It depends on how it is used. Yes, a Corba call is relatively expensive, but for things like graphics over the network (where such things are likely to matter the most) the number of calls is sufficiently small that, compared to the X method of blasting bits across the network, things should actually improve. Also, remember that machines will continue to get faster. Overhead will be worthwhile for more flexibility and power. And when the machines are there, Berlin will not have to be rewritten to take advantage of them.
Yes, a lot of applications would have to be rewritten. But the potential benefits, and the fact that an X compatibility layer is not out of the question (since both systems are open, that's a big plus), make the future transition tolerable. Apple rewrote their graphical desktop and released OSX. We can do the same - only we won't have to run an entire classic environment. It can work. And when it does, Berlin will begin to redefine the desktop computer experience.
Re:GNOME, a thought (Score:2)
It's that idea that makes Linux GUIs suck performance-wise. Power is rarely worth the tradeoff in speed and efficiency, since very little software ever exposes more power. Don't get fooled into equating features with power, BTW. Power is being able to quickly do the work you need to do without the system getting in the way. Most of the stuff that these desktop environment developers think is power (network transparency, CORBA, etc) is really just mental masturbation and has little significance on the desktop.
Re:GNOME, a thought (Score:2)
To answer your sig: "That thud you just heard was all the former BeOS users throwing their PC's out the window..."
If you want to talk about efficiency in GUIs, BeOS has an awesome graphical system. (I used R3 through R5.) Too bad the company fscked up so bad. Oh well. We all know the command line is where the real power is. (GUIs are nice on laptops though. It just seems more appropriate to run a GUI.) Sorry for the off-topic post.
Re:GNOME, a thought (Score:2)
Re:GNOME, a thought (Score:2)
It seems similar -- GGI uses KGI, where I suppose DirectFB uses the framebuffer. The advantage being, I suppose, that the framebuffer is included in the main kernel where KGI has always been a patch.
But the problem with the framebuffer is that it is so darn slow. Perhaps reasonable in hardware that doesn't have any graphics acceleration (like on a handheld), but not useful on normal computers. I don't know if there is any real effort to ever make the framebuffer any faster -- the very name seems to imply non-accelerated simplicity.
I think the path away from X involves factoring the pieces better -- maybe that can even save X, as Xlib isn't really the problem, it's all the other half-assed crap that goes with X.
Re:GNOME, a thought (Score:2)
Re:GNOME, a thought (Score:5, Informative)
it's always hidden behind toolkits.
X doesn't have a drag-and-drop system, so I don't see how ROX could use it. DND is built on top as a custom protocol (Xdnd) shared by GTK, Qt, etc.
I would guess that ROX just uses Xdnd, isn't it GTK-based?
Berlin is far more complex than X.
Porting GNOME/KDE to Berlin would be infeasible, but said infeasibility would have nothing to do with different CORBA implementations.
Most UNIX vendors do not reimplement X, they are basically using the open source implementation with some minor tweaks. The open source implementation (primarily maintained by XFree these days) is generally more robust than the proprietary ones.
My observation of why X sucks (Score:2, Interesting)
it's always hidden behind toolkits.
I think the major flaw with X is not its excessive resource usage, complexity or speed, but the fact that it has no standard toolkit.
While a lot of linux kids see the ability to use any toolkit (or even implement their own) as a good thing, I see it as a huge hindrance to usability. A user has to learn the different behaviours of GTK, Qt, Motif, Athena and virtually countless others, all with their own looks, hotkeys and ways of doing things. Aside from the "feel", the "look" of X will always be discordant, further slowing the already confused or annoyed user down in a quagmire of gradients and chrome.
IMO, if linux (or any UNIX aside from OSX) is going to have any chance at the desktop market, X either has to standardize and enforce a single toolkit, or be replaced by something more modern.
C-X C-S
DisplaySVG? (Score:2)
Re:GNOME, a thought (Score:3, Funny)
X-Windows: ...A mistake carried out to perfection.
X-Windows: ...Dissatisfaction guaranteed.
X-Windows: ...Don't get frustrated without it.
X-Windows: ...Even your dog won't like it.
X-Windows: ...Flaky and built to stay that way.
X-Windows: ...Complex nonsolutions to simple nonproblems.
X-Windows: ...Flawed beyond belief.
X-Windows: ...Form follows malfunction.
X-Windows: ...Garbage at your fingertips.
X-Windows: ...Ignorance is our most important resource.
X-Windows: ...It could be worse, but it'll take time.
X-Windows: ...It could happen to you.
X-Windows: ...Japan's secret weapon.
X-Windows: ...Let it get in *your* way.
X-Windows: ...Live the nightmare.
X-Windows: ...More than enough rope.
X-Windows: ...Never had it, never will.
X-Windows: ...No hardware is safe.
X-Windows: ...Power tools for power fools.
X-Windows: ...Putting new limits on productivity.
X-Windows: ...Simplicity made complex.
X-Windows: ...The cutting edge of obsolescence.
X-Windows: ...The art of incompetence.
X-Windows: ...The defacto substandard.
X-Windows: ...The first fully modular software disaster.
X-Windows: ...The joke that kills.
X-Windows: ...The problem for your problem.
X-Windows: ...There's got to be a better way.
X-Windows: ...Warn your friends about it.
X-Windows: ...You'd better sit down.
X-Windows: ...You'll envy the dead.
Copied from this page [catalog.com].
It's already being done... (Score:2)
New ORB. (Score:5, Insightful)
BTW: Great job on the multilingual support! As someone who likes to have his desktop in Traditional Chinese, this is a big deal for me.
Re:New ORB. (Score:2)
How much simpler can you get than a 3-line Python random.org CORBA client?
GNOME Stability (Score:2, Insightful)
Dia is under GNOME/stable. gdk-pixbuf is under GNOME/unstable. Anyone see the problem here? Who in their right mind can call Dia "stable" when it relies on an "unstable" library?
Re:GNOME Stability (Score:3, Insightful)
Re:GNOME Stability (Score:2, Insightful)
You shouldn't really be looking in the stable/ unstable/ dirs in GNOME's ftp.
Re:GNOME Stability (Score:2)
Another thought... (Score:2, Interesting)
"But hardware != software", I hear some cry. Well, sorry to break it to you, but software is simply a simulation of hardware. There is nothing that you can do in software that you can't do in hardware. Faster.
Picture this - a graphics card that has a pure hardware implementation of XFree86 4.1, Gnome 2, and (just for the hell of it) KDE 2.2 as well. Nothing on the computer, the graphics is done entirely in silicon. This would free up much of the computer's RAM, unload much of the heavier cycle devourers, and produce one of the fastest GUIs on the planet.
"It wouldn't be free, though!"
Free as in free beer? No, it wouldn't, but if you want free beer, you're probably in the wrong place, anyway. You want the beer tent.
Free as in free speech? Why not? The hardware would need to follow GNOME, X and optionally KDE. X is the only non-free component of that. By having a re-implementation of it, you could make the hardware version totally free and totally unencumbered.
Yuck! (Score:2, Interesting)
Re:Yuck! (Score:2)
Implementing in hardware shouldn't be too bad. Since software equates to hardware, you should be able to simply treat the software as a "macro" of the hardware definition. This would give you a version 0.0.0, which your engineers can then run through VLSI emulators to turn into a 1.0.0 product.
Re:Yuck! (Score:2, Insightful)
Given the difficulties we have getting software to work correctly, do you honestly think hardware would be easier? Or even just as easy? Today's hardware only works because the specs are orders of magnitude simpler than even a mildly complex software system.
So you want to use an HDL for this along with a synthesis tool? For synthesis to work, one has to either design a fairly simple piece of hardware or write relatively low-level HDL. In the worst case the designer will essentially write out the netlist. Not to mention the inefficiencies introduced by synthesis. Full-custom design is usually much more efficient, but also much harder to do.
Re:Another thought... (Score:5, Insightful)
While it's true that hardware and software are essentially the same thing (a favorite rant of mine, BTW), it's not true that hardware is necessarily "better" than software, even in the speed department.
If we look at this proposal from a perspective of practicality, it clearly falls down. Hardware is incredibly difficult to debug and change. That is the beauty of software. The fact that complex computer architectures are implemented in terms of software (microcode) only points to this flexibility.
To address your speed claims, I point you to HP's Dynamo project. Dynamo is a dynamic translator for PA-RISC binaries. It is a software system that translates PA-RISC instructions to PA-RISC instructions at run-time. That doesn't seem to make much sense until you realize that the translation includes optimizations that can only be done at run-time. Binaries actually run faster under Dynamo than in native execution mode. By putting in a layer of software, HP was able to increase system speed.
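Dynamo itself is far more sophisticated, but a toy sketch can show the core idea of run-time translation: once the input "program" is actually known at run time, a translator can fold it into directly executable code, eliminating per-operation dispatch that the static path has to keep. Everything below is hypothetical illustration, not HP's system.

```python
# Toy run-time translation sketch (loose analogy to Dynamo, not its design).

def interpret(program, x):
    """Naive execution: re-dispatch on every opcode, every time."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

def translate(program):
    """Run-time 'translation': fold the now-constant program into a single
    compiled expression, removing all opcode dispatch."""
    expr = "x"
    for op, arg in program:
        if op == "add":
            expr = f"({expr} + {arg})"
        elif op == "mul":
            expr = f"({expr} * {arg})"
    return eval(f"lambda x: {expr}")

program = [("add", 3), ("mul", 2), ("add", 1)]
fast = translate(program)
assert fast(5) == interpret(program, 5) == 17  # same result, no dispatch
```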
One cannot do this in hardware because metal and silicon are fixed and FPGAs are too slow. Yes, people are researching reconfigurable hardware, but that is for very specialized applications like DSPs - applications that are already used to boost graphics performance today.
A final observation: hardware gets much of its speed from parallelism. A ripple-carry adder runs much more slowly than a carry-lookahead adder. While certainly running at the speed of light (yeah, yeah, give or take) helps, parallelism (pipelining, O-O-O execution) is what got us the machine speeds we see today.
Parallelism is really, really hard to extract at the instruction level. Theoretically, it's there, but damned if I know how to get at it. Certainly lots of graphics routines have loads of parallelism. But guess what? We already have hardware to exploit it!
Modern GUI's really don't need to be much faster than they are now. We all like high framerates in our pretty games, but those are very specific applications. In fact, good hardware solutions already exist for them. I don't see RAM consumption as a problem, considering that X runs just fine on the iPAQ with room to spare. I have no idea what software you are running, but the CPU usage of graphics code is not even close to the largest consumer of cycles on my machine.
We already have good graphics hardware. Moving the X/GNOME/KDE control into hardware would gain almost nothing.
Re:Another thought... (Score:2, Interesting)
Re:Another thought... (Score:2)
By having everything (or as near to everything as physically possible) on silicon, you turn what is basically a serial stream of operations into one ultra-gigantic parallel process.
When you reach this point, there is no need for an "optimized" X server, as there's really no need for the computer to have any GUI code on it at all. All you'd do is generate X calls, and have the hardware take over from there.
Re:Another thought... (Score:2)
This is one parallelizable step.
Then, you have the problem of when one (or more) of those active processes themselves generates an X event. X must then pull that event into its event handler and farm it out appropriately.
Since a microprocessor can only deal with one thing at a time, what you have is the following:
Not too bad, you might think. But each of these requires a context switch, and those aren't cheap. Now, when you start throwing in GNOME stuff, it gets worse, as you have to bury your way through Gnome's libraries, the lower-level libraries, gtk/glib to Xlib, or vice versa. Again, each of these requires a context switch.
(One context switch = dump current state of current process; load in new process; set up registers for correct state)
Since, in a logical sense, there is no real difference between GNOME and X, context-wise, all those extra context switches are simply wasted computer resources.
As for DRI, I believe that requires kernel stuff, and there's not much kernel stuff there yet. Once there's a decent set of drivers, it might be meaningful.
Re:Another thought... (Score:2)
Let's say you have N active processes that are using X. Then, for every event, you must search a table of at least N entries to see which process that event is to go to.
>>>>>>>
Only if the developers are monkeys. Say you have a keypress event. The event automatically goes to the frontmost window, which is O(1). Say you have a timer event. If you keep the timer list sorted (which is pretty easy; it's called a binary tree), then the lookup is quite cheap. Rarely should X have to use an O(n) algorithm, unless you want to visit every process anyway.
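Those cheap lookups can be sketched in a few lines - this is illustrative pseudocode-made-runnable, not X's actual dispatch code. Keypresses go straight to the frontmost window in O(1), and timers sit in a sorted structure (a heap here, which gives the same O(log n) bound as a balanced binary tree) so the next one due is found without scanning every client.

```python
# Illustrative event-dispatch sketch (hypothetical, not the X server's code).
import heapq

class Server:
    def __init__(self):
        self.stack = []    # window stacking order, frontmost last
        self.timers = []   # min-heap of (deadline, client)
    def deliver_key(self, key):
        return (self.stack[-1], key)               # O(1): frontmost window
    def add_timer(self, deadline, client):
        heapq.heappush(self.timers, (deadline, client))  # O(log n)
    def next_timer(self):
        return heapq.heappop(self.timers)          # O(log n), no full scan

srv = Server()
srv.stack = ["xterm", "gimp"]
assert srv.deliver_key("a") == ("gimp", "a")       # goes to frontmost window
srv.add_timer(30, "clock")
srv.add_timer(10, "cursor-blink")
assert srv.next_timer() == (10, "cursor-blink")    # earliest deadline first
```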
Then, you have the problem of when one (or more) of those active processes themselves generates an X event. X must then pull that event into its event handler and farm it out appropriately.
>>>>
Again, quite cheap. Say you want to send a message to another app. The destination is embedded in the message, so delivery is again O(1) (not counting copying of the message data!)
Since a microprocessor can only deal with one thing at a time, what you have is the following:
* Process fires event
* X receives event
* X directs event
* Process receives event
Not too bad, you might think. But each of these requires a context switch, and those aren't cheap.
>>>>>>>
Actually, events are buffered. So for every few dozen of these transactions, there are only two context switches. Say a process draws a thousand lines. Each one doesn't get sent to X. Instead, they all get put in a buffer, and when the app asks for sync, the buffer is sent to X as one big transaction. (I don't know if X specifically does it this way, but most window servers do).
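The buffering described above can be sketched as follows. Xlib really does batch protocol requests client-side, though the names and structure here are made up for illustration: drawing calls accumulate locally and cross to the server as one transaction at flush time, so a thousand draws cost one round trip instead of a thousand.

```python
# Illustrative request-buffering sketch (hypothetical names, not real Xlib).

class Connection:
    def __init__(self):
        self.buffer = []
        self.round_trips = 0
    def draw_line(self, x1, y1, x2, y2):
        self.buffer.append(("line", x1, y1, x2, y2))  # no round trip yet
    def flush(self):
        sent, self.buffer = self.buffer, []
        self.round_trips += 1       # one transaction for the whole batch
        return sent

conn = Connection()
for i in range(1000):
    conn.draw_line(0, i, 100, i)
batch = conn.flush()
assert len(batch) == 1000 and conn.round_trips == 1
```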
Now, when you start throwing in GNOME stuff, it gets worse, as you have to bury your way through Gnome's libraries, the lower-level libraries, gtk/glib to Xlib, or vice versa. Again, each of these requires a context switch.
>>>>>>>
Except it doesn't. Library calls are just indirect function calls, and they are quite cheap. On a PII, an indirect function call takes less than 10 clock cycles.
(One context switch = dump current state of current process; load in new process; set up registers for correct state)
>>>>>>>>
Fortunately, context switches only happen a few hundred times per second (very rarely, by processor standards).
Since, in a logical sense, there is no real difference between GNOME and X, context-wise, all those extra context switches are simply wasted computer resources.
>>>>>>>
Except that GNOME is a library and X is a separate process!
Re:Another thought... (Score:2)
Well, 4 clock cycles are required to load a long register with the address, and 4 are required to perform a long jump, which leaves you with 2 clock cycles to:
Something tells me that your model isn't, umm, 100% complete?
Then, we get to the library bit. Yes, much of the GNOME stuff is in libraries. But X is an event-driven model, not a sequential model. Thus, you have multiple threads going through your program, not just one.
Now, for the lookup bit. Yes, I know about binary trees. Actually, the "correct" implementation would use an n-ary tree, for much faster lookups. Binary trees simply take log2(N) to search. Which seems fine, until you realise just how many damn things X considers an event!
Then, you've got a secondary problem - sending an event to more than one application. It's certainly possible, which means that your check system can't just search the tree for the first hit, it has to search the tree for EVERY hit. Which reduces it to a linear search. The only advantage in using a tree, for these situations, is that you can get most of the important checks done quickly.
Let's now look at that line-drawing example. 1000 lines get drawn, all get buffered, until there's a sync. Hmmm. That's true, but not typically how it's done. To have smooth graphics, you need to have near-fixed intervals between updates, so you can't just hang around until some coder decides to sync things up. The usual way of coding is to double-buffer and do small amounts of updating at any given time interval.
Re:Another thought... (Score:2)
>>>>>>
I was assuming the code and data were cached.
with the address, 4 are required to perform a long jump
>>>>>
On what proc?
which leaves you with 2 clock cycles to:
Determine if the address is even in memory at the time
>>>>>
Done automatically by the MMU during the memory access to get the code. Also, it only happens if the instruction pointer crosses a page boundary and misses the TLB.
Load the information from swap, if it isn't
>>>>
Well, swap kills performance on any system. Why do you think KDE and GNOME require so much RAM?
Preserve the state of the calling function
>>>>
There's the bulk of the few clocks. Assuming the write is cached, it's a piddly amount of data on x86 procs.
Swap out information as needed, in the event of the called function being swapped in
>>>>>
Again, swap isn't a factor here, since the code is assumed to be in RAM. Swapping will kill even the best code.
Something tells me that your model isn't, umm, 100% complete?
>>>>
Something tells me you should read up on the GCC function calling convention.
Then, we get to the library bit. Yes, much of the GNOME stuff is in libraries. But X is an event-driven model, not a sequential model. Thus, you have multiple threads going through your program, not just one.
>>>>>
I'm going to love the explanation for this one. What exactly are you talking about? You've got event-driven models, threading, and libraries all messed up. First, events are queued, so the cost of doing a context switch to deliver them is amortized over several events. Second, threads have nothing to do with event handling. X isn't multithreaded, and neither GNOME nor KDE uses threads. GTK+ and Qt aren't even fully thread-safe!
Now, for the lookup bit. Yes, I know about binary trees. Actually, the "correct" implementation would use an n-ary tree, for much faster lookups. Binary trees simply take log2(N) to search. Which seems fine, until you realise just how many damn things X considers an event!
>>>>>
Umm, methinks you're mistaken. If you were doing a lookup to deliver an event, you'd look up the process ID to deliver the event to, not the ID of the event. (The event ID would be even easier: there are only 255 possible X event types, so you could use an array.)
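The array idea is easy to sketch. This is a toy model, not X's actual dispatch code, but it shows why a tree buys nothing here: with event types fitting in one byte, delivery by type is a single indexed load.

```cpp
#include <cstdint>

// Toy dispatch table keyed on a one-byte event type code. Unregistered
// types hold a null pointer and are silently ignored. Lookup is O(1):
// one indexed load, no log2(N) tree walk at all.
using Handler = void (*)(int serial);

constexpr int kMaxEventTypes = 256;  // a byte can only name 256 types
Handler handlers[kMaxEventTypes] = {};

int exposes_seen = 0;
void on_expose(int /*serial*/) { ++exposes_seen; }

void dispatch(std::uint8_t type, int serial) {
    if (handlers[type] != nullptr) handlers[type](serial);
}
```

(The handler names above are invented; in real X11 the core protocol's event codes, e.g. `Expose` = 12, come from `X.h`.)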
Then, you've got a secondary problem - sending an event to more than one application. It's certainly possible, which means that your check system can't just search the tree for the first hit, it has to search the tree for EVERY hit.
>>>>>>>>
Sending the same event to multiple apps should be rare. X events aren't a general communications protocol. Also, multicasting is rare for any messaging system.
Let's now look at that line-drawing example. 1000 lines get drawn, all get buffered, until there's a sync. Hmmm. That's true, but not typically how it's done.
>>>>
Except that IS how it's done. BeOS's app_server does it. QNX's Photon does it. X probably does it too (I would hope so!)
To have smooth graphics, you need to have near-fixed intervals between updates, so you can't just hang around until some coder decides to sync things up. The usual way of coding is to double-buffer and do small amounts of updating at any given time interval.
>>>>>
First, the system can wait for either a number of drawing events or a sync, whichever comes first. Second, while you are right that there must be fixed intervals, you have to realize that the intervals are REALLY long. The human eye cannot detect changes faster than 60Hz. That's about 16-17 milliseconds per frame, which is an awfully long time. Thus, the graphics server can afford to wait for a large buffer of drawing events and draw them together. Double buffering does something similar. You buffer all the changes to the framebuffer together, and you display them all at once, 60 times per second (really slowly, for a computer!)
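The arithmetic checks out: 1000 ms / 60 frames ≈ 16.7 ms per frame. The "flush on sync or on a full buffer, whichever comes first" policy described above can be sketched like this (all names here are invented; this is not how X is actually structured):

```cpp
#include <cstddef>
#include <vector>

struct DrawOp { int x1, y1, x2, y2; };

// Toy batching model: drawing ops accumulate and are flushed either on
// an explicit sync() or when the buffer fills, whichever comes first.
class FrameBuffer {
public:
    explicit FrameBuffer(std::size_t flush_at) : flush_at_(flush_at) {}

    void draw(DrawOp op) {
        pending_.push_back(op);
        if (pending_.size() >= flush_at_) flush();
    }

    void sync() { flush(); }  // app-requested flush

    int flushes() const { return flushes_; }

private:
    void flush() {
        if (pending_.empty()) return;
        // A real server would blit the batched ops to the screen here.
        pending_.clear();
        ++flushes_;
    }

    std::vector<DrawOp> pending_;
    std::size_t flush_at_;
    int flushes_ = 0;
};
```

With a buffer of 250 ops, the 1000-line example above costs only four flushes instead of a thousand round trips.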
Re:Another thought... (Score:2)
Okay, in the year 2025, someone comes out with a multi-trillion-transistor chip dedicated to emulating 25-year-old software, slower than any contemporaneous chip. (25 years is probably optimistic for coding this in "pure" hardware.) I'm sure everyone will be thrilled.
Do you understand the difference between CISC and RISC? The whole point of RISC is that dumping more and more stuff on the hardware isn't always the way to speed things up. RISC does a few things fast. Modern CISC chips, like the Pentium, are largely a CISC-to-RISC translator wrapped around a RISC core.
Remember TIGA ? (Score:2)
Re:Another thought... (Score:2)
If we were to put gnome-terminal and everything that it requires on a card, that card would hardly be a "graphics" card any longer. This would be a general purpose device with access to (and thus assumptions about) all sorts of OS details. It would be managing its own IPC, creating devices in the filesystem, sockets on the network... it would be a BEAST.
Ok, let's just back off a second. *What* was the actual proposal? Well, someone wanted to speed up Gnome and remove some of the "bloat" by putting it in hardware.
A smaller subset of that is not only relatively easy, but quite desirable. An implementation of GDK (the abstraction layer that Gtk+ uses to talk to X or Windows or console graphics) in hardware would go a long way to eliminating the need for an X server entirely (the other pieces would be an OpenGL interface that Mesa could talk to, a set of window management primitives, and a screen saver interface). This would be much more reasonable than putting an X server in hardware, since it would provide a higher level interface, but at the same time you could still upgrade to the latest Gtk+ lib (assuming that its GDK supported the card's interfaces) and your version of Gnome would be totally independent of the card (except for the screen saver and window manager).
Such a device would give you much better Gnome performance, reduced footprint, a lack of need for the X server (in limited desktop environments where existing X applications were not needed) and an API that even Qt could be ported to (yes, Qt could be implemented on top of GDK).
Re:Another thought... (Score:2)
Re:Another thought... (Score:2)
Re:Another thought... (Score:2)
Re:Another thought... (Score:2)
Re:Another thought... (Score:2)
Re:Another thought... (Score:2)
why not just run X on a separate, dedicated box?
You can run X applications on a separate server, but the machine you're sitting at still needs to run X in order to do the hardware interaction.
Of course, certain parts of X can be moved to the other box, including the Font server, the window manager, etc.
Re:Another thought... (Score:2)
If you implemented this in raw hardware, you should get a sizable speedup, as there is a sizable amount of parallelism present. In software, you can only run one task per processor. In hardware, you can run one operation per link between gates.
Maintenance a nightmare? Oh, certainly. If anything, I'd say "nightmare" is too mild. This kind of stuff would give the average hardware guy post-traumatic stress disorder. I'm not joking, either. We're talking about VLSI on a scale far beyond anything that's been done.
More Detail? (Score:1)
Does/Will it have built-in anti-aliasing? Is it considerably faster than 1.4? What is the main concern the GNOME development team is taking into consideration with regard to 2.0? Does anyone have any further information on it? The LinuxToday article doesn't really answer any of the questions a lot of people want answered.
Re:More Detail? (Score:2)
Otherwise, I don't know if anyone knows how speed will or will not improve, since the core libraries are only just now getting their APIs completely frozen. Apps will need to be fixed to use the new APIs; then we'll see how it performs (and developers will be able to tune accordingly).
Re:Anti-aliasing (Score:2)
-1, dump it (Score:2)
Find it at you can grab it
Get rid of the "Find it at" and the second "information". Fix those and I'll vote it +1,FP!
GTK 2.0 (Score:2)
Compared to MS Windows, Gnome ROCKS!!! (Score:2)
go ahead and try to put windows into different layers on MS (Always on top?). Anyone who says that MS is easy to use just doesn't understand what's missing.
Product cycle... (Score:2)
Most things in Linux have an incredibly short product cycle. While this means good things get to the public faster, it also discourages some developers. When you have a different libc and a different toolkit API coming out every six months, it's hard to convince some people it's worth developing for. If you developed against Windows 95, for example, your program still runs today without recompilation. Where were Linux systems back then? Everything about typical Linux systems has changed since then, from standard GUI toolkits (GTK and Qt? Don't think so...) and desktop environments (probably the best you could do was CDE) to such fundamentals as the standard C library. Change is good, but in the world of Linux, the change is often done with little to no regard for running the programs of five minutes ago. Binary compatibility is flaky, and even the APIs have changed drastically. These large projects need to give more thought to compatibility, rather than forcing people with GTK 1.2 apps to choose between rewriting for 2.0 and being left behind.
Re:Wow! (Score:1, Funny)
Re:Slower progress compared to KDE (Score:2, Informative)
Re:Slower progress compared to KDE (Score:3, Insightful)
This might not be an issue for the open-source community, but it sure is an issue for all the companies that have to make a living. This is why companies like Sun, HP, etc. have chosen GNOME as their next desktop.
Just my two bits "01" - It's a fact, like it or not.
Re:Slower progress compared to KDE (Score:2)
Don't go getting the idea that Sun and HP hate the idea of non-free (or too-free) widget sets -- they kept CDE and Motif going far past the time when it should have been quietly taken out back and put down.
But that said, I hope all the money pouring into Gnome has a positive effect on the project. They are currently at the same stage KDE was at when changing to Qt 2 -- 18 months between stable releases of the whole codebase. Gnome 2 will hopefully be as big a step in usability over 1.x as KDE 2 was over KDE 1.
The UNIX software community needs healthy competition
Re:Slower progress compared to KDE (Score:2)
I thought that Trolltech had finally pulled their head out of their ass when they got away from the QPL...I guess not.
Ok, this I really don't get. Would you rather they stayed with the QPL? Or perhaps you would rather they chose a different alternative license instead of the GPL. Let me try to guess. I'll start by picking the obvious one. You want Qt to be BSD or similarly licensed so that you can develop closed source apps, or libraries to aid in someone else's closed source development.
No, of course not. You are immensely anti-closed-source. That's why you don't like the QPL. It's not compatible with GPL, which RMS liked to rant about, thus you dislike QPL. Fine, Qt is now GPL.
So which is it? What do you have a problem with? At least say something more obvious like "I don't want Trolltech making money" or "Down with the GPL!" or something. Right now I am confused. Perhaps I lost your point somewhere between my couch cushions.
#define PROGRESS productive_end_user (Score:4, Insightful)
Re:Not really... (Score:2)
2. Nautilus supports tab-completion (not that you need it since it tries to autocomplete anyways).
You could at least try using a recent version of the apps before you bitch about them.
Re:Frist Porst, GRR! (Score:1, Offtopic)
Re:A bit of thought on the evolution of the GNOME. (Score:1)
Of course I admire the GNOME team, but I'd like that their brains joined the KDE team.
They aren't enemies. They code for the same reasons so you are not doing anything bad joining KDE.
Re:A bit of thought on the evolution of the GNOME. (Score:5, Insightful)
By "most productive", did you mean "only"?
I promise you that those of us who refuse to use C++ do not do so out of ignorance. Quite the opposite, in fact: I don't use C++ precisely because I know more about it than you.
Re:A bit of thought on the evolution of the GNOME. (Score:3, Informative)
I've been using C++ since 1990! I helped port g++ v1.35 to the Atari Mega 4ST. I've followed the language evolution all the way till now. Many of my projects use C++.
Yet, many C++ projects that I see being done by other people are horribly misguided and doomed to failure. There are very good reasons to want to stick to C code!!!
Trolltech's QT lib is NOT one of them. For the most part, QT is ok.
--jeff
Re:A bit of thought on the evolution of the GNOME. (Score:3, Informative)
I'll do the short version.
#1 Most people who say they are C++ programmers are actually not properly educated in it, and have no or very little understanding of exception safety, const correctness, mutable, or covariant return types.
#2 Code re-use is a fallacy. Very often projects are factored way too much in the name of code reuse. What is important is GOOD DESIGN MEETING THE SPECIFICATIONS. Code re-use may or may not be part of that. When it is, it is a major thing. It does not come automatically because you typed 'class' instead of 'struct'.
#3 The C++ Fragile Base Class Problem. http://2f.ru/holy-wars/fbc.html
#4 C++ is a multi-paradigm language. Not only procedural, not only pseudo-OO, but generic programming too. Quite often the generic solution is the best solution under C++. I've never actually physically met more than two people who understood generic programming. sigh.
#5 Many C++ compilers just plain suck. You have to code for the lowest common denominator for the platforms that you are interested in.
#6 There is no (and can be no) standard binary API for C++ libraries. Other languages have a much harder time calling C++ libs than C libs.
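Point #6 is why C++ libraries that want to be callable from other languages usually export a flat C facade. A minimal sketch (the `Counter` class and `counter_*` names are invented for illustration): the C++ type stays behind an opaque pointer, and only `extern "C"` symbols, whose names aren't mangled and whose calling convention is stable, cross the library boundary.

```cpp
// The C++ class is an internal detail; its layout and mangled names
// never appear in the public interface.
class Counter {
public:
    void add(int n) { total_ += n; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

// Flat C interface: any language's FFI can call these four functions.
extern "C" {
    void* counter_new()               { return new Counter(); }
    void  counter_add(void* c, int n) { static_cast<Counter*>(c)->add(n); }
    int   counter_total(void* c)      { return static_cast<Counter*>(c)->total(); }
    void  counter_free(void* c)       { delete static_cast<Counter*>(c); }
}
```

The price is that everything C++-specific (overloading, exceptions, templates) has to be flattened away or caught at the boundary.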
--jeff
Re:A bit of thought on the evolution of the GNOME. (Score:2)
One of the unfortunate requirements with Qt is the ability to be compiled with VC++ 6. This alone causes problems with wanting a good design. I myself have found cygwin/mingw32 to finally be usable for all my win32 projects, so maybe now we can drop the 'lame compiler compatibility' requirements.
I think that the presence of the signal/slot preprocessor for Qt shows a fundamental problem with practical C++. I didn't say Qt was 'GOOD' I said for the most part it is OK. Better than Microsoft's MFC and Borland's VCL. Better than wxWindows. A real option for multiplatform apps.
--jeff
Re:A bit of thought on the evolution of the GNOME. (Score:2)
Also bear in mind that file managers like Nautilus, and to an even greater extent the Windows XP version of Windows Explorer, are becoming more and more like a document-centric operating environment in and of themselves (as opposed to the application-centric OS as a whole).
As it stands today, you can start Windows XP, maximize Windows Explorer, hide the taskbar, and still have a very functional OS. You can download pictures from a digital camera, edit them to a limited degree, burn CDs, browse the web, and do email all from within the file browser.
So don't discount the importance of a "half-assed file manager". OSes are too set in stone to change the face of computing. Applications like the web browser (and now the file manager) grow from small incidental applications into robust environments that can change the way we use computers.
-Erik
Re:A bit of thought on the evolution of the GNOME. (Score:2, Insightful)
This is clearly false. Qt-free is very much GPL'd. I don't know what commercial implications you are talking about. There is absolutely nothing in the Qt license to prevent it from being ported to Windows. The only commercial implication I can think of is that the application compiled against it must be licensed under the GPL. But that doesn't seem to be a concern for you.
Re:A bit of thought on the evolution of the GNOME. (Score:2)
If all you're concerned about is free software, both are quite OK to use (from a legal and pro-free-software perspective). This was not always the case, but it is now.
Qt Free is as good as it gets (Score:2)
Qt is owned by Trolltech, which sells more advanced versions of Qt. This means that if someone wanted to add new features to the free Qt, like for instance the ones included in the commercial versions, and Trolltech didn't like it, a new branch would need to be started.
Just to set the record straight:
Qt Free edition (licensed under either the GPL or the QPL, according to your taste) is identical in every way to the full Qt Enterprise edition that is Trolltech's premier commercial product.
Let me reiterate that: Qt Free edition is not cut down in any way whatsoever!
After all, why should it be? It is licensed only for Free software development, so it does not and cannot interfere with Trolltech's sales to commercial developers.
Thus your scenario of a Qt Free edition fork occurring due to people reimplementing features present in Qt Enterprise edition will not happen - because there are no features to reimplement!
Re:GNOME==bloat (Score:1)
If you don't like Bloat, use Gnome apps in Afterstep or Windowmaker.
Re:GNOME==bloat (Score:2)
KDE just has way too many undocumented features that are hard to tweak - I use this stuff because I *like* to tweak things. Gnome *was* much sloppier than KDE, but has really caught up. When I finally realized I hate the desktop metaphor - WindowMaker doesn't need it, and I don't either - I switched back. It was around that time that I realized that I think the Gnome apps are way ahead. I've been using Gnumeric and I actually find it far easier to use than, say, Excel.
In the long run, it would be nice if their constituent apps could run smoothly without loading the whole framework, if the background stuff (various little daemons) got loaded only when needed (KDE is moving away from this, Gnome towards it), and if someday they could settle on one sound daemon (I'm currently pitching for esound); personally, cut-n-paste from X is about all I can see needing real soon...
Re:GNOME 2.0--wheres my Enlightenment (Score:2)
It's easier to download an Enlightenment rpm and have it appear in the control panel than it is for me to download separate gnome rpms for whatever window manager I want.
Re:Bloat? (Score:3, Insightful)
The latest Nautilus builds I have tried with the merges from the Red Hat guys have been AWESOME. Still needs some work, but it has come a long way towards becoming everyday usable.
Work with KDE instead of against it - Why all the hate between these 2 groups?
Why do people say this? I don't get it. I don't see Gnome developers trolling KDE, and I don't see KDE developers trolling GNOME. What do I see? Users trolling for their preferred choice. It's not like the KDE guys are making KDE apps not work in GNOME, or vice versa.
Everyone talks about this huge war and how only one can survive
Re:Bloat? (Score:2)
This perceived dislike is simply not there. I'm afraid the media has eaten into your brain with their constant scandalizing, crying scandal even when there is none. Developers across the two camps get along very well (see pictures from expos, etc.).
So if you have this pent-up need for perceiving dislike, perhaps you should go into politics, or co-hosting Jerry Springer, or something instead.
Re:Bloat? (Score:2, Informative)
You know you can use GIMP under KDE, and KDE apps under Gnome, right? It's amazing how many people don't. Yes, you need to install both sets of libraries. No, it isn't the end of the world to do so.
Re:Bloat? (Score:2)
>>>>>>>>>
There are two types of people in the world. Those that hate bloat, and GNOME/KDE users...
Seriously, though, there are more problems with loading GTK+ AND Qt than just bloat (which is still a terrible sin IMO). GNOME and KDE apps don't look the same. Some people (like me) just like everything to be nice and homogeneous. My desk is perfectly neat and all my pens are in the perfect places. My tabletop has marks on it showing exactly where my speakers should go. Second, the two types of apps don't interoperate that well. In Windows, I'm used to embedding graphics in Word documents that are embedded in spreadsheets. In Linux, it just doesn't work that way (yet).
Re:Bloat? (Score:2, Insightful)
gtk is all blocky and ugly looking, while qt is streamlined and smooth. if you're ever gonna convert all the windrones over to linux, it's gotta look pretty...
just my $0.02
Re:Have they dropped Nautalus? (Score:3, Insightful)
First:
Preferences --> Advanced
Then:
Preferences --> Windows & Desktop
and uncheck "Use Nautilus to draw the desktop"
Now, that wasn't so hard, was it? Don't knock a great piece of software (even though I rarely use filemanagers) just because you didn't read the docs.