X GUI

XFree86 4.1.0 Reviewed

Patrick Mullen writes "The Duke of URL has just posted their review of XFree86 4.1.0. The review covers its new features, the fixes since 4.0.3, performance (2D and 3D) and even takes a look at what ground has been made in ATI, NVIDIA, 3dfx, and Matrox's drivers." It compares performance to Windows where applicable, and to 4.0.3. Looks like the speed gains are real. Hope it gets put into sid soon for us apt junkies.
  • by Anonymous Coward
    DO NOT USE THESE UNLESS YOU READ THIS FIRST!!!

    For whatever reason, the Debian packages for installing 4.1.0 have paradoxical dependencies. The packages require certain other packages that, if you attempt to install them, will break your system due to the very NON-failsafe way *.debs are handled. Please keep this in mind, and use one of the several utilities to back up your existing system. (Yes, I know that is rather large and general.) Specifically, back up 'ldconfig' and all the libc-related packages, or just the libs themselves (libc6, glibc, the ++ versions too, as well as everything that depends on them). It would also be wise to run the download-only command for every package, then run the command that prints out the dependencies, so you can properly back the rest up first.

    Now, for the bitching. The reason for this is not any of the code or libs. It is the .deb packages. For some reason (probably overworked and understaffed) the maintainers are not accurately checking the packages. Many have complained about the Catch-22 situation that arises when attempting to install all of the needed libs for upgrading to (or initially getting) the new XFree86 (although this was mainly with 4.0.2, .3 and .4). If you are a maintainer, please remember the whole point of the packages. If they require such a long list of upgrades for a very sub-micro-minor version update, then redo it. Furthermore, allow for proper rollback, and only overwrite ANY portion of the libs AFTER the new one is completely installed AND CONFIGURED!

    Right now, it is not only easier but healthier (both for your system and your mental health) to install binaries or compile. As for you coders, don't just include stuff because it is there and you think it is 'cool'. Otherwise, you force everyone to upgrade from 13.5.97.3.21 to 13.5.97.3.22 for no real reason.
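
    If you do go the .deb route anyway, here is a rough sketch of the kind of backup and dry run described above (the package names are only examples, and dpkg-repack is an optional extra tool rather than part of the stock system):

      # record what is currently installed
      dpkg --get-selections > ~/selections-before-x41.txt
      # fetch the new packages without unpacking them
      apt-get --download-only install xserver-xfree86
      # eyeball the dependency chain before committing
      apt-cache depends xserver-xfree86 xlibs
      # optionally re-create .debs of the critical libs as a fallback
      dpkg-repack libc6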

  • by Anonymous Coward
    Taco be a real man and install X the real way.

    You mysogenic bastard. So all "real" linux users are men? Those "silly little girls" wouldn't be able to figure out the Mighty Command Line? Yes, Be A Man, Taco! Do something that will create pain for yourself instead of waiting for the easy way!

    Sexist shit.

  • I demand you eat my bait. Suck it down boy, and lick that hook clean.
  • by Anonymous Coward on Thursday June 14, 2001 @06:07AM (#151579)
    Lately we've seen a new mature version of KDE (2.1), a new version of GNOME (1.4), a new kernel (2.4), and now XFree86. With all this, I'm getting really tempted to upgrade. The wise will wait for GCC 3.0, though. The new C++ ABI is going to break stuff everywhere, so a clean install will be recommended. It's getting very hard to wait, though.
  • Oops. My bad. That blows, dude.

    Maybe you should try debian? (joke, joke...)

    --
    Forget Napster. Why not really break the law?

  • Ideological principles only go so far. Why not try out the closed-source drivers, or are you afraid they're relaying your every mouse-click back to nVidia's secret underground headquarters?

    --
    Forget Napster. Why not really break the law?

  • Just FYI. From the mailing list, I believe it could happen.
  • And it is the first GCC (maybe the first released compiler) to do so, so I'd say GCC 3.0 has a lot to do with it.
  • Someone mentioned on the dri-devel list that the modules that ship with XFree 4.1 are newer than any that ship with any kernel.
  • UT runs fine for me.
    Once I got it working, that is. DRI gave me a lot of grief, but I finally got 3D working properly and can snipe in CTF-Face all day if I want. And sometimes do, although last-man-standing is my fave at the moment.
    BP6 w/ 2x533 and a G400Max. I never need to create a dedicated login or stop any processes except the obvious ones like xine or oggenc.

    Did the dedicated login back when I had a Cyrix166 trying to run CivCTP at 1280x1024.
  • Before you dismiss my answer, read this comment through.

    Answer: Frame Buffer.

    Why is that the answer?
    The Linux kernel frame buffer currently does not only standard graphics mapping, but also accelerated functions on certain cards, for example Matrox cards.

    At my last check, not all accelerated functions were implemented, and there was a push for a _common_ set of kernel functions so that accelerated features for all video cards would work in a common fashion that, for example, XFree86 could use. This is why XFree86 still works better with a card-specific driver than with the fbdev driver.

    And if you're still not satisfied with XFree86, Qt/Embedded uses the Linux framebuffer. Or you could learn C and/or C++, form a better opinion on what needs to be done, and maybe even help out.
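
    If you want to poke at what the kernel frame buffer layer is actually doing on a given box, a quick look (assuming a framebuffer console is compiled in and the fbset utility is installed):

      cat /proc/fb      # which frame buffer drivers are registered
      ls -l /dev/fb0    # the device node the fbdev driver talks to
      fbset -i          # current mode, timings and video memory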

  • Hope it gets put into sid soon for us apt junkies.

    It may not be in Sid yet, but there are packages out there for Woody.

    deb http://people.debian.org/%7Ebranden/ woody/i386/
    deb-src http://people.debian.org/%7Ebranden/ woody/source/
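
    A minimal way to use them, assuming woody's package names for the 4.x server and client libs (xserver-xfree86 and xlibs):

      # append the two lines above to /etc/apt/sources.list, then:
      apt-get update
      apt-get install xserver-xfree86 xlibs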

  • has anyone ever sat at their PC and said "Oh my, the window redraw rate in this is slowing me down..."?

    Yes. When I first upgraded one of my machines to an S3 accelerated card, the non-accelerated ones seemed unbearably slow. However, even today, an opaque move of a large window can cause the machine to become less responsive. If the X server can help minimize this, so much the better.

  • I don't think so. The length of a jiffy (the timer interrupt) in Linux is 10ms. However, the quantum is 50ms. Most processes are not preempted within the 10ms jiffy.

    If there is no other process to run (with a higher priority), of course the kernel will let the current process run its full timeslice. However, any process that needs attention and has a higher priority than the currently running one preempts it and gets to run (within 10-20ms). The scheduling quantum doesn't matter at this point. If you push the HZ counter up to 1000 (1ms timer interrupt) you can pretty much guarantee that a soft-realtime process that needs attention every few ms will run correctly, assuming you make sure it has a higher priority than normal. You will notice that properly written apps do exactly this.

    That's my point. QNX and BeOS both run in userspace, just like X, and run a good deal faster.

    "run a good deal faster", and what runs a good deal faster? The BeOS UI is certainly much slower on my box these days, but I suspect that has something to do with the AMD-challenged optimizations in the Be kernel. Haven't tried QNX recently. Point is that with properly prioritized processes you can make Linux just as responsive as for example BeOS. The advantage BeOS has is that it does this automatically for you, it's a single user OS, with no security. (I'll take multi-user Linux over it anyday though!)

    they are not transparent to applications

    What do you mean, not transparent?

    Umm, the BeOS messaging system can shunt 90,000 messages per second around the system (on a PII 300). Nothing on UNIX is anywhere *near* that number. Even QNX can't do above 40K.

    Where are you getting these numbers from? And what constitutes sending a message?

    -adnans
  • by Adnans ( 2862 ) on Thursday June 14, 2001 @01:54PM (#151590) Homepage Journal
    1) X runs on UNIX. Unicies are almost always server-oriented systems, and tend to have very short thread quantums. For example, the quantum on Linux 2.4 is 50ms (down from 100+ on 2.2).

    This is simply not true. Both your 2.2 and 2.4 numbers are dead wrong. Linux on x86 has always had a timeslice of 10ms. It has always been 1ms on 64-bit Linux platforms (Alpha). BTW, you can modify the timeslice very easily by editing /usr/src/linux/include/asm/param.h and setting the HZ define at 1000. Yes, the timeslice is mostly dependent on this single define. You will notice that for x86 it is at 100 by default. To get the timeslice you simply divide 1000ms by the HZ value. So for 2.2 and 2.4 you get 1000 / 100 = 10ms. I have a standard patch that's applied to all fresh kernels that puts HZ at 1000 on my boxes. It's kind of ludicrous to have 10ms timeslices on a 1.4GHz Thunderbird *g*. Oh, and if you need smaller timeslices without having to modify your kernel, look up the man page for sched_setscheduler.
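
    A quick sketch of both knobs, assuming the server process is simply named "X" as in a stock XFree86 install (the renice needs root):

      # see what HZ your kernel was built with (100 on stock x86, i.e. a 10ms tick)
      grep HZ /usr/src/linux/include/asm/param.h
      # give the X server a higher priority without rebuilding anything
      renice -10 -p `pidof X`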

    show much improved access times, even when the GUI is in a userspace server (as in BeOS or QNX)

    X is in userspace.

    2) It's badly designed

    The design is about 20 years old, and still going strong. The developers didn't have the benefit of knowing what hardware would be developed over the years. Luckily enough they thought of X extensions. Oh wait, X extensions are bad, right? Don't tell that to the Xv and RENDER extensions that are taking full advantage of my cutting edge NVidia GPU!!

    X uses the much more general (and much slower) UNIX domain sockets

    Local sockets are really fast (and very low latency). For large transfers X uses shared memory anyway. And thanks to XAA the amount of communication is kept to a minimum.

    ...when GUIs like Photon (on QNX) implement all the features of X plus more in less than a meg, one has to fault elements of X's design

    Try TinyX. Your arguments, while true to some extent, are really not convincing enough to call X "badly designed". You are using outdated facts to draw conclusions. X is here to stay. Whining about it is not going to make it less useful. You could spend your time better by helping out Be and BeOS, be-fan. A 3ms timeslice doesn't do me any good if it doesn't boot on my box. Too bad the juicy parts are closed source, no??!

    Oh, I finally decided to put my BeBox in long term storage. Perhaps in 20 years it will fetch a nice price. I'm betting it'll boot up without too much trouble, assuming I can still find a CMOS battery that fits.

    -adnans
  • I'm amazed that there's just one obscure comment about support for antialiased fonts with this release! Better support for antialiasing would be a huge reason to upgrade in itself.

    Anyone have a summary of how easy it is to use this feature?
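
    One quick sanity check, since AA in 4.x goes through the RENDER extension and Xft as far as I know:

      # the extension list should include RENDER
      xdpyinfo | grep -i render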

  • Get the more or less official debian sid packages here:

    http://people.debian.org/~branden/woody/
  • Sorry for the off-topic post, but does anyone know what's the deal with Debian releases these days? When do you think woody will be released?

    Also, can anyone share their experience running woody? I'm currently using potato on my firewall. Is it safe to upgrade? How about using it on a workstation?

    tia
    ___
  • Are you accusing this bastard of resembling a Japanese soup, or do you mean "misogynistic?"


    --
  • Yeah, but we may never get hardware OpenGL in BeOS.
  • Neither Quake 1 nor Quake 2 used the native OpenGL junk that now comes with X, because it didn't exist yet.

    Quake3 does. Quake 3 runs GREAT on my Linux box. In fact, it's pretty much the whole reason why I bought this box. :) GF2 GTS card with 32 megs, USB mouse, 1.33GHz Athlon, 266MHz DDR SDRAM; it runs Quake 3 at about 100fps. I had to turn off turbo mode in my BIOS AGP settings and change some flags in my nvdriver to make it stable, which cost me about 20fps. But hey, 100fps is just fine for me. I can start up Quake 3 whenever and it's smooth as win2k. And I typically have Window Maker, plus 6 retarded dock apps, mozilla, licq, a few wterms, xchat, xqf, maybe mysql_navigator, whatever, and Quake 3 runs fine. Running X 4.0.3 on a Debian sid box.

    I figure I'll just wait until X 4.1.0 gets put into sid (unstable). Kinda funny how unstable doesn't mean unstable. :) The big packages like X always go through decent testing beforehand. I think someone posted the pre-release packages above. Not that 4.1.0 is gonna help my nvidia card out any. But I haven't really had a complaint about the 4.x tree of XFree86 yet.

    And no, you obviously don't need such a beastly box as my main game machine to run Quake 3; my P3-600 with its v3 runs it respectably (same distro setup). I'm really looking forward to seeing the XFree86 4.1.0 v3 improvements with this box. :) I currently have to turn off some of the bells and whistles, running it at 800x600 in 16 bit color with vertex lighting, to get what I feel is a playable frame rate out of it.

    I'm kinda picky when it comes to frame rate, though. But not nearly as bad as some people I know. :)

    ---

  • If that isn't your problem

    It's not, unfortunately - I've got the same problem with my Diamond Monster Fusion AGP (as does the reporter of the bug in DRI's sourceforge buglist, though he's got the PCI version). The Banshee Bug has been in the DRI since at least early April's CVS.

    I'm hoping they get it fixed soon - I've been itching for XVideo support, which HAS been fixed (but which I can't use because of the banshee bug corrupting everything...). The bug was assigned a while back now - my guess is it'll be fixed soon (though the fix may be delayed because the DRI team is evidently busy moving to Mesa 3.5...)

    Or, spend the 30$ and get a voodoo3...

    If I can manage to find a Voodoo3 AGP locally for only about $30, I may just do that...


    ---
  • You just did; you have filled in a bug report, haven't you?

    For what it's worth, this bug has been in DRI's sourceforge bug database for about a month and a half now (I went to report it myself a couple of weeks ago and found someone else already had, so I just added what little extra info I had about it to the discussion...)

    Oh, and if you go to look it up, I'm not the one who reported it as "WTF ARE YOU GUYS THINKING?!?!", so no flames, please...the bug report can be found here [sourceforge.net].
    ---

  • Moving certain critical _parts_ of XFree into the kernel might be a win, like the graphics drivers. After all, why the heck should a user space application like XFree be doing PCI management?

    The X Window system is not a Linux-only application. For portability's sake alone, putting X components in the kernel is "A Bad Idea". Sure, there's os-specific code in X, but you shouldn't need a particular version of a kernel to run a particular version of X. Don't even get me started about DRI. ARGH!!!!

    I agree with the rest of your post though... I personally think we need LESS stuff in the kernel. Anyone who wants to write device drivers should be made to chant over and over: "If it doesn't need to be put in the kernel, don't put it in the kernel!"
  • Well, there was that time in between when I sold the cheap piece of junk S3 PCI card that came with my (then) new Pentium 133 to my dad, who had just upgraded from a 386 to a P133. I hadn't yet received the Matrox card I ordered, so I was forced to use the ISA card from the old 386, and realized that it wasn't the 386 processor that was slowing me down back then after all...

  • I recently upgraded to 4.1.0, and now I get "snow" on my screen whenever I scroll. It goes away when I stop scrolling. 4.0.3 was fine. Does anyone else have this problem?

  • Hope it gets put into sid soon for us apt junkies.

    Why wait? Do it like the rest of the world! It's easy and fun! Like so:

    > sh Xinstall.sh

    Then you don't have to be at the mercy of package maintainers. You'll have control over your own system, and in no time you'll be livin' la vida Libertarian.

    ;-)

    -B

  • once you try debian, you don't wanna go back to any other distro

    I'd rather use my own version of Linux. And I have my own method of uncrufting my system, and it's worked for me for the last six years or so. It also works where apt won't: when the software doesn't come in a package. If I want to install it, I can, and I'm not bound by someone else's rules about what a package is and isn't, what it has in it, etc. In fact, that's what I really like about Linux: I get to make the decisions. Same as Taco and yourself -- it's all good. And if Taco was willing, he could try the new version of X. That was my only (tongue-in-cheek) point.

    Anyway, millions of Linux users can't all be wrong. Whether it's Debian or Storm or even Mandrake, it's still Linux.

    -B

  • In my experience XF86 v4.0.x does not like Voodoo cards. I've tried it with a Diamond Monster Fusion and a 3dfx Voodoo 4 and both had stability issues and leaked memory all over the place. It would crash when playing video, loading a large Star Office file, reading /. in threaded mode, etc. Not only that, but it was overwriting virtual console buffers: Corrupted character maps, inverted video when I hit Ctrl+Alt+F1. Touching the keyboard when in console mode also tended to crash X.

    I finally gave up and bought a GeForce 2.

    Maybe the 4.1 driver is better. Good luck!
  • Thanks for the tip. If I resurrect that config for a spare box I'll add an exhaust fan.
  • I made the silly mistake of assuming that clueful people reading my comment would understand that I was referring to 4.1.

    I just installed 4.1 on my G400 (yes, yes... but it's close, and I'm curious if you had the same problem)...

    I had never been able to enable antialiasing on both heads; the primary looked okay, but the second was screwed up. Now, with 4.1, both are "sort of there"... large rectangular blocks of text are simply missing, but if I highlight them in Konqueror, they reappear, and stay there when I remove the highlighting. A Control-A in each web page seems to fix it.

    It's a shame - it's just so darn *pretty* to use these antialiased fonts, but I just can't get aa (or GL) to work if I'm using dual-heads (GL *does* work just fine if I drop it to using just a single head, but as soon as I reference that second head in XF86Config, GL dies).

    Bah... I wish I had the time to scroll slowly through logfiles... Grrrr... I didn't have time to fix it then, and I won't have time now, dammit.

    --
    Evan

  • by vs ( 21446 )
    *sigh* Tdfx is br0ken, I have to stay with 4.0.3 for OpenGL on FreeBSD.
  • by vs ( 21446 )
    Then you should stick with 4.0.3. It works in there, but remember you have to build X from source and throw the two Glide lines into FreeBSD.cf.
  • by vs ( 21446 )
    The Voodoo3/AGP works fine in 4.0.3 (e.g. glclock, gltron).
  • There are a couple major problems with letting X do all the graphics driver management, instead of placing minimal graphics support in the kernel.

    The first issue is functionality. There is no standard way to write a graphics program under Linux currently that doesn't have to have X under it. This is a big problem for me and other graphics-happy programmers. It turns out that X11 has a terrible interface for high speed graphics. Yes, there are a couple drivers here and there that can use the FBDev, but that's not accelerated. I want FAST 2D alpha BLTs under Linux. And I don't want to have X in my way.

    The other problem is stability. I don't care how good X is, it still crashes, and when it does it takes the rest of the system down with it. Sure, if you have another computer handy you can telnet in and fix the problem, but that's hardly something to expect your mom to do. The basic problem is that a kernel should manage resources. That's its only job. And the video card is a pretty big resource for a desktop computer. The idea that the kernel has no way of returning a video card to a usable state when a program tanks and leaves you in graphics mode is just dumb.

    Now, don't get me wrong. I don't want to see X11 linked into the kernel. That's just as stupid as having no drivers in the kernel. It is totally possible to write device drivers where most of the device-specific code is in user space, and only a small portion of the device driver runs in the kernel - just enough to make it SAFE. This is the direction that the GGI is/was heading. I say "was" because I don't know if they are still with us. It's a real shame; they were doing some awesome work. I know I probably lost points with a bunch of readers by mentioning the GGI, but please think about what I have said, you might find that it makes sense.
  • You are correct that FBDev consists of graphics drivers in the kernel. However, there is no accelerated interface to be found for FBDev. It's as slow as a snail, unless you happen to have a Matrox card. When I say fast 2D alpha BLTs, I mean hardware assisted, not software read/modify/write style blits. I don't think there are any interfaces in Linux that can do that (other than X in some cases). And your assertion that more cards will be supported in the future might not be true. I would LOVE for that to be the case, but Linus really doesn't seem to like the idea, and as long as everyone views X as good enough, there won't be any pressure to add support.

    You're also right about the GGI. What I said was actually about the KGI, which is "almost" a different project now. I use the GGI in about half my graphics code; the other half uses the SDL. I haven't picked a favorite yet though. :)

    Thanks for the good response.
  • Ask Slashdot: Where can I find pictures of huge, gaping assholes?

    http://cmdrtaco.net/rob.shtml
  • by gmhowell ( 26755 ) <gmhowell@gmail.com> on Thursday June 14, 2001 @08:50AM (#151613) Homepage Journal
    Big dittos to your post. With a caveat.

    I managed to hose the rpm database on my RH machine about a year ago. And in trying to upgrade stuff, I finally got it so that gcc doesn't work, glibc is screwed up, etc, etc. Rather than rebuild everything, I'm going to take the chance to upgrade and switch distros (to Progeny, for those who need to know).

    I was thinking along your lines (wait for GCC 3.0) but I'd rather not. I backed up all of my tarballs, and I'll wipe everything except /usr/local and /home. Install new distro. Now, when GCC 3.0 rolls around, I might have to reinstall again, but I've backed up the tarballs. Just reinstall them.

    There is another trick: with each subsequent release, there are more and more apps available. Half of the tarballs of stuff I have installed on my RH 6.2 system are included as packages on Progeny (Storm/Corel/Debian/etc)

  • Try using Java apps with a gui. But that's not what you meant, right?

    ----------------------------------------------
  • I found the problem just the other day. I did rebuild tdfx.o from the CVS too, so it is still broken. X 4.0.3 does the same thing to me. I just want to get KDE 2.1.1. I have KDE 2.1 and X 4.0.2 now, on a hacked-up MDK 7.2 with glib2.2. Oh well, I will just wait, or get a new card. BTW, what is the best 3D card for Linux right now?
  • Two minor points

    (1) Most install scripts seem to default to /usr/local these days, while Debian packages invariably go under /usr, so there's reason to go for purity either way. X is a big exception to this, which is part of the reason why I wait for the .debs; it interacts with so much stuff that is packaged that I don't want to risk dpkg getting confused and exploding. That may be an annoyance, but since I like packages more than X, I consider it a bit of (forceful) friendly advice.

    (2) Rolling your own packages is simple enough.

    That said, I still prefer the BSD ports system.

  • First, I've gotta say that this was a hilarious, if excessively profane, troll. I'm always amazed at the amount of work some people will put into this kind of stuff.

    The 'paradoxical dependencies' is right on because in several instances, as an example: libstdc++#### depends on libc6####, yet libc6#### depends on libstdc++####

    Not on my system. apt-cache showpkg reveals that libstdc++ is dependent on libc6, but the reverse is not true. I'm not entirely sure what that has to do with XFree, anyway, since AFAICT there aren't any dependencies between X and libstdc++. I'd love to show you the proof, but it's about 5100 lines long, and I don't think the lameness filter will let that pass.

    Unless you have tried to upgrade the XFree86 >4 on your existing debian system and had none of these problems then fuck off.

    What's ">4"? If you mean 4.1, then no, I haven't tried, but I am tracking unstable, which is currently at 4.0.3-4, and the last time I had a problem was, IIRC, during my initial move to 4.0, when there was a broken dependency deep in the system, though I can't remember if it was X-related (whoops, run-on sentence there). In any case, it was promptly fixed, though I'd worked around it by then, anyway. (Come to think of it, I may have been in worse shape than you, screwed libs and all, but I, having used Debian for a whole two weeks, managed to rebuild everything by stripping the system out and replacing everything by hand in one night.)

    And yes, many others have had the same problems whether using the apt-get route, dselect, etc.

    True, and usually they don't know what the fsck they're doing, but have decided that since the package system won't conform to their declued view of the world it must be "broken", and it's their job to inform everyone of this fact. Some of them are just trolls, too.

  • > The wise will wait for GCC 3.0

    Actually the wise will wait for gcc 3.0.1 or 3.0.2.

    stein

    Me, I'm still waiting for Woody to come out so I can upgrade from Sid to Potato.
  • you probably mean slink not sid (sid is unstable)

    Well, that would certainly make more sense. :)

  • Sorry for the 'hot' pursuit. Thought you had problems running the G450 at all / were thinking about buying one.

    But asking detailed questions makes giving insightful answers easier. Let's leave it at that...
  • Yes, the fact that you are clueless... if you want support (I'm looking at dual head, dual refresh rates, dual resolutions under 4.0.3, so there are drivers) go to www.matrox.com or especially:
    http://www.matrox.com/mga/media_center/press_rel/2001/linux_powerdesk.cfm

    Install a few files and run this PowerDesk for Linux and your XFree86 works like a charm. (PS: you have to cut and paste this link, you can't click.)

  • Your entire post makes no sense. You have yet to explain what exactly goes wrong, and why you can't upgrade the standard libraries. Any upgrades made to your standard libraries would be the same upgrades that get done anyway.

    The only upgrades this X install will do are packages built directly from the X source. Everything else is pulled from the woody dist.

    Yes, that's right, it's not going to randomly upgrade packages because it has "paradoxical dependencies"; it's going to upgrade everything that would have been upgraded if you ran "apt-get upgrade" anyway.

    Please stop spreading FUD, or at least give a reason why these things are bad.

    One more point: maintainers don't control which portion of the libs are installed in the new one and whatnot . . . I don't know how dpkg works internally, but basically it wipes out the old package and writes over *ALL* the old files with files from the new package. Anything else would be nonsensical. To get your old stuff back, just re-install the old package. This will wipe out the new package and install the old one.
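
    In other words, rolling back is just a matter of feeding dpkg the old .deb again; a sketch (the exact cached filename is illustrative):

      # apt keeps previously downloaded packages around
      ls /var/cache/apt/archives/ | grep xserver
      dpkg -i /var/cache/apt/archives/xserver-xfree86_4.0.3-4_i386.deb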

    Journey
  • khttpd, like almost everything in the kernel, is a module. It's under 'experimental', and, as such, is not normally compiled, let alone loaded. Linus himself, undisputed king of "No! Too bloaty! Take those four instructions out of my kernel!", has said that it's good as a technology demonstrator and benchmark whore. So stop knocking khttpd! It's not a production webserver or anything like that!

    And besides, having optional khttpd is *not* like having the GUI in-kernel with windows. *You* try unloading the GUI from your kernel32.dll.

    -grendel drago
  • Compile time, nothing! What about insmod and rmmod?

    Any module *cannot* bloat the kernel. Unless you count higher make-modules time and longer tarball fetches.

    -grendel drago
  • /sbin/rmmod bloatydriver

    There's your choice. Schmuck.

    -grendel drago
  • I currently run a SuSE 6.3 system with a 2.2.14 patched kernel (from standard 6.3 2.2.13 kernel). I have whatever version of X was included (3 something, probably), and use KDE 1.3.

    I have thought about going to X 4.x and KDE 2.x - but I am still shaky about breaking my system (it was a bitch doing the kernel patch - just to get the ZIP drive working, only then breaking the sound, having to install and configure ALSA, etc) - and everything runs OK on it right now. I just would like to get some of the extra features.

    Has anybody done anything like this, and what were your experiences? How difficult would such a major upgrade be? Would I have to patch the kernel again (one thing I have wondered is whether I could just grab the latest patch for the kernel and apply it against my source, or whether you have to do the patches incrementally - or whether I would just have to grab the whole source, etc)?
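
    From what I gather, the stock patches are incremental: patch-2.2.N applies against a plain 2.2.(N-1) tree, so they go on in order. Something like the following, with version numbers and paths purely illustrative:

      cd /usr/src/linux
      gzip -dc ~/patch-2.2.15.gz | patch -p1
      gzip -dc ~/patch-2.2.16.gz | patch -p1
      make menuconfig && make dep bzImage modules modules_install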

    Or, should I do what I am thinking of doing - scrap it all, get a reasonably late distro, install that and move my data over?

    Any recommendations?

    Worldcom [worldcom.com] - Generation Duh!
  • HW accelerated 3D in Linux and the BSDs is handled by either the Direct Rendering Infrastructure or NVIDIA's proprietary equivalent if you're using their drivers. The textures, polygon info, etc. are NOT handled by the X protocol. If you're going to slam Linux then at least do it CLUEFULLY. There are bad things that can be credibly said about DRM, but then you would have to actually KNOW something, now wouldn't you?
  • Nice to see the Radeon support is upgraded, and that the chip is recommended, since I have the All-In-Wonder Pro Radeon. Now I just need to get Gatos working with it.
  • Yeah, 80X50 text, and it sped up scrolling wonderfully.
  • by wiredog ( 43288 ) on Thursday June 14, 2001 @06:01AM (#151630) Journal
    Yes. In about 91 or so. That's when I decided to upgrade from VGA to an accelerated SVGA with, IIRC, 512K of on board memory AND a WordPerfect driver.
  • Nice to see the Radeon support is upgraded, and that the chip is recommended, since I have the All-In-Wonder Pro Radeon.

    Any chance it works with an AMD 761 northbridge? I had a 32MB DDR Radeon working fine on a VA-503+ with XF86 4.0.2, but my attempts at getting the same card working on an M7MIA have been somewhat less than completely successful (read: hasn't worked at all). Others have said it'll work if you shut off acceleration, but what's the point of doing that? I might as well yank out the Radeon and put my Xpert 98 back in if I'm going to do that.

  • All joking aside, the 2.4 series has kernel hooks for XFree86 DRI. At this point it is pretty new and only a handful of cards are supported, but it looks promising for getting some good fast 3D graphics support under Linux. And don't forget, NVIDIA's 3D drivers for linux require a kernel module to function, probably for speed reasons.

    While it is true that graphics is not the job of the kernel, especially when running a server (WinNT's stability issues are a shining example of why this is bad), it is a nice option to have for Linux systems used as graphics workstations or for gaming, applications where you want to be able to squeeze as much performance as possible out of your video adapter. And for server class systems, where you don't need or want advanced graphics support in your kernel, it is a simple matter not to include those extensions at compile time.

  • X isn't really that big (see the recent Slashdot article). It's mainly the widget toolkits and slow desktop environments *cough*KDE/GNOME*cough*.

    X itself is quite small and fast (something like 1-2 MB, IIRC).
    ------

  • But then again, Quake 3 isn't everything.

    I want to play those other nice games as well, and I don't really feel that Loki's efforts are worth the money anyway (because they port games that I played a year ago)...

    And I don't consider a P3-600 a low-end machine... :)

    Frame rate isn't the thing I'm really concerned about; I don't feel that I get a lower frame rate in Q1 on Linux than in Windows anyway. It's just the fact that the games are mostly unsupported AND have a tendency to crash far more often.
  • I agree, X sucks at gaming, it's the truth. I had the same problem with QUAKE, dammit, both Quake 1 and 2 (yes, I know that's not X).

    On several machines. Just please, let go of your dreams of Linux as a gaming machine. I say dual-boot if you need Linux for something other than your server needs.
  • Actually, KDE being slow has less to do with X and more to do with linking, loading, and C++ virtual function use.

    Check out:
    http://www.suse.de/~bastian/Export/linking.txt

    Besides, moving XFree into the kernel would be of dubious value at best. Moving certain critical _parts_ of XFree into the kernel might be a win, like the graphics drivers. After all, why the heck should a user space application like XFree be doing PCI management?
  • Try running XF4.0.1 on LinuxPPC with the Rage Pro driver that ships in the stable build.

    It's a pig.

    I imagine with the driver work and the inclusion of DRI, that 4.1.0 will be a massive improvement for Linux/PPC users.
  • Obviously not (in the past 5 years or so), because in that case most people turn off features until everything runs fast enough. But I bet even nowadays there are people sitting in front of their PC saying: "Wow, look at all the stuff I can do now and the cool GUI; two years ago I wouldn't have thought it possible without enduring agonizing response times."

    Yup, I admit it, I think transparent window moves look nice. I would live without them if they were slow or jerky, but now that I can have 'em I switch those features on. So one could say that most of the speed gain goes to (maybe unnecessary) eye candy, but I don't think that is a bad thing.
  • I know you're joking, but I have to point out some flaws in your information. First, DirectX isn't in the kernel. Just like most of Windows, it is contained in a userspace library. The only thing in the kernel is a channel through the HAL (on Win2K) that allows the DirectX libraries access to the hardware.
  • 1) X runs on UNIX. Unices are almost always server-oriented systems, and tend to have very short thread quantums. For example, the quantum on Linux 2.4 is 50ms (down from 100+ on 2.2). That means that if a process sends out a request near the beginning of its timeslice, it will be a minimum of 50ms before X is scheduled again. This is why renicing X has such a good effect, because it allows X to be scheduled ahead of other processes. OSes that use shorter timeslices (10ms on NT, 3ms on BeOS, 4ms on QNX) show much improved access times, even when the GUI is in a userspace server (as in BeOS or QNX).
    2) It's badly designed. To tell the truth, I can't rationalize X's design. Whereas other OSes like BeOS use special-purpose messaging channels to communicate with the window server, X uses the much more general (and much slower) UNIX domain sockets. Secondly, when GUIs like Photon (on QNX) implement all the features of X plus more in less than a meg, one has to fault elements of X's design, elements that have nothing to do with either the versatility of transparent networking, nor the stability of a usermode server.
  • Actually, even games are moving farther and farther from the hardware. The main reason is that today's hardware is so complex, they have to hide behind libraries and drivers (OpenGL, DirectX, OpenAL, etc) anyway, so properly protecting everything isn't a problem at all. For example, there is much talk that new graphics cards will not export straight framebuffers without a performance hit, since it messes with their internal workings. Again, it's not that X is in userspace that's the problem (sometimes I don't think some /. readers understand the actual differences between userspace and kernel space...) but that it is poorly designed.
  • To tell the truth, I don't want to pay for stuff I don't use. Where's the famed UNIX "choice?"
  • Oh god. He pointed out X sucked! Blasphemer! Keep it coming, baby, got plenty of karma to spare ;)
  • Yuck. Factual errors all over the place...

    This is simply not true. Both your 2.2 and 2.4 numbers are dead wrong.
    >>>>>>>>>>>..
    I don't think so. The length of a jiffy (the timer interrupt) in Linux is 10ms. However, the quantum is 50ms. Most processes are not preempted within the 10ms jiffy.

    X is in userspace.
    >>>>
    That's my point. QNX and BeOS both run in userspace, just like X, and run a good deal faster.
    The design is about 20 years old, and still going strong.
    >>>>
    It's still alive... But so is Strom Thurmond.

    Luckily enough they thought of X extensions. Oh wait, X extensions are bad, right?
    >>>>>>>>>>>>
    Yes, extensions are by definition bad when they are used to implement core functionality. OpenGL extensions suck for the same reason X extensions suck: they are not transparent to applications. All apps should be AA enabled automatically (user-configurable, of course!) The fact they are not is an inherent weakness in the extension mechanism. Sorry to say this, but MS has the right idea. Take a look at DirectX for an API that allows old apps to automatically take advantage of new advances.

    Local sockets are really fast
    >>>>>>>>>>>
    Umm, the BeOS messaging system can shunt 90,000 messages per second around the system (on a PII 300). Nothing on UNIX is anywhere *near* that number. Even QNX can't do above 40K.

    Try TinyX. Your arguments, while true to some extent, are really not convincing enough to call X "badly designed".
    >>>>>>>>>
    Does TinyX have all the features of QNX Photon? Hell, XFree86 doesn't have all the features of Photon!

    As for BeOS, your comments are irrelevant. I mentioned it for technological comparison, not political debate. Though I too support OSS BeOS, it is not an issue in this thread. Not everything I post is a Linux-sux-BeOS-rocks rant.
    If there is no other process to run (with a higher priority), of course the kernel will let the current process run its full timeslice. However, any process that needs attention and has a higher priority than the currently running one preempts it and gets to run (within 10-20ms).
    >>>>>>>
    True. However, that requires processes to have a higher priority. However, on a normal system, Linux doesn't automatically manage these priorities. While "goodness" does have some effect, the Linux scheduler doesn't use tricks like the Windows scheduler does to make sure that GUI apps get fast response times. If you look at the case study of Win2K in Tanenbaum's new book, you'll see that Win2K does all sorts of priority mucking to make sure that GUI apps have a higher ability to preempt than other apps. While this might be bad for a server, or a generalized system, it certainly does wonders for GUI response.

    "run a good deal faster", and what runs a good deal faster? The BeOS UI is certainly much slower on my box these days, but I suspect that has something to do with the AMD-challenged optimizations in the Be kernel.
    >>>>>>>.
    BeOS doesn't properly support the mtrrs in AMD chips.

    Haven't tried QNX recently.
    >>>>>>
    You should. Not only does it have the nicest fonts I've ever seen, but it is fast as all hell.

    Point is that with properly prioritized processes you can make Linux just as responsive as for example BeOS.
    >>>>>>>.
    On a desktop or workstation OS, this shouldn't be necessary. The OS should manage that. Besides, I've been running X at -20 as long as I can remember, and while it improves the speed, it still doesn't work as well as Win2K or BeOS.

    The advantage BeOS has is that it does this automatically for you, it's a single user OS, with no security. (I'll take multi-user Linux over it anyday though!)
    >>>>>>>>
    Why? On a workstation, what's the point?
    they are not transparent to applications

    What do you mean, not transparent?
    >>>>>>>>
    For example, in DirectX, an app uses whatever features the API has available. If the features are not implemented in hardware, they are emulated. (Or, in the case of some 3D features, just not implemented. In these cases, however, you get fine-grained information about the exact capabilities of hardware, so it is quite easy to enable turning on and turning off of features.) In either case, when hardware that supports that feature becomes available, the software automatically takes advantage of it, no recompilation, no patches, no nothing. Thus, if X had this design, all apps would automatically take advantage of whatever features were available. AA text could be done without requiring application support. Different rendering back-ends could be put in without messing with apps. In general, lots of stuff that X should do automagically, but can't. While this is more of a problem for OpenGL (where features are introduced monthly) X isn't immune to it, as evidenced by the silliness of Render. While DirectX in reality has deviated somewhat from the ideal of feature transparency, it still does a hell of a better job than any extension mechanisms. This is evidenced by the fact that developers sternly told MS not to make DirectX extendible.

    Where are you getting these numbers from? And what constitutes sending a message?
    >>>>>>
    A simple program shunting data from one app to another. The messages were various sizes, and used whatever IPC mechanism was native on the OS (ports in BeOS, Send/Receive in QNX). With 32 byte messages (the size of an X packet) BeOS hit 90,000 messages per second on a PII. QNX was in the 40s. With big messages (10K and up) BeOS could move data at memcpy speeds approaching 400MB/sec.

  • you probably mean slink not sid (sid is unstable) :)
  • Don't forget the XVideo/Render/RandR code that the developers are working on. New releases often contain substantial upgrades for video drivers which can improve both speed and stability.

  • He's also free to use the XFree86 4.1.0 packages that are currently being tested. They are available through the package maintainer's page @ http://people.debian.org/~branden [debian.org]. They may cause problems, or they may not.
  • You can obtain the bleeding edge kernel modules from the ati project at linuxvideo.org.
    http://www.linuxvideo.org/gatos/
  • by nlabadie ( 64769 ) on Thursday June 14, 2001 @09:21AM (#151651)
    One of the major problems I had running XFree86 on a laptop was having to switch between a port replicator (aka docking station) and using the laptop's display. For those of you that don't know, a port replicator lets you use a standard monitor, keyboard, mouse, etc. Switching between various XF86Config files got to be a royal pain in the arse.

    So... those with laptops give this option a try in XF86Config:
    Option "UseBIOSDisplay"

    It lets you switch between monitors without changing the config file. Haven't had a problem yet.
  • The kernel closed-source drivers will not work with FreeBSD. They are Linux-only. :(
  • Thanks. I checked the websites out and noticed a beta version for download. Check out the news section here: http://nvidia.netexplorer.org/
  • I am staying on XFree86 v3.3.6 + UTAH until either nVidia opens their drivers or I get another card. Unless nVidia changes their stance I will NOT be purchasing an nVidia card in my next machine.

    BTW, I run FreeBSD so not even the closed source drivers are an option. :(

    Does anyone know if the XFree86 drivers support minimal hardware-accelerated 3D graphics on a TNT2 Ultra? Maybe close to the UTAH drivers?
  • I upgraded to 4.0.3 from the official binary distribution, thus breaking all rpm dependencies in my whole system, showing how little I know about rpm.

    That was the day before they released 4.1.0.
  • I was under the impression that none of the BSDs currently had DRI support. Please tell me I'm wrong, because I'd love to use it on my spare machine with a Voodoo 3.

    B1ood

  • I had a 3dfx Voodoo 3 3000 a while back, and had pretty much the same problems you mentioned with blender. Not only blender, but also AC3D and - here's the kicker - the problems were present on the Windows versions as well.

    That in mind, it might be a card issue more than a driver issue.

    As far as new cards, I have a Nvidia Geforce MX that's fully supported. The drivers are binary, and once I got the newest ones from NVidia, I had no problems with them.

    If you want open-source only drivers, I believe that ATI's are - though I don't know how well they're supported.

  • > once you try debian, you don't wanna go back to any other distro

    Little jump-in comments like this are worthless, but... I tried Debian, used it for 6 months or so, and then gave it the boot.
  • I have great fun with Debian - 56k dial-up at home, so for larger updates (X, GNOME, KDE...) I grab packages at work, burn them, and take them home with me. Work == NT, so I have to spend time physically parsing the Debian unstable tree, following new dependencies as they pop up and then backtracking to where I was originally. I'm invariably missing something by the time I get home. This makes the times I can use apt even sweeter... Some web-based recursive dependency checker would be ideal - "Want X 4.1.0? Then you'll need foo and bar. And bar requires cheech, and cheech requires version 2.6 of chong." On second thought, that takes all the fun out of it...
  • by PianoMan8 ( 99085 ) on Thursday June 14, 2001 @07:25AM (#151666) Homepage Journal
    Do you have the Creative Banshee? If so, that's your problem. Creative used underspec'd RAM on some of their cards. I talked with Daryll Strouse (sp?) about this at ALS last year (I happen to have these cards) and he was willing to put in an option "SlowRam" to use less aggressive timings on these cards, which would work. (The official TDFX drivers, may they rest in peace, had a similar patch applied, but it slowed down access for ALL Banshees with SGRAM.)

    Solution: contact me (clemej@pop3free.comCANNEDMEAT; my slashdot info is very outdated), and I'll send you the patch from the X3 tree; you can find a way to apply it to X4 and then compile your own X. My attempts to make a SlowRam patch seemed straightforward enough, but never worked. Or, spend the $30 and get a Voodoo3: better performance, much more stable. I'm running X 4.1.0 now on my Voodoo3 2000 PCI with DRI, and it runs great. Beats my old Creative AGP Banshee to a bloody pulp.

    Staying with a buggy banshee means you're gonna have to recompile. A lot.

    If that isn't your problem, well, then the best I can say is, IWFM.

    pm.


    - --
  • Rob is no hacker.
    Download source, compile yourself.
    Are all so lazy?

    --

  • No, I started in the middle, and then read the post letter by letter first to the right, and then to the left. That way you get both sides of the story, if you see what I mean.

    ---
  • Some of us have to use the FB Xserver. Since framebuffer support is already less than fast, any speed improvements are quite welcome.
  • by stilwebm ( 129567 ) on Thursday June 14, 2001 @06:45AM (#151676)
    That is fairly common when moving to new X servers on new hardware. It is usually the result of timing issues, AFAIK. This type of bug is usually eliminated as more user feedback and testing help the developers with optimal timing and acceleration code. Speaking of acceleration code, sometimes turning acceleration off eliminates the snow at the cost of speed. In any case, 4.1.1 will probably be much less snowy for you.
  • I'm on an AMD board too. UT isn't rock solid here either, but I thought that was just me playing around with a few things, as it was fine until I started piling on the clever stuff. For some reason it's a lot more stable if I run it at my X desktop resolution, which shouldn't make any difference. As you've got a custom UT user, try setting its normal X desktop to the same resolution you want for UT.

    I agree that Quake 3 Arena is sweet on Linux; even if the benchmarks say it's the same as Windows, subjectively it felt much nicer for some reason.

    The main thing that impressed me with UT, though, was that compressed textures work, which makes a huge difference. Also speed generally (because the textures fit in memory better with my 32MB GeForce) is a vast improvement over the stutter factory that is running UT above 640 with medium-res textures under Windows.
  • Excluding graphic work/rendering and games, obviously, has anyone ever sat at their PC and said "Oh my, the window redraw rate in this is slowing me down..."?

    This is something that's measured in hundredths of seconds, for Pete's sake...

  • Yes. In about 91 or so...
    That's what I thought. I remember having some bad days with an Apple ][, but since then...
  • If you were having problems with past versions, this [upgrade] may be your best bet. Still, if you're at XFree86 3.3.6 and you're using an obscure graphics card, I'd suggest doing a little checking to see if your video card is supported. Some cards are still supported best in version 3.3.x. This is, quite obviously, one of the biggest concerns of most users. Although, as far as I know, no cards' support was broken in the upgrade from 4.0.3 to 4.1.0.

    This is the major point for me. Especially since I sometimes throw together frankenboxen with a wide variety of obscure parts. [I obviously need the education]

    Still looks very promising. The proverbial step in the right direction.

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • Excluding graphic work/rendering and games, obviously, has anyone ever sat at their PC and said "Oh my, the window redraw rate in this is slowing me down..."?

    Yes. I once tried to run an animated double-buffered Swing-based Java app in a maximized window remotely over a 56 kbps modem... The window redraw rate was too slow for a usable slide show, let alone smooth animation.

  • I am a Voodoo3 owner. The new XFree86 works faster and is more stable. 4.0.3 was bad - no Xv support, troubles with the console and framebuffer. However, OpenGL in current CVS (and 4.1.0) is still broken. In all tests or benchmarks you just write about Quake or UT, never about less popular stuff. There are serious problems with Blender, for example (but it works much better than before!). When I use the software renderer, everything is OK, but slow. When I turn on hardware accelerated 3D, some objects are not drawn, blink, etc... you won't notice it in pure OpenGL apps, but try gtkglarea, SDL/paragui or anything else with 2D widgets...
    Should I change video card? But what should I choose? NVidia has binary-only drivers, the tdfx drivers are buggy... is there any other way?
  • I'd be happy if someone got it to compile for Solaris and packaged it up. It seems that a key part (an include file, if I remember correctly) prevents it from compiling the server itself. It's under the Sun stuff. I don't have the information handy offhand, but I could reproduce it with differing versions of XFree86 as well as the X.org version(s). I know it's possible... I just want Xinerama and Render support for Solaris! Is that so wrong? :)
  • So I downloaded the 4.1.0 source, untarred it, did the "make World" bit... And NO ERRORS! It just worked. I had to use Solaris' "make" as the GNU version bombed with "illegal option -w", but whatever little glitch was there has been fixed!

    I can't say how stoked I am to try the render, xinerama and truetype support finally, under Solaris...

    Props to the XFree team!

  • 1) First, this is great news for Radeon users. IIRC, though, when I tried the Radeon DRI from Sourceforge, I had to recompile my kernel without its DRI modules and use their modified X's. I already know that I'm going to have to download the source tarball to use the DRI in XFree86 4.1.0. Do I have to recompile my kernel too? (Not that it's hard - just wondering.)

    2) There was a bug in DRI that would freeze up the entire system. I could always reproduce it by playing a few minutes of Quake III. Does anybody know the status of this?

    3) (Stupid luser question) Where can I find info on how to do AA fonts on XFree86?

    Thanks in advance for any good answers.
  • by grammar fascist ( 239789 ) on Thursday June 14, 2001 @01:50PM (#151711) Homepage
    SELF-REPLY ALERT!

    Do I have to recompile my kernel too?

    No, idiot. Just find and compile radeon.o in the XFree86 source tree. Copy radeon.o (which is version 1.1.0) into /lib/modules/2.4.x/kernel/drivers/char/drm over the top of the existing one (which is version 1.0.0).

    That worked. Greetz, grats, and thanx to all in the XFree86 team and their DRI buddies that made this work so well. Quake III is beautiful on my Radeon.

    By the way, if you dual-boot, you can use a Win32 install of Quake III to play on Linux. Download the latest Linux point release from www.quake3arena.com [quake3arena.com]. Change directories to one directory above your "Quake III Arena" directory on your Windows partition. Change its name to "quake3". Untar the point release. Change the directory name back to "Quake III Arena". Run quake3.x86. Isn't that spiffy?
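
    The same dance, spelled out as a sketch (paths and the point-release filename are illustrative):

      cd /mnt/windows/Games                 # the directory above "Quake III Arena"
      mv "Quake III Arena" quake3
      tar xzf ~/linuxq3apoint-1.29.tar.gz   # the Linux point release from www.quake3arena.com
      mv quake3 "Quake III Arena"
      cd "Quake III Arena" && ./quake3.x86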
