Graphics Software

NVidia releasing OpenGL ICD by End of Year

ttyRazor writes "ga-source.com is reporting that at Comdex they were told by NVidia that they will be releasing an OpenGL ICD for Linux for all their current products by the end of the year. Woohoo! Quake 3 on my TNT in Linux! One less reason to dual boot. " Mmmm...prettier graphics. I'll give thanks for that.
This discussion has been archived. No new comments can be posted.

  • I'm very pleased to see companies adopt this kind of stance. I'm only buying hardware now from companies that have a positive attitude to linux, and letting companies know this. I'm now strongly tempted to buy an NVidia card to replace my G200.

    Please let hardware (& software) companies know that there is money to be made in supporting Linux. I'm very glad to see that Creative Labs have caught on to this. With NVidia, 3DFX & Creative to set good examples, hopefully the rest will follow.
  • TNTs are crap compared to nVidia's new GeForce.

    GeForces are crap compared to the video systems in SGI machines.

    Let's just face it: if you want good 3D performance, you go for an SGI machine running IRIX or SGI Linux.

    If you want realistically priced 3D performance, you go for an Intel-compatible machine with a G200 / Voodoo3 / TNT.

    I'm just looking forward to GeForce cards being usable in Linux. Then I'll be happy.
  • I was under the impression that glx could also write directly to video memory, bypassing the X server. That's why you can't take screenshots of it using a standard X screen grabber.

    It cooperates with the X server.

    I was able to make nice screenshots [freebsd.org] using good old xv on my FreeBSD system. Pulsar shows some fps (K6-300).

  • by Anonymous Coward
    Actually, 3DFX just released an alpha version of their DRI system (with FULL source). Check out http://www.3dfxgamers.com/view.asp?IOID=1024 [3dfxgamers.com].

    Watching Q3Test at 30fps in a window on a V3 is pretty impressive!

  • I'll believe it when I see it. This isn't the first time companies have made great promises and then failed to deliver on them. Don't get me wrong, if they do, it's fantastic, but it'd be much cooler if they'd just deliver it and let that serve as the announcement.
  • I guess I just need to find a clue somewhere... but I do not care one bit if nVidia releases all source, some source, or no source for any of their drivers. I want the drivers, so I can use the hardware to do in Linux what I can do in Windows (read: games).

    Sorry if that ruffles someone's feathers... but I would say the vast majority of people do not really care about the source; they care about it actually working. I have no problems with them writing their own drivers for their hardware.

    It is in the company's best interest to write good drivers, as fast as possible, supporting as much as possible. Personally, I hope I can fly in Quake 3 on my nice TNT2. That is my bottom line, not source code I will never look at.

  • Ha ha, funny. When do you run two OpenGL apps at once? No one in their right mind does. (Unless you like things running at half the fps.) Second, yes, locking could be a problem when using multiple threads, but A. multiple threads aren't used often in Linux, and B. BeOS has shown that very efficient locking can be done. And under something like BeOS, where multiple threads using GL might be a problem, in the event that it DID lock, you could just restart the OpenGL server. So A. you normally don't have a problem with multiple OpenGL clients, or B. you are running an OS that has multiple GL clients, but can manage the locking correctly. And I wouldn't criticize Windows until one particular OS, which shall remain nameless, can outperform it in OpenGL.
  • I always had the feeling that the only reason they built the VisWS is because MS ordered them to. All their value was in their MIPS/Irix-based machines and software (although not in Irix itself). Building an x86/NT-based machine put them in direct competition with Intergraph and others. As several people mentioned, even with their custom bus architecture, they really weren't much faster than other x86 workstations, and customers couldn't justify the cost/performance of the machines.
  • The GLX that the TNT2 and TNT use only works at 15-bit and 16-bit colour. Any other depth and it reverts back to software.

    Yes. This was in the release notes. 32bit color would be nice, I agree wholeheartedly.

    The driver didn't access all the features of the chipset. There was no advantage to having a TNT2 with 32Mb over a TNT with 16Mb. (Unless you had a ridiculously large workspace with 32-bit colour, but that has nothing to do with the GLX code)

    I have a secret for you: no one cares. 32 megs is on that card 'cos they wanted to have a bigger number on there, but there's no excuse for it. The only case in which you would care is if you literally have 32 megs of textures on the screen at once. Otherwise you can page on and off the card very efficiently. Speaking from experience here, about 4 megs of texture memory is more than plenty.

    Basically, if you put a TNT2 next to a comparable Matrox card (G200?) and a comparable 3Dfx card (Voodoo2?), the TNT2 would be only marginally better than an S3 Virge. :-(

    You obviously haven't played Q3A on a TNT2 in Linux.

    Again, speaking from experience, it's not bad. The cost is a weird one, since you're paying for data sent to the card, so it's kind of like having a slower computer. But it's nowhere near a Virge.

    And this is a problem because of current GLX architecture, i.e. we're making the wrong trade-off now -- GL apps work beautifully in a network transparent way (you can display them remotely) but at a big speed hit. DRI will fix that. But all nVidia's got to do is reimplement their driver to use the DRI version of GLX ... doesn't seem like that would be too hard to do given the code they've released. Granted, I've only read the design docs for DRI, not looked at code. So I don't really know how bad it'll end up being.

    For that matter, I should really look more closely at the code nVidia released....
  • I apologize. Stupid me for not keeping up.

    -Jay Laney, who is downloading it now.
  • Two points...

    First off, I didn't see any mention of X in the article, unless there's more to it than the single paragraph that I found. If this does refer to an nVidia X server, then it's really nothing new. If it refers to something else, I'd like a bit more information.

    Second, the guys over at XFree said about 4 to 6 weeks between snapshots, and the last snapshot was around the end of August. Anybody know what's up over there? Is nVidia expecting 4.0 to be released by the end of the year for their "new ICD"?
  • Da! How?!?!? I can't get Q3 to run under my TNT2 chipset... I keep on bombing out with Segmentation Faults...
  • I had this MPEG that mpegtv would display fine, but the sound was all choppy. SMPEG played it fine. Plus, SMPEG won't have that annoying shareware dialog box when you start up the GUI version (I think; I never got the SMPEG GUI to compile, but I guess that is just my system).
  • Locking is addressed in one of the Precision Insight papers; I can't judge their scheme at present, however.
  • It runs fine on my P2 350 / 160MB / TNT2. Just download the X server from nvidia (I would get the one with GLX built into it; you'll get more fps with it), then configure X with mode 640x480 plus whatever other modes you use, then start Quake 3. You have to have 640x480 set because the game initially starts up at that resolution, so if you don't have it, it will not work; and for whatever resolution you want to play in, you'll need that set in your X config file too. (I get around 25fps at 800x600 with texture quality all the way up.)
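
    For reference, a minimal sketch of the relevant XF86Config pieces (the driver, device and monitor names here are placeholders for whatever your setup uses; the point is the Modes line):

        Section "Screen"
            Driver      "svga"
            Device      "TNT2"
            Monitor     "My Monitor"
            DefaultColorDepth 16
            SubSection "Display"
                Depth   16
                # 640x480 must be listed: Quake 3 starts there before
                # switching to the resolution you actually play at
                Modes   "800x600" "640x480"
            EndSubSection
        EndSection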
  • Linux just plain outperforms, outscales, and blows SGI IRIX machines out of the water.

    I love to emphasize that FreeBSD was used for rendering The Matrix. But let's face it: in both films mentioned, the free operating systems just delivered raw muscle, not the brains (i.e. they acted as render farms).

    The higher level modeling and control still seems to be a job for SGIs.

    The present stuff is getting nice and is already sufficient for certain modelling needs, but we are not state of the art.

  • Since this will be an Open Source ICD (?), hopefully we will be able to port it over to BeOS, OS/2, etc.? Let's get the ports under way!

    So what did we learn?
    Basically the C keyword auto is useless.
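
    (True, at least in C89: auto is only legal on block-scope variables, which have automatic storage by default anyway, so writing it changes nothing. A minimal illustration:)

        void f(void)
        {
            auto int x = 1; /* exactly the same as "int x = 1;" */
            int y = 2;      /* already has automatic storage    */
        }
        /* at file scope, "auto int z;" would be illegal */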
  • I'm using everything straight from CVS, like the FAQ says, and it works great with Quake 3 demo test on my G200. 800x600/vertex, decent frame rate.


    Interested in XFMail? New XFMail home page [slappy.org].
  • Use -rMesa_3_2_dev (or something similar; I can't remember offhand, and I'm at work while my checkout tree is at home). Basically, get Mesa 3.2.
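
    In other words, something along these lines (the repository location is a placeholder and the tag name is per the comment's own uncertainty; check the GLX FAQ for the exact values):

        # $CVSROOT should point at the Mesa repository named in the GLX FAQ
        cvs -d $CVSROOT checkout -r Mesa_3_2_dev Mesa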
  • The driver that has been offered on the nVidia site for some time now is based on a pre-August snapshot of the openprojects GLX. The changes are mostly included in the present version, so it is more or less old stuff now.

    The nVidia-specific part has not been touched, except for some adjustments to later XFree86 changes regarding the card IDs. But the other stuff (and of course the Matrox-specific things) has been improved.

    XFree86 will have direct rendering (DRI) and indirect rendering (glx). Here I expect the hardware drivers of the free GLX to be integrated and the GLX protocol stuff to be replaced by the SGI implementation. But who knows for sure.

  • The driver didn't access all the features of the chipset. There was no advantage to having a TNT2 with 32Mb over a TNT with 16Mb. (Unless you had a ridiculously large workspace with 32-bit colour, but that has nothing to do with the GLX code)

    I have a secret for you: no one cares. 32 megs is on that card 'cos they wanted to have a bigger number on there, but there's no excuse for it. The only case in which you would care is if you literally have 32 megs of textures on the screen at once. Otherwise you can page on and off the card very efficiently. Speaking from experience here, about 4 megs of texture memory is more than plenty.
    Actually, 32Mb of texture RAM *does* make a big difference, particularly with the newer games. 8-10 months ago, when 16 Mb cards were the norm and the first 32 Mb cards came out, numerous benchmarks showed the bigger RAM helped significantly when moving to 32 bit display & textures.

    Yes of course you can swap textures in & out, but it's hardly free - bus bandwidth is one of the most stressed resources in today's games (hence the popularity of AGP). And it can slow things down dramatically. Q3Test takes quite a decent performance hit on my 16 MB TNT when you turn on 32 bit textures (doubling the texture RAM needed). AGP texturing can help, but is still slower than more local RAM.

    Finally, don't forget this RAM is usually also used for the frame buffer and Z buffer. At 1280x1024 with 32 bits each, that's 1280 x 1024 x 4 bytes = 5 MB per buffer, so you've already used up over 10 MB just for those.

    Namarrgon
  • Yeah, but multiple windows isn't multiple clients. Do you run 3DMax and Truespace at the same time? 3DMax has a bunch of windows, but it is still one client. I don't play games, I do 3D animation, and any app I use that needs the accelerated performance is also too heavy to have multiple instances running at the same time.
  • Damn AC. Why not look at the second link and see that SPEC published the results of Lightwave, DRV, etc on those cards? Those are quite valid benchmarks.
  • What is wrong with a binary-only driver? XF4 made binary drivers possible precisely so companies would port their drivers to the Linux platform. With all the whining you "I would rather have software-rendered 3D than a binary-only driver" people do, nVidia ought to just not bring drivers to Linux. The fact that HW acceleration is finally coming to Linux is a good thing. One major hurdle keeping people from using Linux is the lack of decent drivers. If companies could make binary-only drivers for Linux, then we would see decent support for devices instead of some hack that "almost works now" and is at version .01.
  • ATM all one can do with a GeForce is vga16, because it doesn't seem to be VESA compliant!! I've heard rumors that XFree 3.9.17 will have support for it, but there's no notice of when it will come out... The last XFree snapshot [xfree86.org] was back on August 31, and they promised to do a snapshot every 6 weeks, so one is already overdue... Maybe I'm too impatient, but owning the fastest card on the block and only being able to use it fully on Windows...
  • I'd rather not have a binary-only driver that plugs into the X server that runs as root.

  • This is where I found mine. Compiles well and works well. If you can't be bothered building the server, I can send you my binary: http://www.s2.org/~jpaana/nv/geforce-3.3-patch.gz [s2.org]
  • Look at the "3dfx opens up Glide" article again. They opened up a very small portion of the Glide API. Not enough for people to do more than write an interface for Glide. I'd rather take a fully closed source setup to start with and have it opened later than these viral partial disclosures that 3dfx does. What 3dfx does gives them little to no incentive to open up their entire API.

    Additionally, 3dfx's most recent products have been, how shall we say.....LACKLUSTER, compared to just about every other product on the market. Even their VSA-100 is "more of same". And I'm bitterly disappointed in them for it.

    Even simple 32-bit color. They're going to be adding support for it, finally, over TWO YEARS after everyone else.

    The only REAL choices you have are which performance-sapping features you want to use. T-Buffer for motion blur, or FSAA. Use them both on any card, save the $600+ V5-6000, and you're going to get a slide show.

    Thanks for your response. "I" think you happen to be wrong though.


    Chas - The one, the only.
    THANK GOD!!!

  • Pretty Colors *sits dazed and confuzzled*

  • What about other hardware accelerated features such as filtering (bilinear + zooming) for my good old RIVA 128? I'm a little tired of having to watch an MPEG video in a small window, and I really do not wish to boot to windows for trivial things like that. Or is this limited to the driver in XFree86?
  • nVidia's starting to really deliver for the gamers and the Linux community. Hope they keep it up and don't become another 3dfx.


    Chas - The one, the only.
    THANK GOD!!!

  • Actually, ICD stands for "Installable Client Driver" and is a driver which links itself to Windows' WGL library. Therefore, nVidia isn't actually working on a Linux ICD but a full Linux OpenGL implementation.

    Unless, of course, they plan to port WGL to Linux....
  • by mvw ( 2916 )
    I am not sure what to expect; I hope not some obfuscated stuff again. <Sigh> So far they have tried to be open and restrictive at the same time.

    I have not read about this on glx-dev or xfree86-dev yet either.

  • I recently got a Matrox G400 to replace my old G200, since Matrox released full specifications, and the GLX project appeared to be going quite well. I was hoping that Quake2 would be more playable and I'd get >3fps in Q3Arena when it's released.

    Instantly I was impressed with the 2D performance (everything just feels faster), but I can't get the GLX stuff [openprojects.net] to even compile, never mind run. It complained of a missing api1.c file, even though I think I'm sitting with current CVS updates from both Mesa and GLX.

    Is it ever going to be released in binary form, working properly? Will it give decent Quaking performance? Or am I going to regret buying from Matrox when I could have got a TNT2 with full, working drivers, for less than I paid for my current card?

  • I suppose they're talking about a GLX driver, since a MesaGL driver wouldn't make any sense with an integrated 2D/3D chip. I wonder if their announcement of Linux support is a result of 3DFX's announcement of the new voodoo4/5, which will obviously have kickass Linux support. :)
  • This article actually leaves me with more questions than answers. Are they actually developing their own OpenGL implementation for Linux? I don't think they're really creating an ICD for Linux, since isn't that a Windows term? I hope they release the docs this time, instead of entering the obfuscated C contest. Fast Quake would be nice.
  • I know this is off topic, but if you want to watch fast, accurate MPEGs in Linux fullscreen, then grab a copy of Loki's SMPEG SDL-based player. It performs a lot better than xanim. I've tested it with the videos Goodtimes and Buddy Holly from an old Win95 CD, and it ran fullscreen at full framerate with no glitches whatsoever.

    http://www.lokisoftware.com
  • NVIDIA has a history of supporting things before they become "necessary", such as T&L, 32-bit rendering, Stencil Buffer, etc. Linux is yet another thing that hasn't really gone "mainstream" yet, but when it does 1) it will partly be because of developer support like this and 2) NVIDIA will have a much more mature ICD than other devs (*ahem*3dfx*ahem*).


    Pablo Nevares, "the freshmaker".
  • by rogerbo ( 74443 ) on Thursday November 25, 1999 @01:26PM (#1504417)
    I think I know why Nvidia is doing this and it ain't nothing to do with games or wanting to be nice to the open source community.

    Nvidia and SGI are scheming behind the curtains to create NT-killer 3D workstations that are Intel Linux based and will have either Quadros or, most likely, some kind of multi-pipe setup (2-4 Quadros in parallel) and a custom bus architecture (like the current SGI visual workstations).

    Cue a release of Maya for Linux soon (it's done they're just waiting for the linux 3d hardware support to catch up...)

    A broad release to the Linux community gets their drivers thoroughly beta tested before the release of their custom boxes, probably about March next year.

    Unfortunately, I don't think they will open-source the drivers then. It will probably be an open source resource manager (basic interface) and a binary-only GLX module for XF86 4.0. If I understand the XF86 4.0 architecture correctly, it's possible to have binary-only modules that link into the X server, and, well, at least we don't have the problem of kernel modules compiled for the wrong kernel version anymore.

    Then once SGI gets a few more features into Linux (raw I/O, XFS, realtime uncompressed video streaming), look for some seriously cool Linux-based video editing/compositing systems....

    The next year will be interesting....
  • It would be a shame if it didn't integrate well with XFree86 4.0; that would mean that any extras, whether DGA or other X extensions, would have to be developed and maintained by them.

    The consideration that XFree86 4.0 is to be "more modular" will either encourage production of proprietary modules, or downright discourage it, if there need to be some reasonably intimate links between modules. My hope is on discourage.

    Hopefully RAM prices will come back down; if XFree86/OpenGL support for some not-too-expensive 32MB cards comes along, I might consider one in the new year some time...

  • Does anyone know where I can get the libMesaVoodooGL version that came with the old Quake 3 test? I don't want to have to download the whole thing just for that one file.

    My system majorly lags using the MesaGL driver.
    Thanks
  • .. linux is really becoming __main stream__

    get it :)

    /jarek
  • This isn't really new. I've been using it for the last 6 months. There has been a team working on accelerated OpenGL (GLX) drivers for a long time. The drivers can be downloaded from "http://glx.on.openprojects.net". However, you must keep in mind that these drivers are still under heavy development and still have bugs (alpha/beta quality). I've been using them with Quake (Q2 and Q3test) and with FlightGear on a RIVA TNT. The drivers work just fine.

    The supported chipsets are: Matrox G200 and G400; NVidia RIVA 128, RIVA TNT and RIVA TNT2; and recent ATI cards.

    You have to download the drivers and compile them together with Mesa3D. TIP: when you "configure" the compilation flags, you shouldn't forget to select the right chipset, e.g. --with-chipset=tnt. You will have to compile a new OpenGL shared library "libGL.so.1.0" and a new XFree86 module "glx.so". The library must be moved to "/usr/X11R6/lib" and the XFree86 module should be moved to "/usr/X11R6/lib/modules". At the end, you will also have to configure the new module in your "/etc/X11/XF86Config" file. In case your X server doesn't support loadable modules, you must upgrade to a newer version. On my system, the "glx.so" module compiles OK but is missing one object file (asm386.o), so I always copy that file from Mesa-3D and append it to "glx.so".

    fjp
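
    A condensed sketch of those steps as commands (the configure flag and install paths follow the comment above; exact invocations per the GLX FAQ of the day may differ):

        # build the GLX module together with Mesa, for a TNT
        ./configure --with-chipset=tnt
        make
        # install the GL library and the X server module
        cp libGL.so.1.0 /usr/X11R6/lib/
        cp glx.so /usr/X11R6/lib/modules/
        # then load the module from /etc/X11/XF86Config:
        #   Section "Module"
        #       Load "glx.so"
        #   EndSection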
  • Just wanted to add to my own comment (can you go blind from that?)

    I suspect SGI will release an Intel-based 3D workstation that is SOLELY Linux based, i.e. you will NOT be able to run NT on it. The reason I suspect this is that SGI was severely hampered in the design of their Visual Workstations because they needed Microsoft to support them in NT. The VisWS was delayed for almost 6 months while they waited for NT5, and when that never came out, they instead got a special patch to NT4.

    SGI got burned badly on the VisWS by Microsoft, and they wouldn't want to repeat that, or to just put together a Compaq-style generic Intel workstation and slap a badge on it.

    With Linux they can cut their own custom version of the kernel and their own OpenGL drivers (with Nvidia) and be as funky as they wanna be with switch-based busless architecture (OCTANE style) and multipipe rendering.

    Hopefully their modifications would be included in the official kernel, but if not, I'm sure SGI is capable of maintaining their own stream and keeping it parallel to the official sources for a while.

    Just my predictions.... we'll see if they come true.
  • I might be wrong, but A) he does not say that 3DFX has not announced support for Linux; he just says that their ICD on Windows is not as mature (which I agree with; only recently have they begun to use an ICD, where before they used MiniGL drivers). B) Just because they announce it first does not mean they'll release it first.

    Lucky for me, I'm taking a wait and see approach, since one of my boxes has a NVIDIA card, whilst the other has a 3DFX card.

    -Jay Laney

  • I saw on the Linux/TNT maillist that they are backporting GeForce support to X 3.3.6 - I even saw a patch for it if you can't wait.
  • There's a new extension being introduced in XFree86 4.0 which should make video zooming and similar things very easy; it's imaginatively called the X Video Extension.

    AFAICT, it's still pretty experimental, and I don't know what (if any) hardware is supported, but it sounds very interesting.

    http://www.xfree86.org/snapshots/3.9.16/DESIGN16.html

  • Slashdot's slogan has been 'News for Nerds. Stuff that matters.' for as long as I can remember. Not just news for Linux users, but news for all nerds. So why's this article headed NVidia releasing OpenGL ICD by End of Year then? They've already released this, just not for Linux. And not everyone here is a Linux user.

    If Rob, Hemos & co. want Slashdot to be a Linux community site, they should say so. If they want it to be a general nerd site which happens to be rather fond of Linux (which is how I would view it right now) then they shouldn't make posts like this which assume that Linux is all that matters to the readers. Otherwise we'll start viewing Slashdot as a Linux community site and I, for one, would stop using it. Nothing against Linux per se, it's just not what I want or use so a Linux news site is entirely irrelevant to me.

    Greg
  • There ain't such a thing. It's either a driver for DRI (unlikely, at the time) or an extension for the 3d hardware glx module (the one with pretty nice G200 and G400 support) for X 3.3.

    Come on, this isn't windows. There's no such thing as an ICD or any other billgatesland TLAs on linux :-)

    I wouldn't be surprised if they released a binary-only driver too. That would be consistent with their previous support of open source (the riva 128 glx source was put thru cpp, for example).

  • NVIDIA had published information on their upcoming Linux plans quite some time ago:
    http://www.nvidia.com/Products.nsf/htmlmedia/software_drivers.html


  • But the fact is that 99.999% of Linux users have NOT written their own OS (out of 10 million, only one, Linus, has). Thus, wouldn't it be better for the people who USE the OS, not just the idea, to have decent drivers? Besides, developers ought not need to know the HW specs; they should be coding to the API. You just say you want source because you stand behind free software, right? But what about when that free software undermines our favorite OS? Open source drivers certainly aren't helping Linux. And if Linux wants mass appeal (people like having their favorite whatever succeed), then it has to embrace both open AND closed source. Doing otherwise is just being closed-minded in the other direction.
  • The GLX that the TNT2 and TNT use only works at 15-bit and 16-bit colour. Any other depth and it reverts back to software.

    Yes. This was in the release notes. 32bit color would be nice, I agree wholeheartedly.


    Was there any justification for this? It's a limitation of older nVidia hardware, but this driver was released primarily for the TNT. Is there something so hard about detecting the older cards, but allowing 32bpp on newer ones? Or is it a limitation of the current version of GLX?
    Do the Matrox cards allow 32bpp accelerated?
  • You run GL apps on thin clients? The only place where multi-user would be applicable is when you have massive graphics hardware. (My CS lab has a bunch of SGIs that groups share.) But the article was on Linux and the TNT, so running it multi-user would be stupid.
  • by Anonymous Coward
    I've got an HP 6200C and I just got it working under Linux. Here are the steps:
    1. Get kernel 2.2.13
    2. Get the USB backport patch from http://www.suse.cz/development/usb-backport/ and apply it
    3. Get the latest USB scanner driver from http://www.jump.net/~dnelson [jump.net] and apply the patch. You'll have to manually patch drivers/usb/Makefile based on the .rej file.
    4. Follow instructions in drivers/usb/README.scanner
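
    A rough sketch of steps 1-4 as commands (the patch file names are illustrative; use whatever the two sites above actually serve):

        # assuming the 2.2.13 tree is unpacked under /usr/src/linux
        cd /usr/src/linux
        patch -p1 < ../usb-backport-2.2.13.diff    # step 2
        patch -p1 < ../usb-scanner-update.diff     # step 3
        # fix up drivers/usb/Makefile by hand from the .rej file,
        # rebuild the kernel, then see drivers/usb/README.scanner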
  • Hope they keep it up and don't become another 3dfx.

    Yeah, I sure hope nVidia doesn't start actually delivering useful products like 3dfx does. 3dfx just opened their drivers. Are you hoping that nVidia will keep their drivers closed? The community is about results, not vapor, and right now the way to go for 3d-acceleration in Linux is 3dfx.
  • The GLX homepage was just updated this Tuesday. There's now a GLX Quickstart MiniHOWTO [execpc.com], a link to some RPMs (I'll try using alien on them to see if they work with Debian) and a better FAQ.

    Plus there's a glx-users list for stupid people like me to ask questions on how to use GLX, as opposed to pissing off John Carmack by asking on glx-dev.

    I should have known this would be the case before I posted to slashdot :-)

  • > Woohoo! Quake 3 on my TNT in Linux!

    I'm already running Q3 on my K7-500/TNT2 and it's looking quite good!
  • What am I missing here?

    My TNT2 Ultra has worked nicely for some months now (since this summer), thank you very much. There's a fully open source (GPL iirc) driver built around Mesa and SGI's GLX.

    What is who talking about? Is nVidia talking about a DRI-compliant driver for use with XFree86 4? I would hope they plan to provide one, but if Precision Insight's assorted whitepapers on the subject are on target, porting to DRI shouldn't be that hard ...

    Comments from someone with some level of clue? Is nVidia just re-releasing old news to have something to say at Comdex?
  • ftp://whizbang.penguinpowered.com/pub/libMesaVoodooGL.so.3.1 [penguinpowered.com]

    That ftp server had it a few days ago, but it was down last time I checked... if it's not up by tonight I'll toss the lib on the LG ftp server.
  • The problem is that the Utah GLX module isn't compatible with Mesa 3.3 as yet.

    If you check out Mesa 3.2, it should work. Details on how to do that are in the FAQ.

    The GLX configure script will print a big and ugly warning message if it finds Mesa 3.3 in the latest version.
  • Are they actually developing their own OpenGL implementation for linux? I don't think they're really creating a ICD for Linux since isn't that a windows term?

    You are right; the buzzword to watch for in the case of X is DRI.

    See this diagram [precisioninsight.com] for what to expect. (There is also a less detailed version and a poster-sized version in the parent directory.)

  • Like the title says, 3DFX already did it.
  • One minor problem with something being continuously developed is that occasionally things get broken. It'll probably get fixed by the next snapshot... :)

    Just make sure you have the latest Mesa3.2dev.tar.gz and glx-SNAP-1999????tar.gz, then try with those. Have a look on the GLX mailing lists and FAQs - they've got a very good little HOWTO there - and find out how to set it up.

    When you have the right files, it's pretty easy to setup and install. It works very nicely on my 8MB Matrox G200. It's still not quite as fast as Windows 95's OpenGL, but (IMHO) the picture quality is considerably better - it's not nearly as 'bitty' or dithered-looking in Quake 3.

    Keep trying with it, and you'll be pleasantly surprised when you get it all working. Xscreensaver OpenGL hacks look very nice - sproingies -root looks somewhat surreal...



  • by mvw ( 2916 ) on Thursday November 25, 1999 @12:49PM (#1504452) Journal
    My TNT2 Ultra has worked nicely for some months now (since this summer), thank you very much. There's a fully open source (GPL iirc) driver built around Mesa and SGI's GLX.

    Nope, the GLX that is out and works with nVidia cards (and even better with Matrox ones) is not the SGI one but an open effort - albeit one prominent team member is an SGI employee.

    However, the upcoming DRI stuff for XFree86 4 is based on a newer GLX implementation (we are talking OpenGL over the X protocol now) by SGI. At present it is expected that only the hardware driver stuff of the openprojects.net GLX will make it into XF4 - but who knows.

  • Hemos uses Windows?

    If you're referring to "One less reason to dual boot", it's in the quote from ttyRazor, which means Hemos didn't say it.
  • Alright, so Matrox haven't written any drivers for Linux. But how come the GLX people have come up with a superb G200/G400 driver based only on specifications, but have been unable to do pretty much anything with the (somewhat obfuscated) source code already provided by NVidia?

    I would much rather detailed programming information was released than buggy, non-standard OpenGL drivers. It would be much nicer if all hardware 3D in Linux was derived from one codebase (eg Utah GLX) than if each vendor provided their own, open-source OpenGL implementation.

    I bought a graphics card recently. They'd run out of TNTs, so I bought a G200. I'm glad I did so - 3D support in Linux for the G200 has been advancing at an incredible rate compared with that for the TNT.
  • "The sourcecode released by nVidia was Precompiled"

    What? How can source code be precompiled? The stuff nVidia released was GLX and XFree86 binaries. They both currently support the TNT(2) if you get the current source anyway.
  • Note that glx is the OpenGL over X protocol, meant primarily to allow an OpenGL app running on host A to draw on another host B. X is a network GUI after all!

    Thus you have the X protocol overhead. This is called indirect rendering, in contrast to direct rendering, where a client app runs on the same host as the display server. Direct rendering is expected to be quite a bit faster, and was demonstrated at SIGGRAPH in an early stage.

    This so-called DRI is targeted for XFree86 4. See my message above, where I posted a link to a diagram at Precision Insight that shows both situations.

  • It was pre-processed to obfuscate it.
  • /me (shudders...)
  • I was under the impression that glx could also write directly to video memory, bypassing the X server. That's why you can't take screenshots of it using a standard X screen grabber.

    That may be only when using the AGP features of an AGP card in X, though.

  • It hasn't hurt Matrox that they released nice specs, so that a lot of people, including John Carmack, got interested in hacking a driver.
  • The source code released by nVidia was precompiled, which means that it is halfway between being structured code and real machine code. (My over-simplification)

    The GLX that the TNT2 and TNT use only works at 15-bit and 16-bit colour. Any other depth and it reverts back to software.

    The driver didn't access all the features of the chipset. There was no advantage to having a TNT2 with 32Mb over a TNT with 16Mb. (Unless you had a ridiculously large workspace with 32-bit colour, but that has nothing to do with the GLX code)

    Basically, if you put a TNT2 next to a comparable Matrox card (G200?) and a comparable 3Dfx card (Voodoo2?), the TNT2 would be only marginally better than an S3 Virge. :-(

    Anyway, my interpretation of the "Article/Paragraph" is that nVidia are opening their ICD for us to look at, and do what we like.
    ("We" being the open-source/Free-software community)

    I wonder if it will be GPLed or BSDed or "n Public Licence"ed.
  • Given NVIDIA's gradual back-pedalling on open source, I have my doubts... first the open-source OpenGL without documentation, then the closed-source (obfuscated) object-oriented low-level API.

    I am not completely sure what is going on in their minds.

    They seem to want to harvest the free developer resources of the net on one hand, but on the other hand seem to fear giving away too many secrets to their competition by opening up completely (it is an incredibly competitive market, after all).

    It is also possible that they don't supply register-level stuff because they fear we would produce crappy drivers, hurting the brand name.

    The material released so far and updated this month is something more high level, which also provides uniform access across several NV chip generations. That could be the reason why the traditional hardware freaks have not picked it up so far.
