Graphics Software

3DLabs Launching New GPU 204

h0tblack writes "...or VPU as they've seen fit to call it. The Register is reporting that 3DLabs will be releasing the P10 later this year. It's targeted at the workstation and gaming markets, with OpenGL 2.0 and DX9 drivers having already been seeded to developers. Could be interesting, as 3DLabs have been one of the key players in the development of OpenGL 2.0. The P10 has over 200 SIMD processors throughout its geometry, texture and pixel processing pipeline stages to deliver over 170 Gflops and one TeraOp of programmable graphics performance, together with a full 256-bit DDR memory interface for up to 20 GBytes/sec of memory bandwidth. More info can be found in the press release." There are also examinations of the new chip on Anandtech, Tom's Hardware, and no doubt many other hardware sites.
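As a rough back-of-the-envelope check on those headline figures (the clock rates below are assumptions chosen to illustrate the arithmetic, not published 3DLabs specs):

    /* Back-of-the-envelope check of the P10 press-release figures.
     * The clock values are assumptions for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        /* 256-bit DDR bus: 32 bytes per transfer, two transfers per
           clock, so roughly 312 MHz is enough for ~20 GB/s. */
        double bus_bytes     = 256.0 / 8.0;   /* 32 bytes wide */
        double mem_clock_mhz = 312.5;         /* assumed       */
        double bandwidth_gbs = bus_bytes * 2.0 * mem_clock_mhz * 1e6 / 1e9;

        /* ~200 SIMD units each retiring a few floating-point ops per
           clock lands in the claimed 170 Gflops range. */
        double units = 200.0, flops_per_clock = 4.0;  /* assumed */
        double core_clock_mhz = 215.0;                /* assumed */
        double gflops = units * flops_per_clock * core_clock_mhz * 1e6 / 1e9;

        printf("memory bandwidth ~ %.1f GB/s\n", bandwidth_gbs);
        printf("peak throughput  ~ %.0f Gflops\n", gflops);
        return 0;
    }
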
This discussion has been archived. No new comments can be posted.

  • Heh (Score:2, Funny)

    by Delifisek ( 190943 )
    Looks like someone is trying to reinvent the Amiga...

    Damn, why do we spend all these bucks on the PC architecture...

    Capitalism...
  • I purchased a dual-PIII 600 system with a 3Dlabs Oxygen GLX card in June 2000. OK, so it's been a while, but I still know some individuals running GeForce II-based cards and playing games like Tribes II. My machine balks at TII and won't even load MOH or even certain DirectX games. I just want to know if 3Dlabs will incorporate some of this technology into a good workstation/gaming card so I can trade up my aging system for something better. Plus, I wouldn't be getting stuck with a crappy card in a few years' time. I'm not gonna pony up another $1k just to get stuck playing CS only again.
    • I'm not sure if this is true now, but those older 3Dlabs cards, while they had impressive specs, were not meant for gaming. They were designed for high-end 3D modeling, i.e. AutoCAD drawing an entire design composed of thousands of little pieces. While the two uses have similarities, they are not the same.
  • by dreamchaser ( 49529 ) on Friday May 03, 2002 @09:47AM (#3456842) Homepage Journal
    Press releases can't render anything, to the best of my knowledge. I'll reserve judgement until I get my hands on a review unit. However, this can only be a good thing. Competition drives prices down and features up.
  • Hopefully, this will not turn out to be a flop like the Permedia. I remember waiting for that and proclaiming that it would be faster than anything else when it came out. *cough* NOT *cough*

    The specs were great, but the actual implementation and drivers, well, sucked hard.
    • The specs were great, but the actual implementation and drivers, well, sucked hard.

      Actually, the Permedia was a nice card for its time. The OpenGL drivers were better than anything else out there (remember, this was back when the Voodoo 1 was king of the hill, and the OpenGL drivers were perpetually beta and you couldn't run in a window anyway).
      • Well, I should have said Permedia 2, at least from a Wintel gamer's perspective.

        It was especially bad as they were marketing it as a gaming card initially. And I actually put off buying an Nvidia card waiting for it.

    • The specs were great, but the actual implementation and drivers, well, sucked hard.

      Sure, the Permedia wasn't the quickest card on the block in its time, and neither was the Permedia 2 nor the Permedia 3...

      But both the NT and Win9x drivers were absolutely 100% rock-solid, the OpenGL implementation was flawless and very, very fast, and the card supported a whole bunch of features that no other consumer-level chipset at the time supported, like anisotropic filtering, or multiple video overlay windows at once. The RAMDACs were really good on the Permedia 2 also - razor-sharp, much much better than the TNT2 I ended up replacing it with. It was also rather faster at GUI acceleration than the TNT2, which was a surprise and a disappointment.

      Really it was a semi-pro card at consumer-level prices. It would never have been the card you bought if you wanted the ultimate Quake framerate, but it absolutely oozed quality.

      It's the only graphics card I've ever used that hasn't annoyed me in some way, be it dodgy image quality (NVIDIA, S3), unstable drivers (ATI, NVIDIA), bus latency greediness (NVIDIA, S3, Matrox, often leads to choppy, stuttering audio), or just being dog-slow (all the usual suspects - hello Trident, earth calling). I've never used a 3dfx card for more than a few minutes so I can't really comment on them, but I suspect their poor OpenGL support would have annoyed me greatly.

      If only 3Dlabs had 3d-accelerated Linux drivers (preferably open source) I'd buy another one in a heartbeat. I've been disappointed with every other card I've had since my Permedia 2...

      • The Permedia 2, as far as I know, is also the only consumer level card that properly supports OpenGL stereoscopic viewing. Not like the poorly hacked stereo available on some GeForce cards today. I know that not too many gamers use stereo, but for 3d data visualization, stereo can be very useful when done right. The Permedia 2 provided good data vis capabilities and gaming capabilities on a budget even a student could afford!
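        For reference, OpenGL's quad-buffered stereo just means selecting separate left and right back buffers and rendering the scene once for each eye. A minimal sketch (draw_scene() and the eye offset are placeholders, and a stereo-capable visual/pixel format is assumed):

          /* Quad-buffered stereo, one frame (sketch). Assumes a stereo
           * visual; draw_scene() and the 0.03 eye offset are placeholders. */
          #include <GL/gl.h>

          extern void draw_scene(void);

          void render_stereo_frame(void)
          {
              glDrawBuffer(GL_BACK_LEFT);                /* left eye */
              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              glMatrixMode(GL_MODELVIEW);
              glPushMatrix();
              glTranslatef(+0.03f, 0.0f, 0.0f);
              draw_scene();
              glPopMatrix();

              glDrawBuffer(GL_BACK_RIGHT);               /* right eye */
              glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
              glPushMatrix();
              glTranslatef(-0.03f, 0.0f, 0.0f);
              draw_scene();
              glPopMatrix();

              /* then swap buffers via GLX/WGL/GLUT as usual */
          }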
  • by Wolfier ( 94144 )
    I've been waiting for the next 3DLabs chip for a long time! The last experience with Permedia 2 = Rock *solid* drivers + 100% OpenGL compliance + low power consumption....

    If Creative makes a card with them with OSS Linux drivers and *NO FAN* then I'm sold!
    • how about we just give you a GF2 with the XF86_SVGA server and we'll just RIP THE FAN OFF FOR YOU!

      there you go! you're welcome!

      oh, you DIDN'T want it to catch on fire...

      -c

  • It seems a bit ridiculous that it is 170 Gflops, which is faster than quite a few supercomputers and about 70 or so times faster than the nearest desktop processor. Do they mean something else, or is this really a supercomputer? Oh, and I'll throw in the customary: imagine a Beowulf cluster of these.
    • Flops = FLoating point OPerations Per Second.
    • It is a very specialized processor. It does graphics, and it does it very well. If you tried to do anything else on it, you would probably go insane trying to work around its limitations, and the resulting program would most likely run slowly.
    • by Anonymous Coward
      Yes and no -- if your only operations are multiplying 4x4 matrices and 4x1 vectors, and you pay very close attention to the programming docs, then yes, you can perform 170 billion floating-point ops per second. But it's not something you could use as a general-purpose processor.
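      To put a number on that: the workload being described is the standard vertex transform, a 4x4 matrix times a 4-component vector, which is 16 multiplies and 12 adds (28 floating-point ops) per vertex. A plain-C sketch of the operation that the peak-Gflops figure assumes is running across all those parallel units:

        /* One vertex transform: 4x4 matrix times 4-vector.
         * 16 multiplies + 12 adds = 28 floating-point ops per vertex. */
        void transform(const float m[4][4], const float in[4], float out[4])
        {
            for (int row = 0; row < 4; ++row) {
                out[row] = m[row][0] * in[0]
                         + m[row][1] * in[1]
                         + m[row][2] * in[2]
                         + m[row][3] * in[3];
            }
        }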
  • I have an old 3dfx card... armed with nasm, I know someone who has in the past written a calculator that used this card. I have also had ideas about patching MySQL to use it for indexing... has anyone else done this? i.e. with a 3dfx, GeForce, etc.? Is anyone interested in starting some form of group to look into the possibilities of patching high-end software like MySQL and Apache to take advantage of such hardware?
    • Apache, like most other server daemons, is more I/O-bound than CPU bound; patching it to use a co-processor won't do much.
    • ...patching high end software like mysql...

      High end? MySQL? BWAHAHAHA! Good one.

    • If you're willing to spend $900 on a graphics chip, wouldn't it be easier to just get a dual processor motherboard? Why try to co-opt the graphics chip to run sql queries? MySql is multi-threaded and can already use multiple cpu's if they're available.

      And what, pray tell, does armed with nasm mean?
  • Extra! Extra! Linux ported to GPU!

    Really, these things are getting massively more complicated than your ordinary P4 or Athlon.

    And think: there's one less layer between the OS and the framebuffer!
    • by Junks Jerzey ( 54586 ) on Friday May 03, 2002 @10:26AM (#3457052)
      Really, these things are getting massively more complicated than your ordinary P4 or Athlon.

      Not really, though. They have simple units, then they put a whole bunch of them on there. They don't need nonsense like branch prediction and register renaming and all that. But they certainly are complicated in their own way.
    • ...where all you have are CPU cards with whatever specialized adapter is necessary to provide the appropriate electrical connectivity to peripherals.

      Each card is basically a CPU board with its own memory. The common bus between cards is really a switch to limit card-to-card contention. One card is the bus master running the kernel. Processes can be shuttled between CPU boards as processing power is available.

      The thing is we're getting to the point where just about every PCI device has a CPU on it (NICs with encryption/acceleration engines, RAID cards). Why not just put high-speed general purpose CPUs on the cards and use it as a highly integratable/segmentable cluster?

      The actual kernel could do more scheduling and less work, since the "NIC" CPU card could theoretically run large parts of the IP stack in addition to the NIC driver, as an example.
      • Is this conceivable? Could you embed a 486 and an 8Mbit ROM onto a NIC and have it run its own TCP/IP stack?

        About how much horsepower do you think you would need to do something like IPSec? Is that handled by a secondary processor already anyway?
      • I think that's the opposite of where things are going. We'll see more and more CPUs on a single chip, along with great gobs of ram.
      • My friend and I are working on the specs for something very similar, except that in our design the bus master sat outside of the bus. Either way, we came up with an interesting system where one half of the bus operated as something resembling a token-ring network (except unlike the original it would not choke on a newly inserted card), operating in an asynchronous mode, the other half being a sort of back-door into each card's memory, so that the bus master could mediate DMA transfers of a sort. The other concepts we had were, as alluded to before, for hot-plugging cards.
        • They've had these for a while. You used to be able to get a Sun workstation with an x86 (K6-II or Cyrix, I think) processor on a PCI slot, designed for running apps like Office, etc. Now the big fad in server farms is "blade" servers, where you stick like 6 PCs-on-a-PCI-slot together inside one case. You can get them from HP and Terasoft, the parent co. of Yellow Dog Linux, I think.
        • Blade servers and whole-computer-on-a-card solutions are different than what I'm talking about.

          Blades are just space consolidators from what I've seen; there's no common bus for moving data or memory.

          Fitting a whole computer on a card and injecting the display onto the host doesn't really count either. There's usually no way to move data between the environments, and since they run incompatible processors there's no way to offload processing from the host to the card and vice versa. They're often no more than x86 emulation accelerators.

          The system I'm thinking of actually has the blades working together, sharing a common bus, potentially sharing memory as well via a NUMA-type architecture.
          • I've seen old boxes like this.

            Back in around '90-91, DEC was building SMP (up to 4-way) 486's that used a 'corollary-bus'. There were somewhere between 16 and 20 slots on a *VERY* sparse motherboard. Each card had a specific purpose: CPU cards (up to 4), memory cards (also up to 4 I think, possibly 8) and the rest were general-purpose EISA slots IIRC. Typically you'd have SCSI and something akin to a Digiboard for your pre-TCP/IP network. :-)

            BTW - didn't Digiboard RULE?! Best products and support I've ever come across.

            sedawkgrep
  • Beyond3d (Score:4, Informative)

    by linzeal ( 197905 ) on Friday May 03, 2002 @10:01AM (#3456928) Journal
    Beyond3D, home to many respected (and notorious) workers at various 3D companies such as Nvidia, ATI, and Bitboys, is discussing [216.12.218.25] this right now.
  • Bleeding? (Score:2, Interesting)

    by f00zbll ( 526151 )
    OK, read the AnandTech article, but it looks more evolutionary than revolutionary. The differences between the GeForce and this VPU will only result in a performance improvement if the drivers are good. More competition is good, even if I don't have time to play games anymore and can't justify those hefty prices for bleeding-edge video cards.

    One great feature is the virtual memory, which should improve the appearance of depth and richness of models. I wonder how many more textures designers will be able to cram onto a model? Does this mean more games will start to utilize multi-pass rendering, and that id will rewrite their engine once again for models with massive amounts of textures? I haven't kept up with the latest trends in 3D game technology, so maybe someone more informed can tell the rest of us.

  • A question... (Score:4, Insightful)

    by levik ( 52444 ) on Friday May 03, 2002 @10:06AM (#3456948) Homepage
    I mean I understand that the graphics market right now is hotter than the 1980s arms race, with companies trying to one-up each other constantly... But can somebody tell me if there are products currently on the market that take full advantage of the *current* crop of video cards?

    When Geforce3 came out it didn't have much of a clock speed increase, but boasted features that if taken advantage of by the developers would make the games look *MUCH* better. And yet, the only trend in the gaming industry that I've spotted is cranking up the poly counts.

    • Re:A question... (Score:2, Informative)

      by afidel ( 530433 )
      Yeah, Max Payne. Try full detail, 1280*1024*32bpp, 4xAA. That will push even a GF4 Ti 4600 to the point where the minimum framerate is approaching single digits. Unreal Tournament 2003 at 1280*1024*32 with no AA averages 38fps on the Ti 4600; again, the lowest frame rate is almost surely well below 30fps, so there will be times that it looks jerky. While poly counts may be the thing most touted in press releases, the thing most gamers are starting to look at is what kind of performance they can get with all the goodies turned on.
      • 1280x1024 with 4xAA is a little overkill. You can't even tell the difference between 1xAA and 4xAA when running at such a high resolution... in fact the only discernible difference will be the frame rate. Running at 1xAA will look the same and perform much better.
    • I'm hoping to receive the new game Morrowind (sequel to Daggerfall, made by Bethesda) today or tomorrow. It uses pixel shaders for rendering water. If you run it on a GF2, the water looks much worse than if you run it on a GF3.
      • Re:A question... (Score:2, Insightful)

        by Steveftoth ( 78419 )
        Which means of course that they wasted their time yet again!

        Why do the game companies think that we really care about how cool-neato-wow the water looks? If it looks vaguely like water then yeah, I'll think it's water and keep on playing the game. You don't have to wow me with water effects, modeling every drop of water as it is absorbed by my character's clothes in the game.

        BethSoft had better make this game work, unlike Daggerfall. Daggerfall sucked unless you were some insane fanboy willing to put up with the constant crashes and headaches caused by that buggy piece of crap. You had to be in love with the concept of the game to truly like it. I'm tempted to actually buy an Xbox for it because I don't trust them with my PC to play it.

        I think that these new GPUs are too powerful, as nobody can possibly generate the artwork that will use them quickly enough. It takes much longer to generate a 100,000-poly model than a 5,000-poly model in a program like 3D Studio Max (assuming that they are of equal quality). It's going to be a couple more years until we see any games really taking advantage of these new features.
    • Maya, Lightwave 3D, 3D Studio MAX.....

      The gaming industry tends to be behind the curve in utilizing the more advanced features of a card, since 90% of their audience is still using one of the several previous-generation cards out there. My aging gaming box only has a TNT2 Ultra, and I haven't noticed a sudden lack of ability to play recent titles.

      Given the price of the card it seems to be targeted at the smaller animation shops with a few animators running various 3d apps(most of which cost between 2x to 9x the $900 model per seat) on NT Boxen, rather than the Quake "I need to run at 1500 FPS for nothing more than my own phallic extension reasons" crowd.
      • >Maya, Lightwave 3D, 3D Studio MAX.....

        But are these programs limited by the power of the card, or the ability of the CPU to feed it information?

        Aside from simply supporting the features of OpenGL, are the GeForce 4Ti's slowing down the 1.9 gig Athlons, or the other way around?

        -l
    • Re:A question... (Score:4, Interesting)

      by Junks Jerzey ( 54586 ) on Friday May 03, 2002 @10:46AM (#3457166)
      I mean I understand that the graphics market right now is hotter than the 1980s arms race, with companies trying to one-up each other constantly...

      That describes the market a few years ago, but no more. These days, with GeForce 2 MXs being dirt cheap and no one having performance issues with them, no one--except neurotic geeks--gives any thought to updating their video cards.

      But can somebody tell me if there are products currently on the market that take full advantage of the *current* crop of video cards?

      The answer is an emphatic "no." I'm a game developer, and we were focusing on the Voodoo 2 as the low end until very recently. And the Voodoo 2 is still a much more powerful card than people realize, providing you work *with* it and don't just ask it to render 50,000 polygons per frame. I don't think we ever got to the bottom of the performance available in that card, and we certainly, certainly, never got anywhere near what you can really do with later cards, like the original GeForce. All of the fancy stuff you can do with the GeForce 3--mostly based around vertex shaders--is not backward compatible with 90% of the market, so we never touched it.

      Fanboys don't want to hear that their cards aren't being pushed anywhere near the limits. They are much happier to have poorly written games that have high polygon counts and bad art, because then they can justify the money they spent on a new computer and/or video card.
    • I'm pretty sure id Software will deliver quality as promised with the new Doom 3. They intend on creating worlds where everything uses the same interface: everything will have dynamic lighting, everything will have pixel shading.

      They said that the game will run at rates of about 30fps with a GF3, not due to bad architecture, but because it maximizes what the card can give. They said that high-end computers at the time of shipment will still run the game at around 60-70fps, stable.

      Anyways, until you can render stuff like Final Fantasy (the movie) in real time, you aren't there yet ;)
      • Carmack also said that Doom 3 won't have the frantic pace of the original Doom or Quake, but rather a more cinematic, moody feel. So 30 FPS should be acceptable (movies are less than that).
        • As already noted, the fact that movies run at 30 fps doesn't mean 30 fps is sufficient in computer graphics.

          Life, as it is, is fluid, "infinite FPS". When you capture life on video, you capture everything in that 1/30 of a second, including all the movement.

          If you look at a single frame of a movie, you see that everything is blurry, but when it's all running, it looks clear.

          Computer graphics, by contrast, are created frame by frame as sharp instants, so to get the maximum fluidity, like in real life, you would need this infinite FPS.

          I might have written this roughly, but I simply can't find the page where I read all about it.
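          That exposure effect can be faked in software by rendering several instants within one frame's time slice and averaging them, which is what the classic OpenGL accumulation-buffer motion-blur trick does. A minimal sketch, assuming a placeholder render_at(t) that draws the scene at time t:

            /* Accumulation-buffer motion blur (sketch): average N sub-frame
             * renders to mimic film integrating motion during its exposure.
             * render_at() is a placeholder. */
            #include <GL/gl.h>

            extern void render_at(double t);

            void draw_motion_blurred(double frame_time, double shutter, int samples)
            {
                glClear(GL_ACCUM_BUFFER_BIT);
                for (int i = 0; i < samples; ++i) {
                    double t = frame_time + shutter * i / samples;
                    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                    render_at(t);
                    glAccum(GL_ACCUM, 1.0f / samples);   /* weighted add     */
                }
                glAccum(GL_RETURN, 1.0f);                /* show the average */
            }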
    • When Geforce3 came out it didn't have much of a clock speed increase

      Not over the GF2 Ultra series, but it was a pretty big jump from the MX and GTS cards most people had. In addition to the HUGE FPS jump in games like Quake III, it had all those eye-candy programmable things that are going into things like Aquanox and The New Doom (tm). Also, the memory increase to 64 then 128 megs of DDR graphics RAM allows for insanely better Anti-Aliasing at "normal" gaming resolutions like 1024x768. The NV25 core (GF4 Ti series) increases this further, where you can turn on full-scene anti-aliasing and still get killer performance in your old games.

      I only play Quake 3 and RTCWolfenstein on a regular basis, but my GF2 GTS (on an Athlon XP 1600+) pushes a masochistic 0.3 FPS in Quake 3 demos with 4xFSAA. Testing with the new card (128 megs of 600MHz graphics RAM, I never could have imagined in 1999) shows that I'll turn on 8 way Aniso, 4xFSAA and STILL gank 60fps on my 1024x768 LCD. Starting at $199, which is my limit for a graphics card.

      And trust me, there is a TON of difference in visual quality with 4xFSAA on using a 15" LCD.

      So yes, the programmable pixel shading pales against the power of prettier pictures in your "old stand-by" games, like Q3A. (Alliteration is your friend.)
      • Do you have more fun, or is it just that it looks a bit prettier? I found out I enjoyed Doom I and (especially) Doom II more than any other game. Quake was better as it had more freedom, breaking the 2D map structure.

        Unreal was beautiful and I liked the music, so I enjoyed it on my (RIP) Voodoo II. After that, better graphics just make me bored after the initial "cool graphics" experience.

        As another guy already said, not even the Voodoo II has been maxed out yet. AA looks _definitely_ good, but are those games more fun? If the game experience (what you do, how immersive it is) doesn't get better, then better graphics just ruin the game.

        Another thought: I still like the pixels in Doom II combined with a high framerate. It's like real life through a wet lens. But a high framerate with AA and everything, if the game is not really, really well done (Unreal II level or above), just looks like a crappy movie seen through a high-quality microscope.
    • Have you ever tried playing Everquest: Shards of Luclin with full new models on (they upped the models), high quality textures, and all other effects on in a 60-80 person raid?

      I don't think a GeForce 4 4600 could handle it so yeah, they can use the processing power =).

  • Well - is it? If the boards will be 600 bucks in December, they'll start coming down around the time they need cheap boards for the PS3. I'm guessing about 2004-2005?
  • ...To people who CANNOT possibly have DX9 yet. Being on the 1st cut beta list, I was informed just yesterday, by the DX9 group, that the initial Beta for DX9 is nearing completion... LOL.
    • They build for DX9 based on the written specifications. The DX9 software might be nearing first beta just now, but the specifications and requirements for what is supposed to be in DX9 would be completed long before the software.
    • Hardware manufacturers also get copies of alpha builds to play with long before any beta testing occurs. M$ does whatever it can to ensure that there is a library of titles for their latest version of DX available the day it is released to the public.

      Also keep in mind hardware manufacturers have a lot of input into the new features implemented in DX and M$ is more than happy to bring them in to consult as the production progresses.

      It would not surprise me if the 3DLabs people have had alpha copies of DX9 to play with for a few months now.
  • but how is it for reliability? My VisionTek GeForce3 beats the hell out of my friend's off-brand GeForce 4....... besides, I just shelled out $180 for my graphics card, and tuition is coming up for summer semester.......
  • by gargle ( 97883 ) on Friday May 03, 2002 @10:27AM (#3457059) Homepage
    It's worth pointing out that Creative has bought 3d labs, and Creative's CEO Sim Wong Hoo has every intention of taking 3d Labs out on an aggressive push into the consumer 3d market. See article [asia1.com.sg].
  • I can't wait until we get some cards and reviews of those cards - but what does a press release mean?
    Absolutely nothing.
    Anyone can take any product and write a glowing press release to get everyone excited about it, but that doesn't say anything for the silicon, or its support chipsets, or its drivers when it finally reaches production.

    until then ... ... ...
  • Are the OpenGL 2.0 specifications done, yet?

    I hope having a chip out like this doesn't affect the adoption of OpenGL 2.0 by other card/chip manufacturers. I also hope OpenGL 2.0 won't be to 3DLabs what Direct3D/X is to Micro$oft.
    • Re:OpenGL 2.0 (Score:2, Informative)

      Well, it depends on who you ask. According to SGI, it's OpenGL 1.3, but a few companies call it OpenGL 2.0. OpenGL 1.3 does have some impressive advantages, so it doesn't really matter. Remember, OpenGL isn't just a specification, it's a library, and it works a lot better than DirectX releases (i.e. anything can be rendered in software, so you don't need to mess with libraries every time a game or card is released).
  • Surely the obvious thing that is missing here is a standard in the CPU instruction set. Can anyone see NVidia opting to clone 3DLabs' chips in order to maintain binary compatibility of shading algorithms? And can anyone see a games developer wanting to release source code for their shading algorithms just so that it can be runtime-compiled onto different (future) hardware architectures?
    • CPU instruction sets are a moot point. Vertex Shader instructions pass through the drivers first. OpenGL 2.0 will include a shading language which is hardware independent. The drivers will then convert this independent shading language into the machine language for their respective GPU/VPU.

      You can find a bit of info on the OpenGL 2.0 shading language over at
      3DLabs' white papers section [3dlabs.com]. There is also quite a bit more information on OpenGL 2.0 there as well.
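      As a sketch of what "the driver converts it" means in practice: the application hands the vendor-neutral shader source to the driver at run time and gets back a hardware-specific program. The entry points below are from the OpenGL 2.0 API as it eventually shipped (treat the exact names as illustrative, since the spec wasn't final at the time), and the shader itself is a trivial placeholder:

        /* Run-time shader compilation by the driver (OpenGL 2.0-style).
         * Assumes an OpenGL 2.0-capable header/loader; the shader source
         * is a trivial placeholder. The driver, not the application,
         * turns this text into the GPU/VPU's native instructions. */
        #include <GL/gl.h>

        static const char *vertex_src =
            "void main() {"
            "  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
            "}";

        GLuint build_program(void)
        {
            GLuint vs = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vs, 1, &vertex_src, NULL);
            glCompileShader(vs);        /* compiled for this vendor's hardware  */

            GLuint prog = glCreateProgram();
            glAttachShader(prog, vs);
            glLinkProgram(prog);
            glUseProgram(prog);         /* subsequent drawing uses the shader   */
            return prog;
        }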
      • The only reason one would want instruction sets to be similar between GPU/VPUs would be to develop a single driver that would function across all compatible hardware.

        A neat idea, but not feasible within a constantly evolving graphics-processor industry.
    • by marm ( 144733 ) on Friday May 03, 2002 @11:49AM (#3457548)

      OpenGL 2.0 addresses exactly your concerns - a vendor-neutral shader programming language, and this is precisely why you're seeing 3Dlabs pushing hard for it. It seems they will be first to market with a fully programmable graphics pipeline, and they need the software technology to go with it...

      DirectX 9 also addresses the same issues and provides a standard shader language (actually DirectX 8.1 has a standard shader language already, but it lacks a certain amount of the programmability that will be present in DirectX 9), but there are a lot of reasons for the graphics card vendors to favour OpenGL over DirectX. For instance:

      • There are a lot of users of high-end 3D hardware for whom Windows is anathema. Think about all the effects shops that traditionally have used IRIX and are now moving over to Linux... DirectX ties the cards to Windows, OpenGL does not. This is a growing, and more importantly, prestige market for high-end PC 3D vendors... Linux is bringing them to the PC from SGI/IRIX solutions, and is bringing them sales with it. I think NVIDIA understand this one, just a shame few of the other 3D vendors do yet...
      • There are an awful lot of 3D apps that are heavily tied into OpenGL and rewriting them for DirectX would be a serious undertaking, whilst modifying them for OpenGL 2.0 to take advantage of the new shader features and extra programmability of the graphics pipeline will be a relatively simple task in comparison.
      • What if Microsoft decided to get into the 3D market by buying one of the existing major players? Sure, Microsoft might be responsive to the 3D vendors now, but I suspect they wouldn't be if they had a vested interest in one of the players. Perhaps it seems unlikely, but it seems Microsoft has ambitions in the hardware business - witness the X-Box. It's a doomsday scenario from the point of view of the 3D vendors, sure, but no doubt it's something that a few vendors have thought about.
      • Even if Microsoft doesn't do such a thing, OpenGL allows the 3D vendors room to breathe - they can implement new features as they please without Microsoft having to give them the nod.

      Hopefully OpenGL 2.0 will see a resurgence in OpenGL use. I don't like the idea of the 3D market being controlled by Microsoft, and I don't think the 3D vendors do either. Kudos to 3DLabs for leading the way!

  • TI 34010... (Score:4, Insightful)

    by BaronM ( 122102 ) on Friday May 03, 2002 @10:52AM (#3457213)
    ...was the first PC-market, fully programmable graphics chip, as far as I know.
    Any website proclaiming full programmability as new or revolutionary is simply demonstrating a lack of historical knowledge. 34010/34020-based boards competed with the first-gen fixed-function graphics accelerators for Win 3.x, but couldn't compete on price/performance with the fixed-function BitBLT engines from S3 et al., and the flexibility of being fully programmable meant nothing to PC users who were accustomed to dumb EGA/VGA cards.
    • Re:TI 34010... (Score:3, Interesting)

      by morcheeba ( 260908 )
      That brings back memories!

      The 34010 kicked butt! It was used by Atari's Hard Drivin' game. It had a lot of neat features, including hardware X/Y addressing (i.e. move x,y,pixel), bit-level addressing (you could twiddle any bit in memory, or write a word/byte on any boundary), and built-in simple graphics operations (copy a block of memory, XOR source & destination, use the larger of the two, subtract, union, difference, add but don't overflow, etc.)

      But what was *REALLY* cool was the math coprocessor, the 34020. It was blazingly fast (almost, but not quite as fast as the industry-crushing i860 IIRC), but it featured a programmable microcode so you could create your own instructions and get every ounce of performance out of the machine. I'm still looking for a processor that will allow that... we're getting those with modern NPUs (cradle [cradle.com], intel IXP1200 [intel.com]), but these generally lack floating point functionality.
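      For anyone who never touched one of those chips: the "built-in simple graphics operations" are essentially raster ops - combine a source block with a destination block using some logical or arithmetic rule. A plain-C sketch of the XOR case (types and strides are illustrative):

        /* Software equivalent of a BitBLT with an XOR raster op. */
        #include <stdint.h>
        #include <stddef.h>

        void blit_xor(uint8_t *dst, size_t dst_stride,
                      const uint8_t *src, size_t src_stride,
                      size_t width, size_t height)
        {
            for (size_t y = 0; y < height; ++y)
                for (size_t x = 0; x < width; ++x)
                    dst[y * dst_stride + x] ^= src[y * src_stride + x];
        }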
  • End of VGA (Score:2, Offtopic)

    by skroz ( 7870 )
    The article on Tom's mentions the end of VGA as the common denominator for video, but mentions no replacement. So what's the new standard? When it comes to a standard video format, we really need to have SOMETHING common to all (most) platforms...
    • Re:End of VGA (Score:2, Interesting)

      by orz ( 88387 )
      I don't know. But I do recall from reading the VESA 2.0 standard a while ago, that VESA 2.0 compliance does not require VGA compatibility. That would be a possible route to go.
    • I think Tom's mention of the "End of VGA" is more metaphorical and perhaps wishful rather than practical. 3DLabs' P10 VPU still has VGA support.

      I think Tom mentions this simply because the P10's ability to handle multiple requests is a good solution to the requirements set forth for M$'s next-gen GUI, Longhorn.

      The P10 shows Longhorn is possible and that VGA is no longer needed. This is the "End of VGA". However, I'd expect legacy support for VGA in video cards for a long time to come.

  • how many next-gen cards have come out since the Bitboys said they'd reached the silicon stage and were pursuing a fab plant? hahahaha... Oy!
  • a company that is actually innovating and producing a new product rather than repackaging the old one with more memory and a ramped clock speed *cough* NVIDIA *cough*
  • From the article at Tom's [tomshardware.com]

    blockquoth the poster (evermore with emphasis added):


    It won't be until Creative Labs has fully acquired 3Dlabs, and is ready to announce its P10 boards for Christmas 2002 that we will know how the P10 is going to impact the mainstream desktop and the gamer, although 3Dlabs is convinced that the Creative P10 boards will be competitive with Nvidia and ATi products on the market at that time. Knowing Creative's sales muscle and reach, a Creative graphics board needs only to be competitive, and not necessarily better, in order to be a viable alternative to the two horse race we have right now.

    However, there are some concerns. Creative has tried repeatedly to establish a strong foothold in the graphics business and has been pulled in and out of the market, particularly in North America. 3Dlabs has been aiming to find a way into the mainstream with its technology for a number of years and has repeatedly fallen short of delivering a competitive product. Can this marriage work?


    Now then, the emphasized bits raise the question: why has Creative gained and lost its footholds in these areas?

    For this Creative customer, the reason is and has always been (across all product lines) one, very important issue: Software.

    When and where the Creative development machine manages to mate decent, uncluttered, non-glitzy, tweakable, and trouble-free software (very, very seldom, IMO) to the excellent-to-amazing hardware that they are deservedly famous for, the results have been very good indeed.

    However, in the normal course of events, Creative's hardware ships with installation programs, drivers, ancillary programs, updaters, bundled "features", and enough just outright useless crap to annoy any self-respecting consumer. And while I admit that this occurs largely on the Windows platforms, you should admit to yourself that that's Creative's largest area of concern. Fortunately, they haven't yet figured out that they could push enough Creative ad-ware to sicken a telemarketer drone into the driver packages for other platforms.

    So, in this reader's experience, the issue is simple. Too much software that users don't want or need, too many features that won't work without all the glitzy junk (anyone like using the LiveDrive product? It's great, but the software to make it worthwhile--the remote control--is a cast-iron bitch: crashy, seldom updated, and too tied to useless trash in the installation). These issues seem somewhat prevalent across Creative's product lines, and they're killers.

    Fortunately, the answer is simple. Creative needs to give the people who buy their hardware good, stable, and full-featured drivers without the need for a dozen attendant Creative-logo-displaying bits of crapware. If that part's impossible, then it'd at least be nice to be able to grab reference drivers from the chipset manufacturer (how many people don't use NVidia's Detonator drivers in favor of the card vendor's?).


    Failing those... license the hardware designs to vendors who'll give us good, honest, and stable software. Of course, they can always continue to lose business to the competition; after all, it's... "good for the market".

  • is releasing a redesigned chip in August [com.com]? Traditionally, Nvidia has been releasing new chips in the spring and then introducing a beefed-up version of the same chip in the fall. This year they are introducing a "fundamentally new architecture" only 6 months after they announced the GeForce4. My guess is that they had a feeling that ruling OpenGL 1.3 and DX8.1 isn't enough, and that this next chip will keep them competitive with upcoming chips like this new one from 3Dlabs.

    but that's speculation.
