
New Multi-GPU Technology With No Strings Attached

Vigile writes "Multi-GPU technology from both NVIDIA and ATI has long been dependent on many factors, including specific motherboard chipsets, and has forced gamers to buy similar GPUs within a single generation. A new company called Lucid Logix is showing off a product that could potentially allow vastly different GPUs to work in tandem while still promising near-linear scaling on up to four chips. The HYDRA Engine is dedicated silicon that dissects DirectX and OpenGL calls and modifies them directly to be distributed among the available graphics processors. That means the aging GeForce 6800 GT card in your closet might be useful once again and the future of one motherboard supporting both AMD and NVIDIA multi-GPU configurations could be very near."
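
The summary's description of dedicated silicon that "dissects DirectX and OpenGL calls" and distributes them among the available graphics processors can be pictured with a small sketch. The toy dispatcher below simply sends each intercepted call to whichever card currently has the smallest estimated backlog; the class names, cost model, and card speeds are all invented for illustration and say nothing about how Lucid's HYDRA silicon actually works.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DrawCall:
    """A captured DirectX/OpenGL call plus enough state to replay it."""
    name: str
    cost_estimate: float  # rough GPU cost, arbitrary units

@dataclass
class GpuQueue:
    """Per-GPU work queue; relative_speed models how fast the card is."""
    label: str
    relative_speed: float
    pending: List[DrawCall] = field(default_factory=list)

    def backlog(self) -> float:
        """Estimated time this card needs to drain its queue."""
        return sum(c.cost_estimate for c in self.pending) / self.relative_speed

def dispatch(calls: List[DrawCall], gpus: List[GpuQueue]) -> None:
    """Send each call to the GPU with the smallest backlog, so a slower
    card naturally ends up with fewer calls per frame."""
    for call in calls:
        target = min(gpus, key=GpuQueue.backlog)
        target.pending.append(call)

frame = [DrawCall(f"draw_{i}", cost_estimate=1.0) for i in range(12)]
cards = [GpuQueue("aging 6800 GT", relative_speed=1.0),
         GpuQueue("newer card", relative_speed=3.0)]
dispatch(frame, cards)
for card in cards:
    print(card.label, "received", len(card.pending), "calls")

With these made-up speeds the newer card ends up with roughly three quarters of the calls, which is the sort of near-linear scaling the summary promises.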
  • No strings? (Score:5, Funny)

    by Plantain ( 1207762 ) on Tuesday August 19, 2008 @08:12PM (#24666643)

    If there's no strings, how are they connected?

  • AMD and NVIDIA?? (Score:2, Interesting)

    by iduno ( 834351 )
    Is that supposed to be ATI and NVIDIA?
  • Interesting (Score:5, Informative)

    by dreamchaser ( 49529 ) on Tuesday August 19, 2008 @08:12PM (#24666649) Homepage Journal

    I gave TFA a quick perusal and it looks like some sort of profiling is done. I was about to ask how it handled load balancing when using GPUs of disparate power, but perhaps that has something to do with it. It may even run some type of micro-benchmarks to determine which card has more power and then distribute the load accordingly.

    I'll reserve judgement until I see reviews of it really working. From TFA it looks like it has some interesting potential capabilities, especially for multi-monitor use.

    • Re:Interesting (Score:4, Insightful)

      by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Tuesday August 19, 2008 @08:19PM (#24666731) Homepage Journal

      It seems to be using feedback from the rendering itself. If one GPU falls behind, it sends more work to the other GPU. It may have some kind of database of cards to prime the algorithm, but there's no reason it has to run extra benchmarking jobs.

      • True, but I was thinking that the software could profile/benchmark both cards at startup. What you say makes sense though, and like I said I only gave TFA a cursory viewing.

        • Especially since different cards will perform different operations faster than others. One I specifically ran into was the nVidia 5(6?)600, which, although a fast card at the time, had a PS2.0 implementation so slow as to be practically unusable; many games overrode it and forced it back to PS1.1.

      • Re:Interesting (Score:5, Informative)

        by x2A ( 858210 ) on Tuesday August 19, 2008 @09:00PM (#24667093)

        "It seems to be using feedback from the rendering itself"

        Yep, it does look like it's worked out dynamically; the article states that you can start watching a movie on one monitor while a scene renders on another, and it will compensate by sending fewer tasks to the busy card. The simplest way I'd assume to do this would be to keep feeding tasks into each card's pipeline until the scene is rendered. If one completes tasks quicker than the other, it will get more tasks fed in. I guess you'd either need to load the textures into all cards, or the choice of which card renders a given section of the scene could be decided in part by which card already has the textures it needs in its texture memory.

        I guess we're not gonna know a huge amount as these are areas they're understandably keeping close to their chests.
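
A rough sketch of the feedback idea argent and x2A describe above: no up-front benchmark, just adjust each card's share of the next frame based on how long it took to finish the last one. The class, its smoothing factor, and the timings below are invented for illustration; this is an assumption about how such a scheme could work, not a description of HYDRA.

class ShareBalancer:
    """Rebalances per-GPU workload shares from measured frame times."""

    def __init__(self, gpu_labels, smoothing=0.3):
        # Start with an even split; feedback pulls it toward each card's
        # real throughput over the next few frames.
        self.shares = {g: 1.0 / len(gpu_labels) for g in gpu_labels}
        self.smoothing = smoothing  # how aggressively to react to new timings

    def update(self, frame_times_ms):
        """frame_times_ms maps each GPU to the time it took for its slice of
        the last frame. A card that is also decoding a movie on another
        monitor reports a longer time and gets a smaller slice next frame."""
        throughput = {g: self.shares[g] / t for g, t in frame_times_ms.items()}
        total = sum(throughput.values())
        for g in self.shares:
            target = throughput[g] / total
            self.shares[g] += self.smoothing * (target - self.shares[g])
        return self.shares

balancer = ShareBalancer(["fast card", "old 6800 GT"])
print(balancer.update({"fast card": 8.0, "old 6800 GT": 24.0}))
# The first adjustment nudges the split toward ~75/25, and repeated frames
# settle there as long as the measured times keep reflecting each card's speed.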

    • Perhaps it keeps shovelling work to each GPU until it notices an increase in time getting the results back, and then surmises that GPU is at its maximum load?

    • Re: (Score:3, Interesting)

      by TerranFury ( 726743 )

      I gave TFA a quick perusal

      FYI, this is a very common mistake in English, and loath though I am to be a vocabulary Nazi, I think pointing it out here might benefit other Slashdot readers:

      The verb "to peruse" means to read thoroughly and slowly. It does not mean "to skim" -- quite the opposite!

      (Unfortunately, it seems that even the OED is giving up this fight, so maybe I should too.)

      That's it for this post. Cheers!

    • Re:Interesting (Score:5, Insightful)

      by ozphx ( 1061292 ) on Wednesday August 20, 2008 @01:12AM (#24668983) Homepage

      Put it this way, if it was a disparate CPU multiprocessor board, and the summary said "Perhaps my p4 will now be useful again", everyone would be laughing.

      A 6800GT would be an insignificant fraction of a new card, and would still be under 10% of a $50 no-name 8-series (while still sucking down the same wattage).

      Considering that matched SLI is usually a waste of money - you can buy that second card in a year's time when your current one shows age, and end up with a better card than twice your previous one (and one supporting Shader Model Super Titrenderer 7, which your old card can't do) - I'm not sure how this is going to be of much benefit.

      If it were useful to jam a bunch of cheap chips in, then the card manufacturers would be doing it on more of a scale than the "desk heater dual" cards (which are basically SLI-in-a-box) at double price. You can't get a card with 4x 6800 chips at $5 each, because they'd be destroyed by the $10 8-series chip on a $50 card.

      • "Put it this way, if it was a disparate CPU multiprocessor board, and the summary said "Perhaps my p4 will now be useful again", everyone would be laughing."

        I'm sure the article was referring to using same-generation tech with different GPUs, and the fact that you could use cards of different speeds/generations was just a spin-off, unless there is actually some kind of application where this is useful.

      • by frieko ( 855745 )
        Where can I get more information about this "Super Titrenderer 7"?
      • Re: (Score:3, Interesting)

        An especially troublesome aspect of pairing mis-matched cards: whatever this technology is, it can't use alternate-frame rendering (AFR, where alternating frames are sent to alternating cards).

        You can't make AFR work with an unbalanced pair of cards because the fastest framerate you can get is limited to 2x the speed of the slowest card. Let me explain: if you paired a new card (100fps) with one that had half its speed (50fps), then with perfect scaling you could technically get 150fps from the pair. But AFR forces the cards to take turns, so the pair would top out at 2 x 50 = 100fps, which is no better than the new card running alone.
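
Spelling out the arithmetic in the comment above (the 100fps and 50fps figures are the same hypothetical pair of cards):

fast_fps, slow_fps = 100.0, 50.0

ideal_fps = fast_fps + slow_fps            # perfect scaling: 150 fps
afr_ceiling = 2 * min(fast_fps, slow_fps)  # frames alternate, so the pair
                                           # is paced by the slower card
print(f"ideal: {ideal_fps:.0f} fps, AFR ceiling: {afr_ceiling:.0f} fps")
# ideal: 150 fps, AFR ceiling: 100 fps -- no better than the fast card alone,
# which is why a mismatched pair needs some split other than frame alternation.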

    • Re: (Score:3, Interesting)

      What about the fact that different cards give different results for texture filtering? Specifically their choice of mip level and anisotropic filtering. Think circle vs square differences.

      Hell, some cards implement anti-aliasing differently to others.

  • Someone port java to opengl.

    Seriously. That would rock.

    • Re: (Score:3, Insightful)

      by jfim ( 1167051 )

      Someone port java to opengl.

      Seriously. That would rock.

      You can use OpenGL from Java with JOGL [java.net]. Or were you thinking of running a Java or J2EE stack on your GPU? (That would be a really bad idea, in case there were any doubts)

      • Re: (Score:3, Funny)

        by billcopc ( 196330 )

        No no, you're thinking "port OpenGL to Java". I want to see a Java VM written in OpenGL shader language.

        Maybe having 384 lame little stream processors will make Java fast enough to compete with... um... Borland Pascal on a Cyrix 386.

      • by rbanffy ( 584143 )

        It really depends on the architecture of the GPU's processing elements. An Intel Larrabee would have little trouble with Java. A Cell SPU port would be a bitch to do.

  • by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Tuesday August 19, 2008 @08:16PM (#24666693) Homepage Journal

    Can it work with Linux or OS X?

    • by arekusu ( 159916 )

      (TFA:) ...the operating system prevents multiple graphics drivers from running 3D applications at the same time.

      Multiple GPUs from different vendors could work if they ported this technology to OS X... where multiple graphics drivers have happily coexisted for years.

      (Getting an arbitrary application to understand it is running on multiple GPUs is a whole separate problem...)

  • by Anonymous Coward
    Ack! Will this outdate my math coprocessor?
  • ...

    All your GPU are belong to Lucid!

    (sorry guys)

  • by Daimanta ( 1140543 ) on Tuesday August 19, 2008 @08:29PM (#24666837) Journal

    what is attached though:

    ints
    booleans
    longs
    shorts
    bytes

  • by TibbonZero ( 571809 ) <Tibbon&gmail,com> on Tuesday August 19, 2008 @08:30PM (#24666839) Homepage Journal
    So it's obvious that these cards could have been working together now for some time. They aren't as incompatible as AMD and NVidia would like us to think. Of course this leaves only one course of action; they must immediately do something "weird" in their next releases to make them no longer compatible.
    • I thought that too, hopefully NVidia turns around and says "Ok, here is a cheaper version of model xyz, stripped down a bit, so you can afford to buy 4 from us for this little trick"

      I'm not holding my breath but it would be worth the money methinks, even if they only sold them in 4 packs.

    • by im_thatoneguy ( 819432 ) on Tuesday August 19, 2008 @09:06PM (#24667135)

      Don't you mean weird(er)?

      The reason NVidia requires such symmetrical cards isn't just speed and frame buffer synchronization, but also that different cards render the same scene slightly differently. This is the reason why OpenGL rendering isn't used very often in post production. You can't have two frames come back to you with slightly different gammas, whitepoints, blending algorithms, etc.

      I'm actually very, very curious how they intend to resolve every potential source of image inconsistency between frame buffers. It seems like it would almost have to use the 3D cards abstractly as a sort of CPU-acceleration unit, not an actual framebuffer generator.

      • by LarsG ( 31008 ) on Tuesday August 19, 2008 @09:41PM (#24667371) Journal

        From the screenshots and the description, it sounds like this thing takes the d3d (or ogl) instruction stream, finds tasks that can be done in parallel, and partitions them across several cards. Then it sends each stream to a card, using the regular d3d/ogl driver for that card. At the back end it merges the resulting framebuffers.

        What I'd like to know is how they intend to handle a situation where the gpus have different capabilities. If you have a dx9 and a dx10 card, will it fall back to the lowest common denominator?

        Also, what about cards that produce different results? Say, two cards that do anti-aliasing slightly differently. The article says that Hydra will often change the work passed off to each card (or even the strategy for dividing work amongst the cards) on a frame-by-frame basis. If they produce different results you'd end up with flicker and strange artefacts.

        Sounds like interesting technology but unless they get all those edge cases right...
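
LarsG's "at the back end it merges the resulting framebuffers" step is easy to picture with a toy compositor. In the sketch below each GPU's output is just a rectangle of RGB bytes pasted into the final frame; in reality the merge would happen on the GPU or over the bus, so this only illustrates the bookkeeping, not HYDRA's actual mechanism.

def merge_tiles(width, height, tiles):
    """tiles: list of ((x, y, w, h), pixels) pairs, pixels in row-major RGB.
    Copies each GPU's finished rectangle into the final frame."""
    frame = bytearray(width * height * 3)
    for (x, y, w, h), pixels in tiles:
        for row in range(h):
            dst = ((y + row) * width + x) * 3
            src = row * w * 3
            frame[dst:dst + w * 3] = pixels[src:src + w * 3]
    return bytes(frame)

# Two GPUs each rendered half of a tiny 4x2 frame: left half red, right half blue.
left = ((0, 0, 2, 2), bytes([255, 0, 0] * 4))
right = ((2, 0, 2, 2), bytes([0, 0, 255] * 4))
assert len(merge_tiles(4, 2, [left, right])) == 4 * 2 * 3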

        • The way I interpreted this was: say you have an 800x600 screen and 4 GPUs. It would render the screen in four pieces (say, 400x300 quadrants), one on each GPU, and piece them together. So in theory it wouldn't necessarily matter how each card renders its portion (DX9 vs DX10), just that it does its part of the screen. Sure, artifacts will exist, but so be it for the speed boost of this new technology?
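
The screen-splitting interpretation above can be sketched in a few lines. The helper below carves the frame into horizontal strips weighted by each card's (made-up) relative speed, so a slower GPU gets a smaller slice; it is purely illustrative and says nothing about how HYDRA really divides work.

def split_by_speed(width, height, relative_speeds):
    """Carve the frame into horizontal strips whose heights are proportional
    to each GPU's relative speed, so a slow card gets a smaller strip."""
    total = sum(relative_speeds)
    strips, y = [], 0
    for i, speed in enumerate(relative_speeds):
        if i < len(relative_speeds) - 1:
            h = round(height * speed / total)
        else:
            h = height - y  # last strip absorbs any rounding leftovers
        strips.append((0, y, width, h))
        y += h
    return strips

print(split_by_speed(800, 600, [1, 1, 1, 1]))  # four equal 800x150 strips
print(split_by_speed(800, 600, [3, 1]))        # [(0, 0, 800, 450), (0, 450, 800, 150)]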
    • by x2A ( 858210 )

      "aren't as incompatible as AMD and NVidia would like us to think"

      Nope, only incompatible enough to need extra hardware, translation layers, drivers, and all the R&D required to produce 'em...

    • You can already pair up unmatched AMD cards as long as they both have crossfire capability. I can't recall the exact details offhand, but I remember reading benchmarks of it on Anandtech some time ago.
  • My god... (Score:4, Interesting)

    by bigtallmofo ( 695287 ) * on Tuesday August 19, 2008 @08:31PM (#24666865)
    Next you'll need a 1,000 watt power supply just to run your computer. How long until my home computer is hooked up to a 50 amp 240 volt line?

    I mean, if one GPU is good and two GPUs are better, does that mean 5 are fantastic?

    I used to have a Radeon 1950 Pro in my current system, which is nowhere near the top of the scale in video cards (in fact, it's probably below even average). It was loud, and it literally doubled the number of watts my system drew while running (measured by Kill-a-Watt). I took it out and now just use the integrated Intel graphics adapter. Man, that's fast enough for me, but then I don't play games very often.
    • by Kjella ( 173770 )

      Power saving is as usual reserved for the higher margin laptop market, much like advanced power states on the CPU. There's no good reason why a card should have to draw that much all the time when idle.

    • Re:My god... (Score:4, Insightful)

      by x2A ( 858210 ) on Tuesday August 19, 2008 @09:30PM (#24667311)

      "How long until my home computer is hooked up to a 50 amp 240 volt line?"

      Mine already is... how do you usually power your computer?

      • Re:My god... (Score:4, Informative)

        by Bureaucromancer ( 1303477 ) on Tuesday August 19, 2008 @10:23PM (#24667719)
        Presumably he's North American; we (among a few other places) use 120V lines, and 240V is reserved for special high-power circuits, generally only used for dryers, stoves and refrigerators (and even then only some of the first two).
      • It might be 240 volt, but if it's 50 amps at the wall socket you might just be doing it wrong. I very seriously doubt your computer needs 12,000 VA of power.

        • by x2A ( 858210 )

          You are most correct; I recognised the 240V bit but had no idea about the amps, so my brain just skipped that part :-p The fuse in the plug is rated much lower than that, so of course the comp's not drawing anything like that amount, or the fuse would just blow :-)

  • GeForce 6800 GT (Score:5, Insightful)

    by Brad1138 ( 590148 ) * <brad1138@yahoo.com> on Tuesday August 19, 2008 @08:36PM (#24666913)
    How many people feel this is an old card that should be in a closet? If you're not a hard-core gamer, that is a very good video card. My fastest card (out of 4 comps) is a 256MB 7600GS (comparable to a 6800GT) on an Athlon 2500+ w/1 gig mem. Plays all the games I want without a prob and is more than fast enough to run any non-game app.
    • I have a 6600GTS with a stock overclock, which plays games well on my old AMD 4000+ PC. While I paid a lot of $$ back in the day, the card is worth $40 now.

      I can't help but think this chip would cost more than it's worth. I like the idea, but I also liked the idea that I could purchase another 6600 GTS and not lose the investment in my original. That didn't happen for me. I am much better off purchasing a current-generation card than buying two 6600GTS cards.

      So the question is how much will it cost to be able to kee

    • Re:GeForce 6800 GT (Score:5, Insightful)

      by Pulzar ( 81031 ) on Tuesday August 19, 2008 @09:20PM (#24667223)

      Plays all the games I want without a prob and is more than fast enough to run any non-game app.

      *Any* app? Try getting an HD camcorder and editing some video of your kid/dog/girlfriend/fish and see how well your PC does. It's easy to make generalizations about other people based on personal experience. Resist the urge to do it.

      • It's easy to make generalizations about other people based on personal experience. Resist the urge to do it.

        You might as well tell a fish not to swim or a bird not to fly. Humans always, constantly, must make generalizations based upon personal experience, because they haven't experienced life as anybody else (excluding believers of reincarnation, of course). Next time, just politely correct them without the snarky comment. [xkcd.com]

    • It seems pretty ridiculous to characterise anyone who wants to play a new game at a playable framerate as a 'hardcore gamer'. Mass Effect, Bioshock, Supreme Commander, Rainbow Six Vegas and so on are all good games for both casual and more frequent players, but would be completely unplayable on that hardware. Anyone wanting to play those games would probably feel that way.
      • Here [evilavatar.com] are the system specs for BioShock (my system meets their minimum requirements), and here [direct2drive.com] are the requirements for R6 Vegas (my video card meets their minimum and my CPU is barely under); I didn't check the other games. There may be a couple of games I can't play, but there are thousands I can.
        • by Pulzar ( 81031 )

          Minimum system requirements mean the game will not die on you when you run it, not that it will be playable in any reasonable sense :). Recommended system requirements are what is, well, recommended in order to properly play the game.

          And BioShock is already a year-old game!

    • I actually do have a 6800 GT just sitting in my closet. Normally I rotate video cards down through my older computers when I buy a new one (since video cards tend to become obsolete a lot faster than the rest of the computer), but I only have one computer with PCI Express.

      The 6800 GT is/was a great card, but I wanted to be able to play Oblivion, Mass Effect, and yes, Crysis, at full monitor resolution. Good video cards are not as expensive as they once were; I ended up getting an 8800 GT in the spring for aroun
  • by RyanFenton ( 230700 ) on Tuesday August 19, 2008 @08:38PM (#24666927)

    Power supply units can only supply so much power, and even before you hit that limit they can cause interesting system instability.

    Also, given the rising cost of energy, it might be worth buying a newer-generation card just for the sake of saving the energy that would be used by multiple older generations of graphics cards. Not that newer cards use less energy in general - but multiple older cards being used to approximate a newer card would use more energy.

    I guess power supplies are still the underlying limit.

    As an additional aside, I'm still kind of surprised that there haven't been any Lego-style system component designs. Need more power supply? Add another Lego that has a power input. Need another graphics card? Add another GPU Lego. I imagine the same challenges that went into making this Hydra GPU thing would be faced in making a generalist system layout like that.

    Ryan Fenton

    • That's why there are bigger power supplies. I can find 1200W ones without any trouble, or even a 2000W one at a major online store, though that one would be tricky to fit into most cases as it's double-high, not to mention the thing will run you $650.

    • While TFA does not give any prices, the Hydra chip and a mainboard with multiple PCIe x16 slots (the high-bandwidth variety you want for graphics cards) cost extra money too.

      As an example, the Asus M3A32-MVP Deluxe (4 PCIe x16 slots) costs 144 Euros at my preferred online store, while cheaper ASUS boards with only one PCIe x16 slot cost 60-70 Euros. Add the Hydra chip, and I guess you'll end up with a price difference of over 100 Euros. Which will pay for a midrange modern graphics card that equals

  • by Anonymous Coward

    Why does the summary include two links to the same article? If there are two links, shouldn't there be two articles?

    And why does the summary link the phrase "allow vastly different GPUs to work in tandem?" Not only isn't it a quote from the article, it actually contradicts the article. The article says "To accompany this ability to intelligently divide up the graphics workload, Lucid is offering up scaling between GPUs of any KIND within a brand (only ATI with ATI, NVIDIA with NVIDIA)." How did anyone get "vastly different GPUs" from this?

    • by Michael Hunt ( 585391 ) on Tuesday August 19, 2008 @09:32PM (#24667329) Homepage

      > How did anyone get "vastly different GPUs" from this?

      Presumably because (for example) a G70-based 7800 and a G92-based 8800GT are vastly different GPUs.

      G70, for example, had two sets of fixed-purpose pipeline units (one of which ran your vertex programs and one of which ran your fragment programs), a bunch of fixed-function logic, some rasterisation logic, etc.

      On the other hand, G80 and above have general-purpose 'shader processors', any of which can run any pipeline program (and, afaik, the traditional graphics fixed-function pipeline runs in software on said SPs), and a minimal amount of glue to hang it together.

      About the only thing that current-generation GPUs and previous-generation GPUs have in common is the logo on the box (this applies equally to AMD parts, although the X19xx AMD cards, I'm told, are more similar to a G80-style architecture than a G70-style architecture, which is how the F@H folks managed to get it running on the X19xx before G80 was released).

  • No strings. (Score:5, Funny)

    by jellomizer ( 103300 ) on Tuesday August 19, 2008 @08:51PM (#24667027)

    But it looks like it will need plenty of threads to work though.

  • Well, is it?
  • That means the aging GeForce 6800 GT card in your closet might be useful once again and the future of one motherboard supporting both AMD and NVIDIA multi-GPU configurations could be very near.

    This statement shows a lack of understanding of how video card hardware works. You could never combine the different platforms to render a single image. You'd be able to tell that there were two separate rendering implementations. Think back long ago, to the days when quake 3 was just coming out... ATI had a very lou
