New Multi-GPU Technology With No Strings Attached

Vigile writes "Multi-GPU technology from both NVIDIA and ATI has long come with strings attached, requiring specific motherboard chipsets and forcing gamers to buy similar GPUs from a single generation. A new company called Lucid Logix is showing off a product that could potentially allow vastly different GPUs to work in tandem while still promising near-linear scaling on up to four chips. The HYDRA Engine is dedicated silicon that dissects DirectX and OpenGL calls and modifies them directly so they can be distributed among the available graphics processors. That means the aging GeForce 6800 GT card in your closet might be useful once again, and the future of one motherboard supporting both AMD and NVIDIA multi-GPU configurations could be very near."
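
For readers who want a concrete picture of what "dissecting and distributing" API calls could mean in practice, here is a minimal, purely illustrative Python sketch. It is not Lucid's algorithm: the DrawCall/Gpu classes, the triangle-count cost proxy, and the relative-speed weights are all invented for the example.

from dataclasses import dataclass

@dataclass
class DrawCall:
    object_id: int
    triangles: int           # rough cost proxy for this call

@dataclass
class Gpu:
    name: str
    relative_speed: float    # e.g. normalized triangles/sec

def split_frame(calls, gpus):
    """Greedily assign each call to whichever GPU would finish it soonest,
    given the work already queued on it (a longest-task-first heuristic)."""
    pending = {g.name: 0.0 for g in gpus}
    buckets = {g.name: [] for g in gpus}
    for call in sorted(calls, key=lambda c: c.triangles, reverse=True):
        target = min(gpus,
                     key=lambda g: (pending[g.name] + call.triangles) / g.relative_speed)
        buckets[target.name].append(call)
        pending[target.name] += call.triangles
    return buckets

if __name__ == "__main__":
    frame = [DrawCall(i, t) for i, t in enumerate([900, 400, 350, 300, 120, 80])]
    cards = [Gpu("GeForce 6800 GT", 1.0), Gpu("GeForce 8800 GT", 3.0)]
    for name, calls in split_frame(frame, cards).items():
        print(f"{name}: {len(calls)} calls, ~{sum(c.triangles for c in calls)} triangles")

In this toy run the slower card ends up with roughly a quarter of the triangles, which is the kind of lopsided split an old 6800 GT would need in order to contribute alongside a much faster part.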
  • Interesting (Score:5, Informative)

    by dreamchaser ( 49529 ) on Tuesday August 19, 2008 @08:12PM (#24666649) Homepage Journal

    I gave TFA a quick perusal, and it looks like some sort of profiling is done. I was about to ask how it handles load balancing when using GPUs of disparate power, but perhaps that has something to do with it. It may even run some type of micro-benchmark to determine which card has more power and then distribute the load accordingly.

    I'll reserve judgement until I see reviews of it really working. From TFA it looks like it has some interesting potential capabilities, especially for multi-monitor use.
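
If the profiling this comment speculates about really does boil down to micro-benchmarks, the logic could be as simple as the following hypothetical Python sketch: time the same fixed workload on each card, then split future work in proportion to measured throughput. The fake workloads here just burn CPU time and only stand in for whatever a real profiler would measure.

import time

def benchmark(render_fixed_workload):
    """Return a throughput score: work units per second on this 'card'."""
    start = time.perf_counter()
    render_fixed_workload()
    return 1.0 / (time.perf_counter() - start)

def split_ratios(scores):
    total = sum(scores.values())
    return {card: score / total for card, score in scores.items()}

if __name__ == "__main__":
    # Fake "cards" that just burn different amounts of CPU time.
    fake_cards = {
        "old card": lambda: sum(i * i for i in range(2_000_000)),
        "new card": lambda: sum(i * i for i in range(500_000)),
    }
    scores = {name: benchmark(fn) for name, fn in fake_cards.items()}
    for name, share in split_ratios(scores).items():
        print(f"{name}: ~{share:.0%} of each frame's work")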

  • No (Score:5, Informative)

    by x2A ( 858210 ) on Tuesday August 19, 2008 @08:15PM (#24666675)

    ATI were bought out by AMD, so future ATI GPUs will be released by AMD.

  • by RyanFenton ( 230700 ) on Tuesday August 19, 2008 @08:38PM (#24666927)

    Power supply units can only deliver so much power, and as you approach that limit they cause interesting system instability.

    Also, given the rising cost of energy, it might be worth buying a newer-generation card just for the sake of saving the energy that would be used by multiple older generations of graphics cards. Not that newer cards use less energy in general -- but multiple older cards being used to approximate a newer card would use more energy.

    I guess power supplies are still the underlying limit.

    As an additional aside, I'm still kind of surprised that there haven't been any Lego-style system component designs. Need more power supply? Add another brick that has a power input. Need another graphics card? Add another GPU brick. I imagine the same challenges that went into making this Hydra GPU thing would be faced in making a generalist system layout like that.

    Ryan Fenton

  • Re:Latency. (Score:5, Informative)

    by Mprx ( 82435 ) on Tuesday August 19, 2008 @08:43PM (#24666975)

    I agree this is a common problem in modern games; see http://www.gamasutra.com/view/feature/1942/programming_responsiveness.php [gamasutra.com]

    Don't confuse control latency with reaction time. Reaction time will be at least 150ms for even the best players, but humans can notice time delays much smaller than their best reaction time. A good rhythm game player can hit frame-exact timing at 60fps -- a 17ms time window. With low enough latency the game character feels like a part of your own body, rather than something you are indirectly influencing.

    The same thing applies to GUIs, and only a very short delay will destroy that feeling of transparency of action. I never actually used BeOS myself, but I read that it was designed with low interface latency as a priority, which was why it got such good reviews for user experience.

  • Re:Latency. (Score:5, Informative)

    by Colonel Korn ( 1258968 ) on Tuesday August 19, 2008 @08:44PM (#24666979)

    Latency will be a problem, with all those extra message-passing and emulation layers.

    Already, most Windows 3D games leave me feeling a little disconnected compared to DOS games.
    The sound effects and graphics always lag behind the input a little.

    Try playing Doom in DOS with a Sound Blaster, then try a modern Windows game. With Doom you hear and see the gun go off when you hit the fire button. In a modern 3D game, you don't.

    I've experienced the same thing over a number of different computers.

    Most monitors have about 30-50 ms of input lag, meaning the image is always a frame or two behind in most modern games. You can get a monitor with 0-5 ms of input lag, though; the DS-263N is a good example. I had felt like everything was lagged ever since I switched to LCDs, but once I picked up the 263, that feeling was gone. The feeling of sound lagging behind input could be a different issue, or it could be psychological.
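
To put rough numbers on that "frame or two behind" feeling, here is a back-of-the-envelope latency budget in Python. The figures are assumptions for illustration (60 fps, a two-frame render-ahead queue, and the middle of the 30-50 ms display lag range mentioned above), not measurements of any particular setup.

# Back-of-the-envelope only; all three numbers are assumptions.
frame_time_ms = 1000 / 60            # ~16.7 ms per frame at 60 fps
prerendered_frames = 2               # assumed driver render-ahead queue depth
display_lag_ms = 40                  # middle of the 30-50 ms range cited above

total_ms = prerendered_frames * frame_time_ms + display_lag_ms
print(f"input-to-photon delay ~= {total_ms:.0f} ms "
      f"(about {total_ms / frame_time_ms:.1f} frames behind the input)")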

  • Re:Interesting (Score:5, Informative)

    by x2A ( 858210 ) on Tuesday August 19, 2008 @09:00PM (#24667093)

    "It seems to be using feedback from the rendering itself"

    Yep, it does look like it's worked out dynamically; the article states that you can start watching a movie on one monitor while a scene renders on another, and it will compensate by sending fewer tasks to the busy card. The simplest way to do this, I'd assume, would be to keep feeding tasks into each card's pipeline until the scene is rendered: if one completes tasks quicker than the other, it will get more tasks fed in. I guess you'd either need to load the textures onto all cards, or the rendering of sections of the scene might have to be decided in part by which card already has the textures it needs in its texture memory.

    I guess we're not gonna know a huge amount as these are areas they're understandably keeping close to their chests.
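
Here is a small, hypothetical Python sketch of the pull-based idea described in the comment above: both "cards" drain a shared task queue, so the one that finishes chunks faster (or is less busy with other work) naturally takes on more of the frame. The sleep() calls merely stand in for rendering time, and the 4x speed difference is made up.

import queue
import threading
import time

tasks = queue.Queue()
for i in range(20):
    tasks.put(f"scene chunk {i}")

chunks_done = {}

def card(name, ms_per_chunk):
    done = 0
    while True:
        try:
            tasks.get_nowait()
        except queue.Empty:
            break
        time.sleep(ms_per_chunk / 1000)   # pretend to render the chunk
        done += 1
    chunks_done[name] = done

workers = [threading.Thread(target=card, args=("busy card", 40)),
           threading.Thread(target=card, args=("less busy card", 10))]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(chunks_done)   # the less busy card ends up with roughly 4x as many chunks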

  • Re:Latency. (Score:3, Informative)

    by Tanman ( 90298 ) * on Tuesday August 19, 2008 @09:02PM (#24667111)

    This is most likely due to a feature on newer cards and drivers whereby the video card actually renders ahead of time. They have algorithms to predict what the next 3-5 frames will be, and the GPU uses spare cycles to go ahead and render those. That way, if something unexpected happens -- say a high-geometry object pops out and gets loaded into memory -- the video card has a buffer before the user notices a drop in framerate.

    The drawback? You get control lag. Newer drivers let you adjust this or even set it to 0 -- but you will notice that your overall FPS decreases if you set it to 0, because the card cannot optimize ahead of time that way.
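
As a rough illustration of that trade-off, with made-up numbers rather than anything taken from NVIDIA's or ATI's drivers: each frame sitting in the render-ahead queue adds about one frame time of control lag, in exchange for headroom to absorb an occasional slow frame.

# Rough arithmetic only; the 60 fps figure and queue depths are assumptions.
avg_frame_ms = 1000 / 60                 # ~16.7 ms per frame at 60 fps

for max_prerendered in (0, 1, 3):
    extra_lag_ms = max_prerendered * avg_frame_ms
    print(f"max pre-rendered frames = {max_prerendered}: "
          f"~{extra_lag_ms:.0f} ms added control lag, "
          f"{max_prerendered} slow frame(s) of headroom")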

  • by Michael Hunt ( 585391 ) on Tuesday August 19, 2008 @09:32PM (#24667329) Homepage

    > How did anyone get "vastly different GPUs" from this?

    Presumably because (e.g.) a G70-based 7800 and a G92-based 8800 GT are vastly different GPUs.

    G70, for example, had two sets of fixed-purpose pipeline units (one of which ran your vertex programs and one of which ran your fragment programs), a bunch of fixed-function logic, some rasterisation logic, etc.

    On the other hand, G80 and above have general-purpose 'shader processors', any of which can run any pipeline program (and which, AFAIK, run the traditional fixed-function graphics pipeline in software on said SPs), plus a minimal amount of glue to hang it all together.

    About the only thing that current-generation GPUs and previous-generation GPUs have in common is the logo on the box. (This applies equally to AMD parts, although the X19xx AMD cards, I'm told, are more similar to a G80-style architecture than a G70-style architecture, which is how the F@H folks managed to get it running on the X19xx before the G80 was released.)

  • by LarsG ( 31008 ) on Tuesday August 19, 2008 @09:41PM (#24667371) Journal

    From the screenshots and the description, it sounds like this thing takes the D3D (or OGL) instruction stream, finds tasks that can be done in parallel, and partitions them across several cards. Then it sends each stream to a card, using the regular D3D/OGL driver for that card. At the back end it merges the resulting framebuffers.

    What I'd like to know is how they intend to handle a situation where the gpus have different capabilities. If you have a dx9 and a dx10 card, will it fall back to the lowest common denominator?

    Also, what about cards that produce different results? Say, two cards that do anti-aliasing slightly differently. The article says that Hydra will often change the work passed off to each card (or even the strategy for dividing work amongst the cards) on a frame-by-frame basis. If they produce different results, you'd end up with flicker and strange artefacts.

    Sounds like interesting technology but unless they get all those edge cases right...
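
A toy Python sketch of the "merge the resulting framebuffers" step, assuming the simplest possible split: card A renders the top rows, card B the bottom rows, and a compositor stitches them together. Everything here -- the split point, the render() stand-in, the tiny resolution -- is invented for illustration; real hardware would composite with DMA transfers, not Python lists.

def render(card_name, rows, width):
    """Stand-in for a GPU: each 'pixel' just records which card produced it."""
    return [[card_name] * width for _ in range(rows)]

def composite(top_half, bottom_half):
    """Stitch the two partial framebuffers into one frame."""
    return top_half + bottom_half

if __name__ == "__main__":
    width, height, split = 8, 6, 4          # tiny "frame", arbitrary split point
    frame = composite(render("A", split, width),
                      render("B", height - split, width))
    for row in frame:
        print(" ".join(row))

It also makes the anti-aliasing worry above concrete: if A and B filter edge pixels differently, the row where the halves meet (and any frame-to-frame movement of that split line) is exactly where mismatches would show up as shimmer.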

  • Re:My god... (Score:4, Informative)

    by Bureaucromancer ( 1303477 ) on Tuesday August 19, 2008 @10:23PM (#24667719)
    Presumably he's North American; we (among a few other places) use 120 V lines, and 240 V is reserved for special high-power circuits, generally only used for dryers, stoves and refrigerators (and only some of the first two).
  • Re:Latency. (Score:4, Informative)

    by MaineCoon ( 12585 ) on Tuesday August 19, 2008 @10:47PM (#24667949) Homepage

    Yes, CRTs have something like 1-2 ms latency + refresh rate.

  • Re:quick (Score:3, Informative)

    by x2A ( 858210 ) on Tuesday August 19, 2008 @11:46PM (#24668413)

    A quick glance does suggest this is the case. It's like having a load of extra ALUs which can speed up number crunching in apps where the same or similar operations need to be performed across a series of values, such as working with matrices, FFTs, or signal encoding/decoding. But general-purpose computing also needs flow control -- conditional branching -- which still needs the main CPU. (Memory management too, although GPUs do have at least basic memory management, as they have increasingly large chunks of memory for caching textures, etc.) Setting up the GPU to do stuff for ya takes overhead, which pays off if you're using it enough, so yeah, being able to write functions that take advantage of it from languages like Java could be beneficial, but you couldn't port the actual Java VM over to the GPUs.

    (But still with the disclaimer "from a quick glance" -- deeper inspection may prove otherwise.)
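
The "setup overhead has to pay off" point above can be expressed as a toy cost model in Python. All constants below are made up for illustration, not measurements of any real GPU or bus: offloading only wins once the per-element speedup outweighs the fixed setup and data-transfer cost.

# All constants are made up; they are not measurements of any real GPU or bus.
def cpu_time_s(n):
    """Seconds to process n elements on the CPU."""
    return n * 5e-9

def gpu_time_s(n):
    """Fixed setup/transfer cost plus a much cheaper per-element cost."""
    return 200e-6 + n * 0.2e-9

for n in (1_000, 100_000, 10_000_000):
    winner = "GPU" if gpu_time_s(n) < cpu_time_s(n) else "CPU"
    print(f"{n:>10} elements: CPU {cpu_time_s(n) * 1e6:9.1f} us, "
          f"GPU {gpu_time_s(n) * 1e6:9.1f} us -> {winner} wins")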

  • Re:My god... (Score:1, Informative)

    by Anonymous Coward on Tuesday August 19, 2008 @11:46PM (#24668415)

    Both nVidia and ATI are working on "hybrid" tech so that you can use a low-power integrated graphics chip for normal desktop/laptop use, and only use your graphics card(s) when you're playing games. Unfortunately, the nVidia version only works with nForce chipsets, and the ATI version only works with a specific AMD chipset series. You're probably out of luck if you didn't build your system with the right motherboard.

  • Re:Latency. (Score:2, Informative)

    by MR.Mic ( 937158 ) on Wednesday August 20, 2008 @08:02AM (#24671291)
    I agree with parent 100% on the DS-263N.
    I was using CRTs exclusively up until July of this year, when my last good (professional-quality) one died. I was reluctant to switch away, because most LCDs I had seen looked like crap. I was proven wrong by the DS-263N.

    I am a visual effects and game dev artist, so color accuracy and display uniformity are a must for me. I also do a lot of gaming, so input lag and scaling options (for 4:3-only games) are also big factors.

    Before I purchased, I did a lot of reading and searching for the right monitor. I stumbled upon the LCD thread [anandtech.com] at the AnandTech forums. It had everything I needed to know about choosing the right monitor. In the "Displays du Jour" section of the post, the DS-263N was listed as being an excellent 8-bit IPS TFT with less than a frame of input lag. I read numerous consumer reviews of the monitor, and they were also consistently favorable.

    I finally made the jump to the DS-263N, and I was not disappointed.
    It's BRIGHT, much brighter than my CRT.
    The viewing angle is amazing, and the gamma does not change when you shift your view. It only dims just a bit at extreme angles, but the dimming is uniform.
    The monitor has 1:1, aspect, and stretch scaling options, so I can happily Quake away at 1600x1200 with no pixel interpolation and only bars on each side.

    The only two issues I have with it are:
    1) A single dead green subpixel in the corner (However, I rarely notice it. It's almost impossible to see when gaming or watching a movie)
    2) You can adjust the side bar color for viewing non-native resolutions, but it's impossible to get it completely black.

    Aside from those issues, it is one of the best monitors I have ever used.

    Unfortunately, it was discontinued earlier this year, and the DS-265W is supposed to take its place.
    Hopefully, that one will be just as good as or better than the DS-263N when it's released.
  • Re:Latency. (Score:2, Informative)

    by eric-x ( 1348097 ) on Wednesday August 20, 2008 @10:42AM (#24673603)

    I very much doubt that GPUs predict and render ahead.

    I think you mean that a number of frames are rendered into a FIFO queue -- a delay line. This is of course rendering behind, not ahead. Often three buffers are used (on-screen, next-screen, rendering) because it reduces the time spent waiting to swap the front and back buffers.

    Having more than 3 buffers smooths the fps but increases lag and does not further reduce CPU idle time.
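
A small, simplified Python simulation of that point, assuming vsync is on, a 60 Hz refresh, and a frame that takes 20 ms to draw (all assumed numbers, and a deliberately crude model): with two buffers the GPU stalls waiting for the swap and the frame rate drops to half the refresh rate, while a third buffer lets it keep drawing.

def frames_rendered(buffers, render_ms=20.0, vsync_ms=1000.0 / 60, sim_ms=1000.0):
    """Count frames a GPU can render in sim_ms with vsync on.

    One buffer is always on screen; the rest are back buffers. A completed
    back buffer is swapped onto the screen at the next vsync, which frees
    the buffer that was previously displayed.
    """
    free = buffers - 1        # back buffers the GPU may draw into right now
    completed = 0             # finished frames waiting for a vsync swap
    t, next_vsync, rendered = 0.0, vsync_ms, 0
    while t < sim_ms:
        if free > 0:
            t += render_ms    # draw into a free back buffer
            free -= 1
            # vsyncs that pass while drawing can only swap *previously*
            # completed frames, not the one still being drawn
            while next_vsync <= t:
                if completed > 0:
                    completed -= 1
                    free += 1
                next_vsync += vsync_ms
            completed += 1
            rendered += 1
        else:
            # every back buffer is full: stall until the next vsync frees one
            t = next_vsync
            completed -= 1
            free += 1
            next_vsync += vsync_ms
    return rendered

for n in (2, 3, 4):
    print(f"{n} buffers: ~{frames_rendered(n)} frames rendered per second")

In this toy model a fourth buffer changes nothing except the potential extra frame of lag, which matches the point above about going beyond three buffers.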

  • Re:GeForce 6800 GT (Score:3, Informative)

    by Pulzar ( 81031 ) on Thursday August 21, 2008 @01:57PM (#24692469)

    Because modern video cards [amd.com] have more and more support for hardware encoding and decoding of video. While the support for encoding is only starting to show up, the hardware decoding makes a big difference in previewing HD video in real time with little CPU usage.
