Graphics Technology

Fastest Graphics Ever, Asus ARES Rips Benchmarks

MojoKid writes "Over-the-top, killer graphics cards are always fun to play with, though they may not be all that practical. With a pair of ATI Radeon HD 5870 GPUs on a single PCB and 4GB of GDDR5 graphics memory on board, the recently released Asus ARES is one such card that can currently claim the title of being the fastest single gaming graphics card on the planet. This dual-GPU-infused beast rips through benchmarks, besting even the likes of a Radeon HD 5970 or NVIDIA GeForce GTX 480. You can even run a pair of them in CrossFire mode, if you're hell-bent on the fastest frame rates money can buy currently."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • 5890 Ultra (Score:3, Informative)

    by Anonymous Coward on Sunday July 11, 2010 @04:41AM (#32865148)

So it's actually an ATI Radeon 5890 Ultra. You would be better off buying two discrete 5870 cards and running them in CrossFire. Thermals will be better, so you will be able to overclock them further.

  • by BikeHelmet ( 1437881 ) on Sunday July 11, 2010 @05:28AM (#32865292) Journal

I only spend ~$100 on average on my video cards.

I got a GTS 250 for $100 close to half a year ago. A friend of mine just got a Radeon 4870 for $100!

  • Re:Linux? (Score:3, Informative)

    by V!NCENT ( 1105021 ) on Sunday July 11, 2010 @05:43AM (#32865336)
  • Re:GPGPU? (Score:1, Informative)

    by Anonymous Coward on Sunday July 11, 2010 @05:45AM (#32865342)

    Well, even the blurb says it's two 5870s, so I guess as fast as two 5870s?

    It's a new card, not a new architecture. There's not going to be a significant difference in performance from other cards built on the same GPUs modulo clocks.

    But you knew that.

  • by V!NCENT ( 1105021 ) on Sunday July 11, 2010 @05:56AM (#32865380)

    Crysis Warhead at 1680*1050 at max setting 'enthusiast' or something gives 30+ fps on Windows XP SP2 with my AMD Phenom 9950 X4, 8GB RAM, HD5770...

    So you were saying?

  • by mangu ( 126918 ) on Sunday July 11, 2010 @06:18AM (#32865428)

    30fps is a joke and not anywhere near a playable framerate

    It is perfectly playable, for anyone with human eyes [wikipedia.org]

  • by dnaumov ( 453672 ) on Sunday July 11, 2010 @06:29AM (#32865442)

    30fps is a joke and not anywhere near a playable framerate

    It is perfectly playable, for anyone with human eyes [wikipedia.org]

    I can't believe anyone still tries to bring up the old "human eye doesn't see beyond 30fps so anything higher is useless" mantra. It has been debunked a hundred times.

    http://www.boallen.com/fps-compare.html [boallen.com]
    http://kimpix.net/download/60vs24.avi [kimpix.net]

  • by asdf7890 ( 1518587 ) on Sunday July 11, 2010 @07:03AM (#32865516)

    1) 30fps is a joke and not anywhere near a playable framerate

FPS is one of those subjective issues where there seems to be a lot more "I don't like X so you are daft for suggesting someone might" than hard facts.

For a lot of people 30fps is perfectly fine if it is a minimum rate rather than an average. A lot of people talk at cross purposes on this one: the "30 is fine" crowd assumes that the people looking for 100+fps, when their monitor probably refreshes at 60Hz, are daft and want 100+fps everywhere, while the "30 is nowhere near enough" crowd thinks the "30 is fine" crowd would be happy with 30 on average. For games that require decent graphics hardware the demand on that hardware can vary a lot, so a card that gets 30fps in some areas will drop below 15fps in others; likewise, a card that pushes 100Hz in the lighter scenes may drop below 50 on the really heavy ones.

    So any quote of an fps requirement or recommendation is completely useless unless you qualify the figure in more detail.

Another factor that needs to be considered is screen size. An object moving from one side of the screen to the other at the same framerate is going to look smoother on a smaller monitor than it will on a full-wall projector (unless of course you are far away from said wall, to the point where it is effectively the same size as the small monitor in terms of how it appears on the back of your eye). How far objects on the display travel between frames is what needs to be measured, not just how many frames there are in a given time. This brings up another point as to why this sort of thing is subjective and difficult to discuss reasonably (without so much supporting detail that you bore people to death): it very much depends on what games you play and how you play them.
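The frame-interval and on-screen-motion arguments above can be made concrete with a little arithmetic (a rough sketch; the object speed and screen widths are made-up example numbers):

```python
# Frame interval: how long each frame stays on screen.
def frame_interval_ms(fps):
    return 1000.0 / fps

# Pixels an object jumps between consecutive frames, given its speed
# in pixels/second -- this jump size is what drives perceived smoothness.
def pixels_per_frame(speed_px_per_s, fps):
    return speed_px_per_s / fps

# An object crossing a 1680-pixel-wide screen in one second:
print(frame_interval_ms(30))        # ~33.3 ms per frame
print(pixels_per_frame(1680, 30))   # 56 px jump per frame at 30 fps
print(pixels_per_frame(1680, 60))   # 28 px jump per frame at 60 fps

# The same one-second motion rendered across a wider display
# (say 3840 px of projector) doubles the jump at the same fps,
# which is why screen size matters as much as the raw framerate.
print(pixels_per_frame(3840, 30))   # 128 px jump per frame
```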

  • Re:Linux? (Score:4, Informative)

    by V!NCENT ( 1105021 ) on Sunday July 11, 2010 @07:05AM (#32865524)

Oh yeah, sorry... I forgot to mention that the engine is actually used to make this commercial game that is coming to Linux: http://www.primalcarnage.com/website/ [primalcarnage.com]

  • One page (Score:2, Informative)

    by cffrost ( 885375 ) on Sunday July 11, 2010 @07:06AM (#32865528) Homepage
  • by jones_supa ( 887896 ) on Sunday July 11, 2010 @07:15AM (#32865548)

    As for adventure games, the golden age of 1990s is gone. There were EGA or VGA games like Space Quest and Monkey Island that were so fun to play and have no modern successors.

It's not that sad. There are still gems here [telltalegames.com] and there [machinarium.net].

  • Re:OpenCL? (Score:1, Informative)

    by Anonymous Coward on Sunday July 11, 2010 @07:20AM (#32865552)

    Considering that AMD doesn't even bundle an OpenCL runtime with their drivers yet, I think it goes without saying that AMD is still behind the curve on OpenCL.

    However performance often comes down to how an application was coded: applications can often be much more efficient for AMD's GPUs if you write them to pack instructions in a manner that better fits their VLIW architecture. As a result NVIDIA does better on average, while AMD can rip apart embarrassingly parallel tasks such as key crunching. Unfortunately for AMD, VLIW is much more dependent on the compiler and programmer than NV's scalar architecture is.
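The VLIW-packing point above is easy to illustrate with arithmetic. AMD's design of that era was 5-wide ("VLIW5"); the unit count and slot-fill figures below are hypothetical round numbers, chosen only to show how achieved throughput scales with how well the compiler fills the slots:

```python
# Effective throughput of a VLIW GPU depends on how many of the
# instruction slots (5-wide here, as in AMD's VLIW5 design) the
# compiler/programmer manages to fill on an average cycle.
def effective_ops_per_cycle(slot_width, avg_slots_filled, num_units):
    utilization = avg_slots_filled / slot_width
    return slot_width * utilization * num_units

# Hypothetical figures: 320 VLIW units. Well-packed, embarrassingly
# parallel code (e.g. key crunching) fills ~4.5 of 5 slots; branchy,
# scalar-ish code might only fill ~2 of 5.
print(effective_ops_per_cycle(5, 4.5, 320))   # 1440.0 ops/cycle
print(effective_ops_per_cycle(5, 2.0, 320))   # 640.0 ops/cycle
```

The same silicon delivers less than half its peak when the slots go unfilled, which is why VLIW leans so hard on the compiler compared with a scalar architecture.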

    In any case, much of this is moot with the Ares. In spite of the fact that GPUs are super stream/parallel processors, most consumerish GPGPU applications can't scale to multiple GPUs as they require high-speed (on-chip) communication to work.

  • by Anonymous Coward on Sunday July 11, 2010 @09:51AM (#32866128)

Your eyes don't see at a specific frequency, as if you had a rotating shutter installed in each eye that discretely breaks up information for your brain. In fact, the conditions under which flicker occurs depend strongly on the brightness of the room and of the display, along with the frame rate. If you watch a visually dark movie on an old CRT in a dark room, it is very unlikely you will see flicker. The flicker you see on a CRT comes from the phosphors darkening between refreshes. If you have problems with old CRTs, try turning down the brightness or darkening the room.

  • Comment removed (Score:4, Informative)

    by account_deleted ( 4530225 ) on Sunday July 11, 2010 @10:30AM (#32866406)
    Comment removed based on user account deletion
  • by KingMotley ( 944240 ) * on Sunday July 11, 2010 @10:43AM (#32866464) Journal

First, 30 FPS is probably too low for an FPS because you are likely stating the framerate while motionless and in one particular spot. Now, replay a few matches and tell us what your minimum frame rate was; I bet it's in the very low teens or worse. That isn't acceptable, and you are most likely to lag when the action gets thick and you need your framerate the most.

Secondly, Crysis/Crysis Warhead is a 3-year-old engine that's a generation behind. Of course games from 3 years ago play fine on $100 video cards, but those cards would have been the $600+ cards 3 years ago too. Try picking at least a current-gen game.

    And lastly, 1680x1050? My LCD's native resolution has been 1920x1200 since I got it 5 years ago. Try getting a decent monitor.

    Playing old games on low resolution monitors and cherry picking frame rates only proves how wrong you are.

  • Re:OpenCL? (Score:1, Informative)

    by Anonymous Coward on Sunday July 11, 2010 @03:17PM (#32868334)

    There's a refresh limitation on monitors.

Yeah. Mine is 150Hz, and it's really hard to hit that in a modern game on any card, and probably will continue to be.
Not everyone uses LCDs, and even for those that do, 120Hz LCDs are just around the corner thanks to 3D tech pushing the need.
And even on a 60Hz LCD, the resolution keeps going up. We're already past the point where above-1080p is easily available.

Since the PC gaming market's been basically dead for 5 years

No? Bad Company 2 came out last year and is demanding as hell on resources. MoH is in beta and I'm hearing it's even more demanding.
Natural Selection 2's alpha runs at about 5 fps on my computer; admittedly it hasn't been optimized, but it will definitely push GPU hardware, especially if they don't back down on their all-dynamic-lights policy.
That's just off the top of my head. There are plenty more coming out, I'm sure.

    And if you're looking for maxed frames on World of Warcraft, or any other MMO, chances are you can achieve it with cards from 4 years ago still.

As someone who's been playing WoW: Cataclysm since alpha, I have to say you might want to rethink that. They added DirectX 11 water support, and the spell detail is only getting more complex. Running WoW at solid framerates is downright hard. Admittedly I'd rather have a new CPU than a new GPU for WoW, since so much of its lag is in event processing rather than rendering, but it's still not an easy game to max out.

And if you think the solution to any of this is "lower your settings", you don't understand PC gaming or the need for this card. PC gaming is about evolution and progression. If you bought this card, you could suddenly run all those games and engines you previously had to turn your settings down for, and crank up post-processing options like AF/AA to make them look even better.

  • Re:IO limited? (Score:3, Informative)

    by kc8apf ( 89233 ) <<ten.fpa8ck> <ta> <fpa8ck>> on Sunday July 11, 2010 @03:19PM (#32868346) Homepage

Typically for graphics cards, the only data sent over PCIe is texture data, vertex lists, and commands. The bulk of the work done by the card is running the commands over the vertex lists while streaming in texture data. The commands almost always form a multi-pass pipeline, so each vertex is used in computations more than once. The result is then pushed to the monitor, not back over PCIe. So, yes, in general, a graphics card will have far more FLOPs than I/O bandwidth.
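The FLOPs-vs-I/O imbalance the parent describes can be put in rough numbers (approximate period figures: PCIe 2.0 x16 moves about 8 GB/s per direction, and a single Radeon HD 5870 was rated around 2.72 TFLOPS single precision; treat both as ballpark values):

```python
# If the GPU can perform far more FLOPs per second than the bus can
# deliver bytes, then each byte shipped over PCIe must be reused in
# many computations to keep the card busy -- hence multi-pass pipelines.
pcie2_x16_bytes_per_s = 8e9     # ~8 GB/s per direction, PCIe 2.0 x16
hd5870_flops_per_s = 2.72e12    # ~2.72 TFLOPS single precision (rated peak)

flops_per_byte_needed = hd5870_flops_per_s / pcie2_x16_bytes_per_s
print(flops_per_byte_needed)    # 340.0 -- FLOPs needed per transferred byte
```

A dual-GPU card like the ARES doubles the FLOPs side of that ratio while sharing the same slot, which is exactly why it is compute-bound rather than bus-bound for typical graphics workloads.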

  • Re:OpenCL? (Score:2, Informative)

    by masshuu ( 1260516 ) on Sunday July 11, 2010 @03:39PM (#32868458)

You know the fancy physics, like a plywood board exploding into 400 pieces and sending each piece of shrapnel in a different direction, or maybe a house that breaks into a million pieces when it collapses? Good luck running those calculations on your CPU while keeping your frame rate up.
OpenCL lets you offload work to the GPU.
Physics is all I can think of; it's like NVIDIA's CUDA. CUDA is limited to NVIDIA cards, but OpenCL is designed to let you write one piece of code and run it on the CPU or any supported GPU.
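The "write once, run on CPU or GPU" model described above boils down to a data-parallel kernel launched over many work-items, each identified by a global id. Since an OpenCL runtime may not be available, here is that execution model mimicked in plain Python (a sketch of the concept, not the real API; in OpenCL C only the kernel body would be written, and the runtime would dispatch the work-items in parallel):

```python
# OpenCL-style execution model: the kernel runs once per work-item,
# each invocation identified by its global id. Here we just loop
# sequentially; a GPU would run the work-items in parallel.
def launch_kernel(kernel, global_size, *buffers):
    for gid in range(global_size):
        kernel(gid, *buffers)

# Kernel: c[i] = a[i] + b[i] -- the classic vector-add example.
def vec_add(gid, a, b, c):
    c[gid] = a[gid] + b[gid]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = [0.0] * 4
launch_kernel(vec_add, len(a), a, b, c)
print(c)   # [11.0, 22.0, 33.0, 44.0]
```

The same kernel logic, expressed this way, can be dispatched to a CPU or any supported GPU by the OpenCL runtime, which is the portability point being made above.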

"Engineering without management is art." -- Jeff Johnson

Working...