Supercomputing

Impressive Benchmarks: Sorting with a GPU 222

An anonymous reader writes "The Graphics research group at the University of North Carolina at Chapel Hill has posted some interesting benchmarks for a sorting implementation which is done entirely on a GPU. There have been efforts on doing general purpose computation on GPUs before (previous Slashdot article). However, most of them had generally utilized the fragment processing pipeline of the GPUs which is slower then the default high speed rendering pipeline. Apparently, the above implementation is done using "simple texture mapping operations" and "cache efficient memory accesses" only. There also seems to be an option to download the distribution for non-commercial use, though the requirements seem pretty hefty (a very decent nVidia graphics card and the latest nVidia drivers)."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday June 29, 2005 @07:25AM (#12940386)
    If not, it might be a good place to stash crypto keys, passwords, etc. (At least until someone writes a utility to dump it and adds it to something like Cain).

    ~~~

  • This isn't exactly a fair test as I see it. As far as I can make out they've put a custom sorting algorithm up against the standard C library qsort. How about some comparisons of this GPUsort against other sorting algorithms run on the CPU?
    • qsort() is a very well-understood algorithm that has been highly optimized.
      Not including it in the benchmarks would have been a sign that some smoke and mirrors were involved. If the MSVC++ library had anything faster, people would be using it.
      • Presumably though the algorithm they used in GPUsort can be made to work on a Pentium IV. Comparing only against qsort looks suspicious to me - they should have compared GPUsort on the CPU as well as on the GPU, in addition to qsort.
        • by pla ( 258480 ) on Wednesday June 29, 2005 @07:56AM (#12940530) Journal
          Presumably though the algorithm they used in GPUsort can be made to work on a Pentium IV

          Not necessarily...

          Their use of the GPU to sort might very well run something along the lines of assigning Z-coordinates based on the key values, and colors based on a simple index, then asking the GPU to "show" the "pixels" in Z-order, then just reading the "real" data of any arbitrary size and type in the order specified by the returned colors/indices. That would perform a sort using the GPU, very very rapidly, but you can't really translate it to run on a CPU. Sure, you could write code to fake it, but at the lowest level, you'd end up using something like a quicksort, rather than dedicated hardware, to emulate the desired behavior.

          Now, admittedly, I don't know that the method under consideration used such an approach. But it appears they at least took the approach of using the GPU for its strong points, rather than trying to force it to act as a general-purpose CPU.


          As for the choice of Quicksort - Most likely, they chose it because just about every C library out there has an implementation of quicksort. And while personally I prefer heapsort (in the worst case, quicksort has Q*O(n^2) behavior, while heapsort always takes only P*O(n log n), but P >> Q), I'll admit that for almost all unstructured input sets, quicksort finishes quite a lot faster than anything else.
          • Actually P~2*Q. Also, there are tweaks to Quicksort which guarantee O(n log n) behaviour. If you need an efficient but somewhat general purpose quicksort, I strongly recommend Jörg Schön's collection of C programs [uni-heidelberg.de], which includes a simple include file that creates a very efficient sorting routine, and coding takes seconds!!
          • in the worst case, quicksort has Q*O(n^2) behaviour

            Quicksort exhibits n-squared behaviour when the data set is sorted but reversed. Most decent quicksort routines do a random shuffle of the data at the start to avoid this issue.

            Choosing a sort for a particular situation is very much a matter of trade-offs. A shell sort is very fast for small data sets - it's quick to implement and follows C*O(n^1.27), where C is small compared to P or Q in your example. With mostly sorted data sets, the choice to sort al
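
            For concreteness, a minimal quicksort sketch with a randomly chosen pivot, which is one common way to dodge the quadratic worst case on (reverse-)sorted input described above. The names are illustrative only, not taken from GPUSort or any particular C library:

            ```cpp
            // Quicksort with a random pivot: already-sorted or reverse-sorted
            // input no longer triggers the O(n^2) worst case (except with
            // vanishing probability). Illustrative sketch, not library code.
            #include <cstdlib>
            #include <utility>

            static void quicksort_random_pivot(float* a, int lo, int hi) {
                while (lo < hi) {
                    // Pick a pivot at random and park it at the end of the range.
                    int p = lo + std::rand() % (hi - lo + 1);
                    std::swap(a[p], a[hi]);
                    float pivot = a[hi];

                    // Lomuto partition around the pivot value.
                    int i = lo;
                    for (int j = lo; j < hi; ++j)
                        if (a[j] < pivot) std::swap(a[i++], a[j]);
                    std::swap(a[i], a[hi]);

                    // Recurse into the smaller half, iterate on the larger half,
                    // keeping the stack depth at O(log n).
                    if (i - lo < hi - i) { quicksort_random_pivot(a, lo, i - 1); lo = i + 1; }
                    else                 { quicksort_random_pivot(a, i + 1, hi); hi = i - 1; }
                }
            }
            ```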

          • As for the choice of Quicksort - Most likely, they chose it because just about every C library out there has an implementation of quicksort. And while personally I prefer heapsort (in the worst case, quicksort has Q*O(n^2) behavior, while heapsort always takes only P*O(n log n), but P >> Q), I'll admit that for almost all unstructured input sets, quicksort finishes quite a lot faster than anything else.

            I humbly suggest you do not understand quicksort vs. heap sort. Layman's terms: Quicksort is be

            • Lordy. I notice that the page claims quicksort has n^2 complexity when input is ordered. That is only true of naive quicksort implementations. Textbook quicksort implementations from when I was in college (about 10 years ago) already took care of cases where input was ordered.
            • That website seems to imply that you need an additional order n amount of space to do heapsort. However, it can be done in place without too much effort:

              1. Take your input array containing n elements and perform a build maxheap operation on it. This takes O(n) time and can be done in place.

              2. Swap the root (the maximum element in the heap) with the last value in the part of the array still occupied by the heap, effectively removing the root from the heap and placing it in the sorted portion at the end o
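
              A compact sketch of that in-place procedure, assuming a plain array of floats (names are illustrative):

              ```cpp
              // In-place heapsort as described above: build a max-heap over the array,
              // then repeatedly swap the root to the end and shrink the heap.
              // Uses only O(1) extra space.
              #include <utility>

              static void sift_down(float* a, int n, int i) {
                  for (;;) {
                      int largest = i, l = 2 * i + 1, r = 2 * i + 2;
                      if (l < n && a[l] > a[largest]) largest = l;
                      if (r < n && a[r] > a[largest]) largest = r;
                      if (largest == i) return;
                      std::swap(a[i], a[largest]);
                      i = largest;
                  }
              }

              static void heapsort_in_place(float* a, int n) {
                  // Step 1: build-max-heap, O(n) time, done in place.
                  for (int i = n / 2 - 1; i >= 0; --i) sift_down(a, n, i);
                  // Step 2: pop the maximum into the growing sorted tail, one element at a time.
                  for (int end = n - 1; end > 0; --end) {
                      std::swap(a[0], a[end]);   // root (current maximum) goes to its final slot
                      sift_down(a, end, 0);      // restore the heap over the shrunken prefix
                  }
              }
              ```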
        • by Shisha ( 145964 ) on Wednesday June 29, 2005 @08:05AM (#12940570) Homepage

          Presumably though the algorithm they used in GPUsort can be made to work on a Pentium IV.
          Yes, but the algorithm won't be anything special. It won't be a better algorithm than qsort and definitely not more efficient than O(n log n) comparisons. What is special is that it runs on a GPU.

          they should have compared GPUsort on the CPU as well

          And how exactly were they supposed to do that?!? GPUsort has been programmed to run on a GPU, and even if you don't know the first thing about computers, the G should suggest that GPU is very different from a CPU.

          One can prove that no sorting algorithm using binary comparisons can do better than O(n log n) comparisons. Hence GPUsort couldn't have been asymptotically more efficient than qsort.

          If you think about it, the standard qsort implementation is definitely more optimised than most algorithms out there; it has been around for a very long time. But none of this matters; GPUsort can't run on a CPU.
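
          For reference, the standard counting argument behind that lower bound (a textbook sketch, not from the article): a comparison sort is a binary decision tree with at least n! leaves, so its worst-case number of comparisons h is at least the tree height:

          ```latex
          h \;\ge\; \log_2(n!) \;=\; \sum_{i=1}^{n} \log_2 i \;\ge\; \frac{n}{2} \log_2 \frac{n}{2} \;=\; \Omega(n \log n)
          ```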

          • Chill dude!

            GPUs are of course specialist processors and not the same as CPUs as you state - I am well aware of the one letter difference. However, my understanding is that the processing units in GPUs are rather similar to the likes of the SSE units within Pentiums and the AltiVec units in PowerPC chips. They are essentially linear vector processors, from what I understand.

            We're talking about algorithms here - if you can express an algorithm in code for one type of processor chip then you can express it in
            • The whole point of GPUsort is to exploit the parallelism of the GPU... you certainly could implement it... hell you could implement it on a PostScript printer... anything that is Turing-complete. But it won't be a valid comparison. What they have compared is their best-known algorithms for sorting on a GPU to what is probably the best general-purpose sorting algorithm on a CPU.
          • even if you don't know the first thing about computers, the G should suggest that GPU is very different from a CPU.

            Based on that logic, you would say that a VCR and a VTR are very different machines, correct?
      • by Anonymous Coward on Wednesday June 29, 2005 @07:42AM (#12940457)
        I really hope they are not using the C-library implementation of qsort in those timing comparisons...

        It:
        a) Makes a call to a function, via a function pointer, for each comparison.
        b) Uses a variable element size

        Both of these things will slow down the sort a lot, compared to a specialized implementation that only sorts 32-bit integers.
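
        A small illustration of those two overheads, with std::sort standing in for a type-specialized sort; this is generic CPU-side code, not the benchmark setup from the article:

        ```cpp
        // qsort() pays for an indirect call per comparison and a runtime element size;
        // a sort specialized to 32-bit integers can have the comparison inlined.
        #include <algorithm>
        #include <cstdint>
        #include <cstdlib>
        #include <vector>

        static int cmp_u32(const void* a, const void* b) {
            uint32_t x = *static_cast<const uint32_t*>(a);
            uint32_t y = *static_cast<const uint32_t*>(b);
            return (x > y) - (x < y);             // avoids overflow, unlike x - y
        }

        void sort_both_ways(std::vector<uint32_t>& v) {
            // Generic: function pointer invoked for every comparison.
            std::qsort(v.data(), v.size(), sizeof(uint32_t), cmp_u32);
            // Specialized: the comparison is inlined by the compiler, usually much faster.
            std::sort(v.begin(), v.end());
        }
        ```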
        • The obvious answer is to make a qsort kernel module and get it out of user space!
          • The obvious answer is to make a qsort kernel module and get it out of user space!

            Yeah, that would be really neat. Not only do you still have to call a function for each comparison, but *each* and *every one* of those function calls will need to go across the kernel-userspace boundary. My oh my, you are clever (not!)

        • I don't know why the parent comment was modded "Funny", because this is actually _totally_ true. http://www.rt.com/man/qsort.3.html [rt.com]
        • If all you want to sort is integers, you can go beyond quicksort. Google "bucket sort" or "radix sort"- these are O(n) sorts while even quicksort is O(n log n).
          • Of course, they require a lot of memory - bucket sort requires an array of integers of size maxvalue-minvalue, where maxvalue and minvalue are the largest and smallest numbers being sorted. To sort all uint32 values will take 2^32 indexes into the array, or at least 16 GB with 32-bit array indexes. Not practical for random data, although if your data is limited to a range (say 1-1000) it's a nice sort method, and fairly simple to write.
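
            A byte-wise LSD radix sort sidesteps that memory blow-up: it needs only 256 counters per pass (four passes for 32-bit keys) plus one temporary buffer, and still runs in O(n). A rough sketch with illustrative names:

            ```cpp
            // LSD radix sort on unsigned 32-bit keys, one byte per pass, stable.
            #include <cstddef>
            #include <cstdint>
            #include <vector>

            void radix_sort_u32(std::vector<uint32_t>& v) {
                std::vector<uint32_t> tmp(v.size());
                for (int shift = 0; shift < 32; shift += 8) {
                    std::size_t count[257] = {0};
                    for (uint32_t x : v) ++count[((x >> shift) & 0xFF) + 1];
                    for (int i = 0; i < 256; ++i) count[i + 1] += count[i];   // prefix sums
                    for (uint32_t x : v) tmp[count[(x >> shift) & 0xFF]++] = x;
                    v.swap(tmp);   // after four (an even number of) passes, v holds the result
                }
            }
            ```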

      • No, qsort() does an indirect function call in its hottest innermost loop. A hand-coded data-specific sort routine will always be faster unless the comparison operation itself is very slow.

        Unless your compiler knows about qsort and can (1) inline it, and (2) inline the comparison function into it, then you'll always have that indirect call in the innermost loop. (I just checked; gcc can't do it. I haven't tried the Intel or MS compilers.)

      • Quicksort is a well understood algorithm. qsort() in the standard C library is a notoriously poor implementation. std::sort() in the C++ library might be a better choice.
      • qsort() is a very well-understood algorithm that has been highly optimized.

        Bzzt, wrong. qsort() is NOT, I repeat, NOT, the same as quicksort.

        According to unix manpages: "The qsort subroutine sorts a table of data in place. It uses the quicker-sort algorithm."

        Note that "quicker-sort" is not the same as "quicksort". In fact, I doubt anyone really knows what the hell "quicker sort" is, anyway. My guess is that it might be a name just invented to explain away the q, once some standardization committee de

    • by top_down ( 137496 ) on Wednesday June 29, 2005 @07:51AM (#12940501)
      Indeed, qsort is known to be slow. See:

      http://theory.stanford.edu/~amitp/rants/c++-vs-c/ [stanford.edu]

      A comparison with the much faster STL sort should be interesting.
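
      A minimal harness in the spirit of that comparison, timing the C library qsort() against std::sort() on random floats. The helper names here are made up, and the actual numbers will depend on compiler, flags, and hardware:

      ```cpp
      #include <algorithm>
      #include <chrono>
      #include <cstdio>
      #include <cstdlib>
      #include <random>
      #include <vector>

      static int cmp_float(const void* a, const void* b) {
          float x = *static_cast<const float*>(a), y = *static_cast<const float*>(b);
          return (x > y) - (x < y);
      }

      template <typename F>
      static double seconds(F&& f) {                 // wall-clock time of f()
          auto t0 = std::chrono::steady_clock::now();
          f();
          return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
      }

      int main() {
          std::vector<float> a(1 << 20);             // ~1M random floats
          std::mt19937 rng(42);
          std::uniform_real_distribution<float> dist(0.f, 1.f);
          for (float& x : a) x = dist(rng);
          std::vector<float> b = a;                  // identical input for both sorts

          double t_qsort = seconds([&] { std::qsort(a.data(), a.size(), sizeof(float), cmp_float); });
          double t_std   = seconds([&] { std::sort(b.begin(), b.end()); });
          std::printf("qsort: %.3f s   std::sort: %.3f s\n", t_qsort, t_std);
      }
      ```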

    • "The implementation can handle both 16 and 32-bit floats."

      So it's "hard coded" for a couple types. The standard qsort has you pass a function pointer to a comparison routine so it can sort anything. Standard qsort also sorts a list of pointers to the items - I bet this GPU sort works directly on the data. Implementing a less generic version on the CPU is likely to result in it being faster than the GPU sort.

      The blurb at the end about increasing GPU speed with each generation is crap too. Both CPU and G

      • "Both CPU and GPU performance are now limited by power dissipation issues."

        Somewhat.

        There's a lot more room for growth in the GPU arena than there currently is in the CPU. They're only running at about 500 MHz right now, and they can add more processing pipelines and other enhancements currently found in the CPU, as well as shrinking the manufacturing process, which will allow higher clock speeds and more processing power.

        The present-day GPU is very advanced, but they're still behind modern CPU's in terms of manufa
      • "Implementing a less generic version on the CPU is likely to result in it being faster than the GPU sort."

        How do you make a meaningful statement concerning the speed of a "less generic" algorithm running on a general-purpose CPU vs. a "generic" algorithm running on a piece of special-purpose hardware (GPU)?

        Methinks your statement carries with it a load of unstated assumptions... care to spell them out for us?
        • "Methinks your statement carries with it a load of unstated assumptions... care to spell them out for us?"

          What I meant is that the GPU version is NOT generic like the qsort() CPU version. By a "less generic" version on the CPU I meant one that is optimized for a specific data type like "float". The standard qsort() does not sort a bunch of numbers; it sorts a list of pointers to things (structs, for example) when you provide it a pointer to a compare function that compares the things.

          One can easi

        • I just read more of their documentation. They sort an array of (key, pointer) pairs, where the pointer is to the rest of a record. Kind of like taking structures and pulling the key value out into the tuple. This makes their sort more useful than I originally thought, but it is still not the same as qsort even when just using it for floats. The graphs don't specify which GPUsort they used (they do have one that isn't even a tuple version).

          Implementing a similar "tuple sort" on a CPU with the same restric
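
          A CPU-side analogue of such a (key, pointer) tuple sort might look roughly like this; the record type and field names are made up for illustration:

          ```cpp
          #include <algorithm>
          #include <utility>
          #include <vector>

          struct Record { float key; /* ... rest of the record ... */ };

          // Pull each key out next to a pointer to its record, then sort the pairs by key.
          std::vector<std::pair<float, Record*>> tuple_sort(std::vector<Record>& recs) {
              std::vector<std::pair<float, Record*>> tuples;
              tuples.reserve(recs.size());
              for (Record& r : recs) tuples.emplace_back(r.key, &r);
              std::sort(tuples.begin(), tuples.end(),
                        [](const std::pair<float, Record*>& a,
                           const std::pair<float, Record*>& b) { return a.first < b.first; });
              return tuples;   // the i-th pair now points at the record with the i-th smallest key
          }
          ```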

    • I was working on a sorting algorithm based on curve fitting. It's not a comparison sort, but more like a radix sort, but not one.

      You start with a bunch of numbers, or things that can be given a number. Loop through the numbers, keeping track of High, Low, total, mean, std dev, etc. Use that information to develop an interpolating curve, which tells you for a given value where it ought to end up.

      Put the Low at the front and the High at the back. On the next pass through the numbers, put the number you
  • Like Judy for GPU? (Score:3, Interesting)

    by torpor ( 458 ) <ibisumNO@SPAMgmail.com> on Wednesday June 29, 2005 @07:30AM (#12940406) Homepage Journal
    I'd love to see Judy-style thinking [sourceforge.net] applied to GPU problems...

    especially since i use Judy arrays for tons of things on two different architectures, and it's just a darn efficient hash library for pretty much all of my needs ..
  • Not accurate (Score:5, Informative)

    by Have Blue ( 616 ) on Wednesday June 29, 2005 @07:31AM (#12940414) Homepage
    the fragment processing pipeline of the GPUs which is slower then the default high speed rendering pipeline

    For the past two generations or so (starting with the Radeon 9800), there has been no such thing as "the default high speed rendering pipeline". The only circuitry present in the chip has been for evaluating shaders, and the fixed-function pipeline has been implemented as "shaders" that the driver runs on the chip automatically.

    At least, I know for a fact this is true of ATI chips, and would not be at all surprised if nVidia is doing something similar.
  • by Anonymous Coward on Wednesday June 29, 2005 @07:34AM (#12940422)
    Apparently, the above implementation is done using "simple texture mapping operations" and "cache efficient memory accesses" only.

    Fry: Magic. Got it.
  • by Anonymous Coward on Wednesday June 29, 2005 @07:34AM (#12940423)
    First, congratulations to J. Ruby who pioneered this work and is mentioned throughout the article. I work with him, and on his desk are _already_ four machines with dual GeForce 7800 in SLI mode. Talk about lust factor.

    Ruby's first published work was the SETI@HOME modified client which uses Nvidia (or) ATI GPU for the waveform FFT calculations. I have watched him steadily upgrade his Nvidia GPU up to this wicked 7800 arrangement he is using today.

    Go Jim! You owe me a beer.
  • very nice (Score:4, Interesting)

    by __aahlyu4518 ( 74832 ) on Wednesday June 29, 2005 @07:34AM (#12940424)
    I probably don't know what I'm talking about, but I'm wondering....

    What is the performance if the GPU is busy rendering when you play a game?
    When the GPU is busy doing what it is supposed to do... a program should resort to qsort right?
    • Most GPGPU (general purpose GPU) researchers are envisioning scientific purposes. It really galls many of us in the scientific community that GPUs are so much more powerful than CPUs (if one can efficiently use the parallel processing capabilities of GPUs), and yet mostly we have to let these powerful processors go unused because it is typically very difficult to use GPUs for non-graphical computations (hence the G in GPU, of course).
      • By the way, for those interested in GPGPU research/ideas, there's a pretty nice site here: http://www.gpgpu.org [gpgpu.org]. It has some sample code, slides from conference presentations, a forum, etc. It's a pretty nice site for information. I was interested in GPGPUs a few months ago and read through the material on that site heavily, but in the end I didn't have the time to try anything cool out because you'll need to learn how to use a language like Cg or a stream-programming language to program your GPGPU.
      • by Tim C ( 15259 ) on Wednesday June 29, 2005 @08:29AM (#12940732)
        Well, to be fair GPUs are only "so much more powerful than CPUs" if your task is suitable for running on a GPU. If not, then you're better off using the CPU.

        Kind of like how a bulldozer is much more powerful than a hammer, but totally unsuitable to banging a nail in a bit of wood. If you want something torn down or moved about, though...
        • Next time I need to hammer a bunch of nails in parallel, perhaps I'll consider using a bulldozer! :D

          Your point is well made, of course. Nevertheless, with work like this (the GPU sorter) even off-loading a little work to the GPU can allow your CPU to do other work thereby shortening the wall-clock time required to do your computations (in theory - in practice, it can hurt you if you're not careful!). My research area (neural networks) is inherently parallelizable, but I am not yet aware of work to efficie

          • You could run a large simulation as a neural net in distributed Smalltalk, taking advantage of the inherent parallelizability (if such a word exists?) of the actions of a neuron firing when inputs hit certain critical thresholds.

            Basically, it's the same structure I would use when doing a terrain simulation using finite state automata. The wider you can spread the computational net, the more computational fish you can catch.

            It's not the object that is so complicated, (a neuron is, uh, stupidly simple [despite
    • Yes, I am sure the researchers are finding faster ways to map the human genome while playing DOOM3 at the same time. It is very important because it is annoying to have your frames per second drop because of some "cure for cancer" crap.
      • LOL... you made your point :-)

        But often things that are constructed for scientific purposes are one day going to be used for general purpose. I was obviously thinking about that...

    • I'm thinking that this type of thing should be good for screensavers and other hacks. I'd like to see Electric Sheep (http://electricsheep.org/ [electricsheep.org]) run entirely on the GPU.
  • (Smalltalk bit block transfer) He was always making assumptions about the underlying virtual engine.

    He was always getting it wrong too. I have logged thousands of hours and thousands of miles, from Montreal, Canada to Lisbon, Portugal cleaning up after this yobbo. What a fuckup he was.

    The opinion was shared too. It got to the point where we could write code that would detect his code and, as soon as we came across it and confirmed it, we would remove it and read the original spec to know what to code.

    "G
  • It's been said... (Score:5, Interesting)

    by TobyWong ( 168498 ) on Wednesday June 29, 2005 @07:41AM (#12940449)
    ...that the greatest threat to Intel's market domination in the future is not going to come from AMD but from a company such as Nvidia.

    Take a look at what they are doing with their GPUs [nvidia.com] right now and you can understand why someone would suggest this.

    • Well, for highly specialised tasks, take a look at Analog Devices http://www.analog.com/ [analog.com] or Texas Instruments http://www.ti.com/ [ti.com].

      They have been producing highly sophisticated cores that left a P4 biting the dust in a lot of cases.

      I have worked on test-bed equipment that used a DSP PCI card that produced more test-data than a dual Xeon system could handle. JFYI.

      GPUs like those from nVidia or ATI are still a lot less sophisticated than those DSPs, or hybrid DSP/uCs.

      Still, in a few years FPGAs or CPLDs w
      • They have been producing highly sophisticated cores that left a P4 biting the dust in a lot of cases.

        I may be wrong, but I'm guessing they don't cost a few hundred dollars each, though, like GPUs do. (The economies of scale that GPU manufacturers see thanks to the fact that they make millions of these things are really quite nice for keeping prices down.)

        GPUs like those from nVidia or ATI are still a lot less sophisticated than those DSPs, or hybrid DSP/uCs.

        I think I disagree. How familiar are you wit
        • I may be wrong, but I'm guessing they don't cost a few hundred dollars each, though, like GPUs do

          Absolutely, they routinely cost from $1 to $100 at most. For instance, take a look at the AC motor control hybrid DSP/uC chips by Freescale. Those allow vector control of AC motors by implementing very sophisticated and high-performance algorithms on a specialized chip; this kind of stuff was simply impossible just ten years ago.
    • Wasn't that said by... Jen-Hsun Huang, the Nvidia CEO? I think he was interviewed in Wired, and on the cover it said Nvidia would be the "Intel killer". It's no surprise that the Nvidia CEO would say things that inspire such headlines.

      But wasn't that several years ago, just before Intel increased their market share to well over half the market by selling Intel integrated graphics?

  • It is really time... (Score:2, Interesting)

    by ratta ( 760424 )
    that, since graphic hardware is now completely programmable, ATI and Nvidia release their specs, so that everyone can use the GPU as a specialized vectorial coprocessor. A CPU that cannot be programmed freely just makes no sense; the same should be true for GPUs.
    • It's not really a vectorial coprocessor, more suited for large scale stream processing. I'm not sure what extra opening of specs would help in this area though, the programming specifications seem to be quite easily available (people know how to write shaders, after all) and work has been done on using the GPUs as generalised streaming processors http://graphics.stanford.edu/projects/brookgpu/index.html [stanford.edu] being a prime example. Within the hardware limitations (of which there are many) they are freely program
    • The current generations of CPUs are designed to optimise the processing of conditional statements within a pipeline. Instructions usually take four stages to perform (fetch, read, execute, write), and the location of the next instruction can depend on the condition result (branch on whatever) of the previous instruction. To prevent this from degrading performance, the CPU has to anticipate the effect that both outcomes of a conditional instruction will have on future instructions. Also, there is no
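
      As a small aside on that point, sorting networks lean on branch-free compare-exchange steps precisely to keep data-dependent branches, and their misprediction penalties, out of the inner loop. An illustrative sketch, not from the article:

      ```cpp
      #include <algorithm>

      // Branch-free compare-exchange: on x86 this typically compiles down to
      // minss/maxss rather than a conditional jump the predictor can get wrong.
      inline void compare_exchange(float& x, float& y) {
          float lo = std::min(x, y);
          float hi = std::max(x, y);
          x = lo;
          y = hi;
      }
      ```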
  • This may be slightly offtopic insofar as it doesn't directly deal with the subject (sorting with a GPU) at hand, but I was wondering how these sorts of research projects overcome the "floating point number weirdness" I've heard about doing GPU calculations (as per the implementation being non-IEEE)? Doubly, would someone in the know help explain what the aforementioned "weirdness" means?
  • precision? (Score:2, Interesting)

    by Barbarian ( 9467 )
    Are not most GPUs limited to 24 or 32 bit precision? The x86 processors can go to 80 bits.
  • a very decent nVidia graphics card

    So is that like "average", except "very average"?
  • Seriously, like who here hasn't already done this themselves anyway. At least it's not a dupe I suppose !

    • {perk} Dupe?

      Look at the editor for this story - Timothy - the dupe from last night. Both stories on the front page

      On top of that, the story has then instead of than.

      'spose he's on day 6 of a 15-day meth binge?

  • This is an implementation of the Bitonic sort.

    From the article, when comparing their sort to previous GPU sort: "These algorithms also implement Bitonic Sort on the GPU for 16/32-bit floats using the programmable pipeline of GPUs."

    So as I understand it they made a very fast implementation (using the GPU) of an old algorithm suited to parallel processing: bitonic sort was published in 1968 (hey, where were the fast parallel processors in the late sixties ;)
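
    For anyone curious what that network looks like, here is a plain CPU reference of Batcher's bitonic sort for power-of-two n (a generic sketch, not GPUSort's actual kernels). Each (k, j) round is a batch of independent compare-exchange steps, which is exactly what maps well onto data-parallel hardware; total work is O(n log^2 n) compare-exchanges:

    ```cpp
    #include <algorithm>
    #include <cstddef>

    // Bitonic sorting network; n must be a power of two.
    void bitonic_sort(float* a, std::size_t n) {
        for (std::size_t k = 2; k <= n; k <<= 1)             // size of the bitonic runs
            for (std::size_t j = k >> 1; j > 0; j >>= 1)     // compare-exchange distance
                for (std::size_t i = 0; i < n; ++i) {        // independent across i
                    std::size_t l = i ^ j;
                    if (l > i) {
                        bool ascending = ((i & k) == 0);     // direction of this run
                        if ((a[i] > a[l]) == ascending)
                            std::swap(a[i], a[l]);
                    }
                }
    }
    ```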
  • Perhaps the time has come for separate GPU and video output cards. The video output card could be fairly dumb, perhaps on the motherboard, and the GPU card optional and usable even in servers without a video card.
  • Why qsort? (Score:2, Informative)

    by ringm000 ( 878375 )
    Comparing with C qsort is strange; qsort will never be fast because it cannot inline the comparison function. A hand-written qsort of floats should be much faster.
  • I can't wait for someone to port an MP3 encoder to GPGPU. A $300 P3, stuffed with 5 $200 videocards, could host a pool of MP3 encoders, faster than 10 Xeons which would cost $30k. And a lot easier to administer in a streaming farm.
  • a couple of years ago. Nvidia gave him an FX5900 (I think. It was the one that sounded like a hair dryer and got pulled from the market) to do his research with. Anyways, check out his papers [usask.ca] on the subject.
  • The point? (Score:3, Interesting)

    by Pedrito ( 94783 ) on Wednesday June 29, 2005 @09:55AM (#12941350)
    I don't really see the point. I mean, yes, it's kind of neat, but that's about it. It serves no practical purpose, really.

    Not every machine has a GPU. I don't know, but I suspect GPUs aren't terribly compatible with each other, so for any sort of market, you'd have to code for multiple GPU types.

    The fact is, co-processors have been around since the early x86 CPUs. Not just math co-processors, but Intel used to sell i860 and i960 co-processor boards (some with multiple CPUs) for just this sort of thing.

    I'd also suspect that if your GPU is being used for sorting or some other calculation intensive operation, it's less useful as a GPU. If you don't need the GPU as a GPU, but you need the computing power, I suspect spending the additional money on more CPU power instead, is going to have a bigger overall payoff since it's going to speed up every operation, not just the ones a GPU can be adapted to.

    So, again, I don't really see the point. Really, if you need specialized co-processing, getting an FPGA board will probably be a much better use of your money since it can be customized for many more tasks than a GPU.
    • The point is that many machines come with a GPU as default hardware, even if they're never going to do 3D graphics work. If you can make use of something that's *already* on your machine, it's a net gain.

      Also, FPGA boards are not necessarily cheap, while GPUs can be obtained for reasonable amounts of money and have very high performance if your problem can be coded to run on them.
    • As far as generally available co-processors go, GPUs are by far the most common. Therefore you get general availability on a number of machines, and economies of scale to lower the price somewhat.

      Though this particular implementation (sorting) probably won't revolutionize the world, it can be seen as a step, a way of learning how to use the GPU in a more general fashion. Maybe we should see this more as "Hello World" in a new programming language on a new system rather than an end in itself.
  • "Our results on the ATI X800 XT and NVIDIA GeForce 6800 GPUs indicate better performance in comparison to the qsort routine on a 3.4 GHz Pentium IV PC....Note that in both cases, GPUSort performs considerably faster."

    Suddenly paying $250 for a video card doesn't seem like such a bad deal...

    Looks like we have a new videocard benchmark too.
