
BOINC Now Available For GPU/CUDA

GDI Lord writes "BOINC, open-source software for volunteer computing and grid computing, has posted news that GPU computing has arrived! The GPUGRID.net project from the Barcelona Biomedical Research Park uses CUDA-capable NVIDIA chips to create an infrastructure for biomolecular simulations. (Currently available for Linux64; other platforms to follow soon. To participate, follow the instructions on the web site.) I think this is great news, as GPUs have shown amazing potential for parallel computing."


  • by DrYak ( 748999 ) on Sunday July 20, 2008 @04:26AM (#24260327) Homepage

    Does Brook provide access like CUDA does to fast shared memory and registers vs. device memory vs. host memory?

    No. Being multiplatform to begin with, Brook exposes fewer details of the underlying memory architecture (because it can vary widely between platforms - say, CPU vs. GPU - or not be exposed at all by the platform underneath, as with OpenGL).

    But what it has going for it is that data is represented by simple C-like arrays, and the compiler remaps those to fast, cached texture accesses. No weird "tex2D" functions, unlike CUDA - that's something I find odd in an architecture which is supposed to abstract and simplify GPGPU coding, especially when all the other memory types in CUDA are accessed using C pointer math.
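
    For illustration, here is a rough sketch of what I mean (a toy example of my own, not taken from the SDK): the same data is read once through a texture reference with tex2D and once through an ordinary device pointer. Names like texIn and add_row are made up for the example.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Cached, read-only 2D texture (texture-reference API).
    texture<float, 2, cudaReadModeElementType> texIn;

    __global__ void add_row(float *out, const float *plain, int width, int row)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        if (x < width) {
            float a = tex2D(texIn, x + 0.5f, row + 0.5f); // special texture intrinsic
            float b = plain[row * width + x];             // plain C pointer math
            out[x] = a + b;
        }
    }

    int main()
    {
        const int W = 256, H = 4, N = W * H;
        float h_in[N];
        for (int i = 0; i < N; ++i) h_in[i] = (float)i;

        // Texture data lives in a cudaArray bound to the texture reference.
        cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
        cudaArray *arr;
        cudaMallocArray(&arr, &desc, W, H);
        cudaMemcpyToArray(arr, 0, 0, h_in, N * sizeof(float), cudaMemcpyHostToDevice);
        cudaBindTextureToArray(texIn, arr, desc);

        // The same data again, as an ordinary device pointer.
        float *d_plain, *d_out;
        cudaMalloc(&d_plain, N * sizeof(float));
        cudaMalloc(&d_out, W * sizeof(float));
        cudaMemcpy(d_plain, h_in, N * sizeof(float), cudaMemcpyHostToDevice);

        add_row<<<(W + 127) / 128, 128>>>(d_out, d_plain, W, 1);

        float h_out[W];
        cudaMemcpy(h_out, d_out, W * sizeof(float), cudaMemcpyDeviceToHost);
        printf("out[0] = %f\n", h_out[0]); // expect 2 * h_in[W] = 512

        cudaUnbindTexture(texIn);
        cudaFreeArray(arr);
        cudaFree(d_plain);
        cudaFree(d_out);
        return 0;
    }

    Note how the texture path needs its own binding ritual and intrinsic, while every other memory space is just a pointer.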

    Probably, now that ATI's Brook+ is maturing, extra attributes on variable declarations could be introduced to give more influence over the memory organisation on that specific back-end.

    CUDA is nice because it enables very low-level control over how memory is used. But this currently comes at the cost of syntax complexity.
    It's interesting to note that both CUDA and Brook+ use matrix multiplication as an example of language usage. Brook+ simply explains how to partition the work to keep the data nicely inside the fast cache. CUDA has a significant number of code lines devoted to moving data between several Hungarian notation-prefixed pointers, which is a little bit more confusing.
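
    For comparison, here is a condensed sketch of that kind of tiled CUDA matrix multiply (my own rough version, not the SDK sample itself); note the h_/d_ pointer bookkeeping and the explicit staging of tiles through __shared__ memory.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define TILE 16

    __global__ void matmul(const float *d_A, const float *d_B, float *d_C, int n)
    {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;

        for (int t = 0; t < n / TILE; ++t) {
            // Stage one tile of A and one tile of B into fast shared memory.
            As[threadIdx.y][threadIdx.x] = d_A[row * n + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = d_B[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();

            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        d_C[row * n + col] = acc;
    }

    int main()
    {
        const int n = 256; // assume n is a multiple of TILE
        size_t bytes = n * n * sizeof(float);

        // Host-side buffers (h_ prefix)...
        float *h_A = (float *)malloc(bytes);
        float *h_B = (float *)malloc(bytes);
        float *h_C = (float *)malloc(bytes);
        for (int i = 0; i < n * n; ++i) { h_A[i] = 1.0f; h_B[i] = 2.0f; }

        // ...and their device-side counterparts (d_ prefix).
        float *d_A, *d_B, *d_C;
        cudaMalloc(&d_A, bytes);
        cudaMalloc(&d_B, bytes);
        cudaMalloc(&d_C, bytes);
        cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

        dim3 block(TILE, TILE);
        dim3 grid(n / TILE, n / TILE);
        matmul<<<grid, block>>>(d_A, d_B, d_C, n);

        cudaMemcpy(h_C, d_C, bytes, cudaMemcpyDeviceToHost);
        printf("C[0] = %f (expected %f)\n", h_C[0], 2.0f * n);

        cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
        free(h_A); free(h_B); free(h_C);
        return 0;
    }

    In Brook+ the equivalent example is mostly just the kernel body; the copying and the prefixes are what CUDA adds on top.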

    Just to pick a nit, I'm pretty sure that the point of device emulation mode is ease of debugging, not performance.

    But to be debuggable, the code must at least be runnable. Sadly, the emulation is so slow that it can run real-world complex algorithms only on really small data sets, which might be corner cases, so you might miss bugs that only happen on larger data sets. Also, it always runs single-threaded, no matter how many cores are available in the system, which may lead to missing some concurrency problems (code works fine on the CPU but breaks on the GPU because a sync is missing somewhere).

    It can be used to debug short matrix-operation algorithms, but it's very hard to debug more complex things like sequence analysis (and there are even a couple of teams trying to do parallelised antivirus scanning on the GPU).
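
    Here's a contrived sketch of the kind of concurrency bug I mean (the buffer reversal and the build flag are just for illustration). Built with the emulator (nvcc -deviceemu), the threads run one after another, so every shared slot is already written before any is read and the "reversal" looks correct; on real hardware the warps race because of the forgotten __syncthreads().

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void reverse_block(int *d_data, int n)
    {
        __shared__ int buf[256];
        int i = threadIdx.x;
        buf[i] = d_data[i];
        // __syncthreads();         // <-- required, but "forgotten"
        d_data[i] = buf[n - 1 - i]; // may read a slot another warp hasn't filled yet
    }

    int main()
    {
        const int n = 256;
        int h_data[n];
        for (int i = 0; i < n; ++i) h_data[i] = i;

        int *d_data;
        cudaMalloc(&d_data, n * sizeof(int));
        cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);

        reverse_block<<<1, n>>>(d_data, n);

        cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost);
        printf("h_data[0] = %d (should be %d)\n", h_data[0], n - 1);

        cudaFree(d_data);
        return 0;
    }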

    But at this early stage, with things still emerging, using CUDA directly seems to have some advantages.

    There are cases where the low-level nature of CUDA definitely makes sense:
    when developing code for purpose-built hardware. Say the lab you work in has built a machine with a couple of GeForces inside for your project (given the price of graphics cards and the performance increase between each generation, it makes sense to just throw in a couple of hundred bucks per graphics card for a specific project when the performance need arises). CUDA makes sense - even if it is ugly in places - because it'll let you squeeze the last possible cycle out of the hardware.

    But for something that will run distributed across a huge number of home configurations, like "@home" distributed computing, a more abstract API which can bring in additional architectures makes sense. Going for a single vendor's API roughly restricts the code to running on only half of the gaming population's machines.
