Software Supercomputing Linux

BOINC Now Available For GPU/CUDA

GDI Lord writes "BOINC, open-source software for volunteer computing and grid computing, has posted news that GPU computing has arrived! The GPUGRID.net project from the Barcelona Biomedical Research Park uses CUDA-capable NVIDIA chips to create an infrastructure for biomolecular simulations. (Currently available for Linux64; other platforms to follow soon. To participate, follow the instructions on the web site.) I think this is great news, as GPUs have shown amazing potential for parallel computing."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • It's thinking... (Score:5, Interesting)

    by neomunk ( 913773 ) on Saturday July 19, 2008 @12:13PM (#24254539)

    As someone who is interested in software neural nets, this announcement practically gives me a chubber.

    And let me be the first to welcome our new Distributed Overlord. The lack of an 's' on "Overlord" is the exciting part of this article.
     

  • by da5idnetlimit.com ( 410908 ) on Saturday July 19, 2008 @12:17PM (#24254573) Journal

    Video conversion for GPU/CUDA (an amd64 version for Ubuntu Heron, if I get to be really choosy)

    I saw something about this, and they were getting unbelievable transcoding speeds...

    • by lavid ( 1020121 )
      The way that CUDA deals with thread death in the current iterations is lacking. If they make that more graceful, you can really expect to see some insane speedups.
  • Single platform only (Score:4, Interesting)

    by DrYak ( 748999 ) on Saturday July 19, 2008 @12:54PM (#24254929) Homepage

    The only sad thing is that CUDA is a single-platform API that only supports a handful of cards from a single manufacturer. For a project like BOINC that tries to get as many computers working together as possible, it would also be good if they tried to support at least one more API.

    Brook would also have been a nice candidate. It has already been used by another distributed computing project (Folding@home), and it supports multiple back-ends: a multi-CPU one which actually works(*), an OpenGL one which works with most hardware, and AMD/ATI's CAL backend featured in their Brook+ fork.

    Too bad that currently both nVidia and Intel are trying to attract customers to proprietary single-platform APIs (CUDA and Ct, respectively).
    Especially given some memory-management weirdness in CUDA.

    (*) : unlike CUDA's device emulation mode which is just a ridiculous joke performance-wise.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      CUDA is being ported to ATI/AMD cards with nVidia's blessing and support. By next year there will probably be a lot of hardware support for the API.

    • Re: (Score:3, Informative)

      by Satis ( 769614 )

      FYI, as the other reply states, CUDA isn't limited to a single manufacturer. nVidia has made it available for other graphics card manufacturers to support. Here's an article on ExtremeTech talking a bit about it, though at least according to the article ATI doesn't appear interested.

      http://www.extremetech.com/article2/0,2845,2324555,00.asp [extremetech.com]

      • Yes, but sorry, CUDA is about as oriented toward other graphics manufacturers as Microsoft's ISO Office XML, with all its "use_spacing_as_in_word_96='ture'" options, is an open standard.

        It is very heavily oriented toward nVidia's architecture, and it has several deeply asinine architectural quirks. (You see, you have several different types of memory. The twist is that 3 of them are accessed using regular pointer arithmetic, but textures are accessed using dedicated specific functions. Because using "[]" o
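        The quirk being described - most CUDA memory spaces using plain pointers while textures need a dedicated fetch intrinsic - looks roughly like this in CUDA C (a hedged sketch; the kernel names and the texture reference are made up for illustration):

        ```cuda
        // Global memory: accessed with ordinary pointer arithmetic / [] indexing.
        __global__ void scale(float *data, float k)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            data[i] *= k;                      // plain [] works here
        }

        // Texture memory: must be bound to a texture reference beforehand,
        // and read through a dedicated intrinsic (tex2D) instead of [].
        texture<float, 2, cudaReadModeElementType> texRef;

        __global__ void sample(float *out, int width)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            out[y * width + x] = tex2D(texRef, x + 0.5f, y + 0.5f);  // no [] allowed
        }
        ```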

        • by krilli ( 303497 )

          CUDA is free and it works. I prefer a hackish CUDA now to a nice, abstract CUDA in two years.

          Also, I do believe someone will write a nice abstraction on top of CUDA. If CUDA is like C++, there will be nice Boost and Qt toolkits for it.

          Also, you can do asynchronous memory transfers and kernel executions ... unless you're talking about something else and it's my misunderstanding.
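          For reference, overlapping copies and kernel work in CUDA is done with streams and pinned host memory; a minimal sketch of the pattern (the buffer names, `myKernel`, and the launch configuration are hypothetical):

          ```cuda
          cudaStream_t stream;
          cudaStreamCreate(&stream);

          float *h_buf, *d_buf;
          cudaMallocHost((void **)&h_buf, N * sizeof(float));  // pinned host memory, needed for true async copies
          cudaMalloc((void **)&d_buf, N * sizeof(float));

          // All three operations are queued on the same stream and return immediately;
          // the copy-in, kernel, and copy-out then execute in order on the device.
          cudaMemcpyAsync(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice, stream);
          myKernel<<<blocks, threads, 0, stream>>>(d_buf, N);
          cudaMemcpyAsync(h_buf, d_buf, N * sizeof(float), cudaMemcpyDeviceToHost, stream);

          cudaStreamSynchronize(stream);  // block the host until the whole pipeline finishes
          ```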

    • Re: (Score:3, Informative)

      by mikael ( 484 )

      There are many parallel processing and networking APIs out there - both past and present - OpenMP, pthreads, CUDA, sockets, etc...

      There is a proposal by Apple to create a common API for parallel processing (OpenCL) which would be cross-platform compatible. The Guardian has an article [guardian.co.uk] on this topic.

    • Brook would also have been a nice candidate. It has already been used by another distributed computing project (Folding@home), and it supports multiple back-ends: a multi-CPU one which actually works(*), an OpenGL one which works with most hardware, and AMD/ATI's CAL backend featured in their Brook+ fork.

      Does Brook provide access like CUDA does to fast shared memory and registers vs. device memory vs. host memory?

      (*) : unlike CUDA's device emulation mode which is just a ridiculous joke performance-wise.

      Just...

      • Does Brook provide access like CUDA does to fast shared memory and registers vs. device memory vs. host memory?

        No. Being multi-platform to begin with, Brook exposes fewer details of the memory architecture underneath (because it can vary widely between platforms - like CPU vs. GPU - or not be exposed at all by the platform underneath - like OpenGL).

        But what it has is that data is represented by simple C-like arrays, and the compiler remaps that to cached, fast texture accesses. No weird "tex2D" functions, unlike CUDA - that's something I find weird in an architecture which is supposed to abstract and simplify GPGPU coding.
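        For comparison, a Brook kernel really does treat its inputs like plain values over arrays; a sketch of a SAXPY kernel in Brook's stream syntax (hedged, reconstructed from memory of the BrookGPU documentation):

        ```c
        /* Brook kernel: x, y, and result are streams, declared with <>.
           Elements are indexed implicitly; on GPU back-ends the compiler
           maps the reads to texture fetches, with no tex2D() in user code. */
        kernel void saxpy(float alpha, float x<>, float y<>, out float result<>)
        {
            result = alpha * x + y;
        }
        ```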

        • by krilli ( 303497 )

          Why don't you get cracking then and write a nice Brook BOINC?

          • by DrYak ( 748999 )

            Why don't you get cracking then and write a nice Brook BOINC?

            I actually *do* happen to write parallel applications using Brook for bioinformatics processing.
            It just happens that the current application I'm paid for developing doesn't use BOINC. Otherwise I would happily contribute.

        • But for something that will run distributed across a huge number of home configurations, like "@home" distributed computing, adding an API which brings in additional architectures and is more abstract makes sense. Going for a single API roughly restricts the code to running on only half of the gaming population's machines.

          If something like Brook could come *near enough* to generating optimal code for both NVIDIA and ATI cards, I'd agree with you whole-heartedly. I strongly suspect that this isn't the case.

          Imagi...

          • Yes, indeed, F@H sports quite an original zoo of various computation engines in order to squeeze as much performance as possible from as many clients as possible. Including a client running on PS3's Cell.

            I agree that BOINC should include support for more than a single API - either adding CAL as you suggest (although that's rather low-level stuff) or adding Brook (which has a CAL backend; I would think that would be better, as it is much higher level).

            And you presume correctly; currently Brook only supports nVidi...

    • by krilli ( 303497 )

      CUDA is really easy to use. So easy to use that BOINC+CUDA got off the ground.

      I don't see any cards other than NVIDIA's that are as effective, considering cost, performance and ease of programming.

      "A handful of cards from a single constructor"? You can also say "Cheap, powerful cards available anywhere".

      CUDA device emulation is only intended as a partial debugging tool.

  • Implement CUDA in Gallium, so all Gallium-capable HW can run CUDA
