BOINC Now Available For GPU/CUDA
GDI Lord writes "BOINC, open-source software for volunteer computing and grid computing, has posted news that GPU computing has arrived! The GPUGRID.net project from the Barcelona Biomedical Research Park uses CUDA-capable NVIDIA chips to create an infrastructure for biomolecular simulations. (Currently available for Linux64; other platforms to follow soon. To participate, follow the instructions on the web site.) I think this is great news, as GPUs have shown amazing potential for parallel computing."
Re:Single platform only (Score:2, Informative)
CUDA is being ported to ATI/AMD cards with nVidia's blessing and support. By next year there will probably be a lot of hardware support for the API.
Re:Single platform only (Score:3, Informative)
FYI, as the other reply states, CUDA isn't limited to a single manufacturer. nVidia has made it available for other graphics card manufacturers to support. Here's an article on ExtremeTech talking a bit about it, though at least according to the article ATI doesn't appear interested.
http://www.extremetech.com/article2/0,2845,2324555,00.asp [extremetech.com]
Re:Single platform only (Score:3, Informative)
There are many parallel processing and networking APIs out there - both past and present - OpenMP, pthreads, CUDA, sockets, etc...
There is a proposal by Apple to create a common API for parallel processing (OpenCL) which would be cross-platform compatible. The Guardian has an article [guardian.co.uk] on this topic.
CUDA is extremely nVidia oriented. (Score:3, Informative)
Yes, but sorry: CUDA is about as oriented toward other graphics manufacturers as Microsoft's ISO Office XML, with all its "use_spacing_as_in_word_96='true'" options, is an open standard.
It is very heavily oriented toward nVidia's architecture, and it has several deeply asinine quirks. (You see, there are several different types of memory. The twist is that three of them are accessed using regular pointer arithmetic, but textures are accessed through dedicated functions, because using the "[]" operator like every other memory type would have been too straightforward.)
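A minimal sketch of the asymmetry described above, using the old texture-reference runtime API from that era (the kernel and variable names here are illustrative, not from any real project):

```cuda
// Global, shared, and constant memory are all indexed with ordinary "[]",
// but texture memory needs a file-scope texture reference plus a dedicated
// fetch intrinsic.

texture<float, 1, cudaReadModeElementType> texRef;  // texture reference

__constant__ float coeff[16];                       // constant memory: plain []

__global__ void demo(const float *global_in, float *out, int n)
{
    __shared__ float tile[256];                     // shared memory: plain []
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    tile[threadIdx.x] = global_in[i];               // global memory: plain []
    __syncthreads();

    float t = tex1Dfetch(texRef, i);                // texture: special intrinsic,
                                                    // operator[] not allowed
    out[i] = tile[threadIdx.x] * coeff[0] + t;
}
```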
Also, instead of simply being able to declare stream buffers and bind them to some data with a language extension (as in Brook, for example), you have to go through a couple of specific function calls into the CUDA API. It's 1980s-style C all over again.
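A sketch of the contrast the comment is drawing, assuming a texture reference `texRef` has been declared elsewhere; the host-side buffer names are illustrative:

```cuda
// CUDA: every buffer goes through explicit API calls -
// allocate, copy, then bind before the kernel can fetch from it.
float *d_in;
cudaMalloc((void **)&d_in, n * sizeof(float));            // allocate on device
cudaMemcpy(d_in, h_in, n * sizeof(float),
           cudaMemcpyHostToDevice);                       // explicit copy
cudaBindTexture(NULL, texRef, d_in, n * sizeof(float));   // bind for tex1Dfetch

// Brook, by contrast, declares a stream directly in the language:
//   float s<n>;
//   streamRead(s, h_in);
```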
The whole thing is very much directed toward an architecture like nVidia's, which can't apply a kernel on the fly while streaming data from main memory to the graphics card, and instead relies on overlapping concurrent kernels and loads.
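What that overlap looks like in practice, as a hedged sketch (CUDA 1.1-era streams; `kernel`, the buffer names, and the launch configuration are all illustrative, and the host buffers would need to be page-locked via `cudaMallocHost` for the async copies to actually overlap):

```cuda
// Overlapping transfers and computation on separate chunks via streams:
// while chunk 0 is being processed, chunk 1 is being copied, and so on.
cudaStream_t s0, s1;
cudaStreamCreate(&s0);
cudaStreamCreate(&s1);

cudaMemcpyAsync(d_buf0, h_buf0, bytes, cudaMemcpyHostToDevice, s0);
kernel<<<blocks, threads, 0, s0>>>(d_buf0);

cudaMemcpyAsync(d_buf1, h_buf1, bytes, cudaMemcpyHostToDevice, s1);
kernel<<<blocks, threads, 0, s1>>>(d_buf1);

cudaThreadSynchronize();   // wait for both streams to finish
```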
And don't get me started on the weird tendency to require the user to go through function calls just to set a constant to its default value (instead of simply declaring it and accessing it directly).
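A sketch of the constant-setting dance being complained about; the variable name and helper are hypothetical:

```cuda
// A __constant__ variable can't be assigned from host code directly;
// changing it requires going through the runtime API.

__constant__ float threshold;           // device-side constant

void set_default_threshold(void)        // illustrative host-side helper
{
    float v = 0.5f;
    cudaMemcpyToSymbol(threshold, &v, sizeof(v));   // instead of: threshold = 0.5f;
}
```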
CUDA provides a nice C-like language for kernels, but the host code itself looks like a direct dump of the driver's interface.
It's definitely something that won't be easily used by third-party developers or map nicely onto other architectures.
That's why ATI isn't interested: most of the host API is designed in a way that is very nVidia-oriented and won't necessarily map nicely onto other architectures.
FYI, I've been working on several projects using both CUDA and Brook. Although I appreciate CUDA's speed gains, and I appreciate having several C dialects that let me port an algorithm between C, CUDA, and Brook without too much effort, I still find that Brook has a nicer and much more abstract architecture.