AMD Graphics Software Upgrades

AMD Banks On Flood of Stream Apps 124

Slatterz writes "Closely integrating GPU and CPU systems was one of the motivations for AMD's $5.4bn acquisition of ATI in 2006. Now AMD is looking to expand its Stream project, which uses graphics chip processing cores to perform computing tasks normally sent to the CPU, a process known as General Purpose computing on Graphics Processing Units (GPGPU). By leveraging thousands of processing cores on a graphics card for general computing calculations, tasks such as scientific simulations or geographic modelling, which are traditionally the realm of supercomputers, can be performed on smaller, more affordable systems. AMD will release a new driver for its Radeon series on 10 December which will extend Stream capabilities to consumer cards." Reader Vigile adds: "While third-party consumer applications from CyberLink and ArcSoft are due in Q1 2009, in early December AMD will release a new Catalyst driver that opens up stream computing on all 4000-series parts and a new Avivo Video Converter application that promises to drastically increase transcoding speeds. AMD also has partnered with Aprius to build 8-GPU stream computing servers to compete with NVIDIA's Tesla brand."
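To make the idea concrete, here is a minimal sketch (in plain Python, purely illustrative) of the kind of embarrassingly parallel, per-element "kernel" work that GPGPU frameworks like Stream map onto a card's processing cores. The function names are hypothetical; the actual Stream SDK of this era was Brook+/CAL code written in C, not Python.

```python
# Illustrative sketch only: GPGPU frameworks express work as a small
# "kernel" applied independently to every element of a large array.
# A CPU runs the loop below serially; a GPU launches one kernel
# instance per core, thousands at a time -- which is why transcoding
# (a per-pixel job) speeds up so dramatically.

def brighten_kernel(pixel, gain=1.2):
    """Per-pixel work item: runs independently for every element."""
    return min(int(pixel * gain), 255)

def run_kernel(kernel, data):
    """Stand-in for the framework's dispatch: apply the kernel
    across the whole data set."""
    return [kernel(p) for p in data]

frame = [0, 100, 200, 250]                 # a toy 4-pixel "frame"
print(run_kernel(brighten_kernel, frame))  # [0, 120, 240, 255]
```

The point of the model is that no kernel instance depends on any other, so the hardware is free to run as many in parallel as it has cores.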
  • by neonleonb ( 723406 ) on Thursday November 13, 2008 @09:27PM (#25755885) Homepage
    Surely I'm not the only one who thinks this'll be useless without open-source drivers, so you can actually make your fancy cluster use these vector-processing units.
  • Open standard API (Score:4, Interesting)

    by Chandon Seldon ( 43083 ) on Thursday November 13, 2008 @10:02PM (#25756255) Homepage

    So... is there an open standard API for this stuff yet that works on hardware from multiple manufacturers?

    If not, developing for this feels like writing assembly code for Itanium or the IBM Cell processor. Sure, it'll give you pretty good performance now, but the chances of the code still being useful in 5 years are basically zero.

  • Re:And someday (Score:3, Interesting)

    by Max Littlemore ( 1001285 ) on Thursday November 13, 2008 @10:54PM (#25756693)
    Hmmm. The old "I don't know everything about everything, therefore I don't care if I don't know everything about something" argument. Gets 'em every time...
  • by Chandon Seldon ( 43083 ) on Thursday November 13, 2008 @11:37PM (#25756959) Homepage

    Like I said about zealots making things up....

    I think you're letting your personal ideology cloud your view of the world around you.

    Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for!

    Most major companies don't. They happily run employee desktops on Microsoft Windows, because they can easily swap them out when they break. They run critical legacy systems on IBM mainframes (or whatever). And they run new critical systems on platforms that are almost entirely FOSS. I'm sure you can easily come up with a counterexample, but they're the exception, not the rule.

  • by 644bd346996 ( 1012333 ) on Friday November 14, 2008 @12:06AM (#25757141)

    What about things like GLSL compilers? Are they in hardware too?

  • by hackerjoe ( 159094 ) on Friday November 14, 2008 @12:52AM (#25757385)

    Wow, that's really fascinating, because the Sweeney article you mention has him going on and on about generally programmable vector processors which make heavy use of that SIMD thing you so hate.

    Oh wait, I didn't mean fascinating, I meant boring and you're an idiot. Engineers don't implement SIMD instructions because vector processors are easy to program, they implement them because they are cheap enough that they're worth having even if you hardly use them, never mind problem domains like graphics where you're expecting to use them all the time.

    (And yes, I did read your article too, just to be charitable. It's amusing that you think the Cell is "a perfect example of how not to design a multicore processor", because the first step of writing software that performs well on the Cell is to break it down into a signal processing chain. What's hilarious, though, is that you think this will make software easier to write. Clearly you haven't tried to write any real software using your proposed system yet, much less worked with a team on it.)
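The SIMD point in this exchange is easy to show with a toy model (a hypothetical sketch, not any real instruction set): one instruction applies the same operation across every lane of a vector register at once, which is cheap to build in hardware precisely because the lanes share control logic.

```python
# Toy model of a 4-wide SIMD add. Real SIMD units (SSE, AltiVec,
# the Cell's SPUs) do this in hardware: one issued instruction
# operates on all lanes of a vector register in the same cycle.

LANES = 4

def simd_add(a, b):
    """One 'instruction': elementwise add across all LANES lanes."""
    assert len(a) == LANES and len(b) == LANES
    return [x + y for x, y in zip(a, b)]

def vector_sum(xs, ys):
    """Process long arrays LANES elements per 'instruction' --
    a quarter of the instruction count of a scalar loop."""
    out = []
    for i in range(0, len(xs), LANES):
        out += simd_add(xs[i:i + LANES], ys[i:i + LANES])
    return out

print(vector_sum([1, 2, 3, 4, 5, 6, 7, 8],
                 [10, 20, 30, 40, 50, 60, 70, 80]))
# [11, 22, 33, 44, 55, 66, 77, 88]
```

This is also why "cheap enough to be worth having even if you hardly use them" holds: the adders already exist, and SIMD mostly adds wider registers and shared decode.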

  • by Anonymous Coward on Friday November 14, 2008 @05:26AM (#25758441)

    > And my point is, why? All you need is a decent API.

    Well, that assumes closed source can provide a decent API.
    Considering the huge number of bugs even the CUDA compilers have (and they are fairly good compared to others, particularly FPGA synthesis tools), there is a severe risk that you will get stuck in your project without the possibility to do _anything_ about it.
    Closed source also leads to such ridiculousness as the disassembler only being available as a third-party tool (decuda) making it even harder to find the bugs in the tools.
    (But yes, calling it useless is over-the-top.)
