
AMD Banks On Flood of Stream Apps

Slatterz writes "Closely integrating GPU and CPU systems was one of the motivations for AMD's $5.4bn acquisition of ATI in 2006. Now AMD is looking to expand its Stream project, which uses graphics chip processing cores to perform computing tasks normally sent to the CPU, a process known as General Purpose computing on Graphics Processing Units (GPGPU). By leveraging thousands of processing cores on a graphics card for general computing calculations, tasks such as scientific simulations or geographic modelling, which are traditionally the realm of supercomputers, can be performed on smaller, more affordable systems. AMD will release a new driver for its Radeon series on 10 December which will extend Stream capabilities to consumer cards." Reader Vigile adds: "While third-party consumer applications from CyberLink and ArcSoft are due in Q1 2009, in early December AMD will release a new Catalyst driver that opens up stream computing on all 4000-series parts and a new Avivo Video Converter application that promises to drastically increase transcoding speeds. AMD also has partnered with Aprius to build 8-GPU stream computing servers to compete with NVIDIA's Tesla brand."
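To make the offload model concrete, here is a minimal CUDA-style sketch of the kind of data-parallel work both Stream and CUDA target (the kernel, names, and sizes are illustrative only; AMD's Brook+/Stream stack uses different syntax):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Each thread handles one element; the card runs thousands of them at once.
    __global__ void scale(float *data, float gain, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= gain;
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *host = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float *dev = 0;
        cudaMalloc((void **)&dev, bytes);                      // allocate on the card
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // ship the input over PCIe

        scale<<<(n + 255) / 256, 256>>>(dev, 0.5f, n);         // one thread per element

        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // bring the result back
        printf("first element after scaling: %f\n", host[0]);

        cudaFree(dev);
        free(host);
        return 0;
    }

The same pattern, copy in, run thousands of lightweight threads, copy out, is what the transcoding and simulation workloads mentioned in the story map onto.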


  • by neonleonb ( 723406 ) on Thursday November 13, 2008 @09:27PM (#25755885) Homepage
    Surely I'm not the only one who thinks this'll be useless without open-source drivers, so you can actually make your fancy cluster use these vector-processing units.
    • Re: (Score:1, Insightful)

      by ceoyoyo ( 59147 )

      Uh, what? Just like your video card is useless for displaying graphics without open source drivers?

      • Re: (Score:3, Informative)

        Uh, what? Just like your video card is useless for displaying graphics without open source drivers?

        We're not talking about video games here. Some people use computers for important work, not just for screwing around.

        • by Cassius Corodes ( 1084513 ) on Thursday November 13, 2008 @09:59PM (#25756209)

          We're not talking about video games here. Some people use computers for important work, not just for screwing around.

          How dare ye!

        • by ceoyoyo ( 59147 ) on Thursday November 13, 2008 @10:15PM (#25756357)

          Yes, thank you for telling me. I use mine for cancer research. That includes GPGPU, by the way. Yes, I'm serious.

          I don't believe I know anyone who uses the source code for their video driver. All the GPGPU people use one of the GPU programming languages. The hard core ones use assembly. The young 'uns will grow up with CUDA. None of those requires the source code for the driver.
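          For reference, a GPGPU program's view of the hardware really is just the published API; a minimal CUDA runtime sketch that enumerates devices (field names come from the documented cudaDeviceProp structure; the printed text is arbitrary):

            #include <cuda_runtime.h>
            #include <cstdio>

            int main() {
                int count = 0;
                cudaGetDeviceCount(&count);
                for (int d = 0; d < count; ++d) {
                    cudaDeviceProp prop;
                    cudaGetDeviceProperties(&prop, d);
                    // Compute capability 1.3 and later NVIDIA parts are the ones with double precision.
                    printf("device %d: %s, compute %d.%d, %d multiprocessors\n",
                           d, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
                }
                return 0;
            }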

          • i think the point is that if you want to use GPGPU for cluster computing it's ideal to have open source drivers, especially if you're not going to release Linux/Unix drivers yourself.

            • by ceoyoyo ( 59147 ) on Thursday November 13, 2008 @10:30PM (#25756507)

               And my point is, why? All you need is a decent API. Claiming it's useless without open source drivers is just a silly ruse by an open source zealot to advance an agenda.

               Open source has a lot of things going for it, but its more fanatical followers are not among them.

              • by lysergic.acid ( 845423 ) on Thursday November 13, 2008 @11:17PM (#25756829) Homepage

                what agenda are they advancing? the agenda of being able to use this feature on the platform they are running?

                sure, if all hardware manufacturers were in the habit of releasing Unix & Linux drivers then closed-source binaries and a decent API would be fine. but the reality is that many manufacturers do not have good Linux/Unix support. that is fine. but if they want to leave it to the community to develop the Linux/Unix drivers themselves then it would be really helpful to have open source Windows drivers to use as a template.

                it's not useless to you since you're running Windows, but not everyone uses a Windows platform for their research. for those people it would be useless without either open source drivers or a set of Linux/Unix drivers. i mean, if you're already running a Beowulf cluster of Linux/BSD/Solaris machines then it might not be practical to convert them to a Windows cluster (can you even run a Beowulf cluster of Windows machines?), not to mention the cost of buying 64 new Windows licenses and porting all of your existing applications to Windows.

                it's probably an exaggeration to say that closed-source drivers are useless. and perhaps AMD will release Linux/Solaris/Unix drivers. but if they're not going to, then open sourcing the Windows drivers and the hardware specs would be the next best thing. and the outcry for open source drivers isn't without some merit, since past Linux support by AMD/ATI with proprietary drivers has left much to be desired, with Linux drivers only receiving updates half as often as the Windows drivers and consistently underperforming against comparable graphics cards.

                • by ceoyoyo ( 59147 ) on Thursday November 13, 2008 @11:30PM (#25756927)

                  It's fine to lobby for open source drivers. It's also great if you want to run something on your chosen platform and you want the company who makes the hardware to support that. Both of those I can wholeheartedly support.

                  Claiming that something is useless without open source drivers is either dishonest or deluded. As I said, I don't think the important goals of the open source movement are served by either lying or ranting about your delusions.

                  • Well, since a lot of closed source drivers have redistribution issues, I would say that makes it more difficult to use with clusters of computers.
                 with Linux drivers only receiving updates half as often as the Windows drivers and consistently underperforming against comparable graphics cards

                  If something hurts, stop doing it.

                  You expect the world to cater to your lifestyle choices. You made the choice to run a platform that isn't well supported by video card manufacturers. Either stop using the platform, or find video cards that work on your platform. What if there are no good video cards for your platform? Tough luck. Sorry. You should have cons

                  • well, now I know what follows arrogance: optimism.
                  • by lysergic.acid ( 845423 ) on Friday November 14, 2008 @12:16AM (#25757193) Homepage

                    i'm a graphic designer, so i run Windows. i haven't touched Linux or Unix in over half a decade. but i'm not a selfish jackass who thinks that only my needs are important, and that as long as they are met everyone else can just go to hell.

                    there's nothing arrogant about expecting hardware manufacturers to support the 3 most popular OSes: Windows, OS X, and Linux. and it's precisely because people understand that hardware manufacturers can't be expected to support every single OS out there (even well-known ones like Solaris, FreeBSD, BeOS, etc.) that people are pushing for open source drivers.

                    your mom may not have told you this, but businesses depend on their customers to make money. so listening to consumers and meeting consumer demands is generally a good idea (ever heard of market research?). by allowing their hardware to be used on a wider range of platforms they are broadening the market for their products.

                    AMD isn't in the business of selling video card drivers, just the video cards. that is why they have open sourced their Radeon drivers in the past. and if we were all as simple-minded as your mom, then no one would ever speak up for themselves. and hardware manufacturers aren't run by mind readers.

                    • Re: (Score:1, Troll)

                      by jsoderba ( 105512 )

                      Your mom might have told you that the customer is always right, but that is not actually true. When customers make a nontrivial request, a wise manager will do a cost-benefit analysis. Is the gain from fulfilling the customers' request greater than the cost of doing so? (Where cost includes the direct cost of labor and materials as well as the PR cost of snubbing customers.) Developing and supporting drivers is a lot of work. So is writing and supporting documentation.

                      AMD did not open-source their Radeon drivers. Th

                    • your mom may not have told you this, but businesses depend on their customers to make money. so listening to consumers and meeting consumer demands is generally a good idea (ever heard of market research?)

                      There's your problem: for most big businesses, consumer != customer. As a consumer you have no importance as an individual, and often you are only vaguely relevant as a secondary market. Especially if you are a niche market.

                      AMD isn't in the business of selling video card drivers, just the video cards.

                      To a distributor. Who sells them to a retailer. Who sells them to you.

                      When your mom-and-pop computer shop opens their own chip manufacturing plant and sells direct to the public, then your opinion might start to matter.

                    • by Kjella ( 173770 )

                      there's nothing arrogant about expecting hardware manufacturers to support the 3 most popular OSes: Windows, OS X, and Linux. and it's precisely because people understand that hardware manufacturers can't be expected to support every single OS out there (even well-known ones like Solaris, FreeBSD, BeOS, etc.) that people are pushing for open source drivers.

                      Most people just want the magic driver fairy to come, and don't really care if it's open or closed source, but they've heard that the OSS community will write drivers for their ultra-obscure platform even though no one else will. Yes, sometimes the community writes its own driver, but in many other cases you see companies still have to do most of the work to make an OSS driver happen. Which usually get flamed if they're not on par with the closed source Windows drivers anyway. Writing up and clearing detailed

                  • by Fulcrum of Evil ( 560260 ) on Friday November 14, 2008 @01:17AM (#25757529)

                    You expect the world to cater to your lifestyle choices.

                    Of course - we are a customer base, and we expect to have our needs catered to.

                    You made the choice to run a platform that isn't well supported by video card manufacturers. Either stop using the platform, or find video cards that work on your platform. What if there are no good video cards for your platform? Tough luck. Sorry. You should have considered that before installing the OS, eh?

                    Third choice: lobby for support from major chip manufacturers. What makes you think a large group of users is powerless, anyway? 5% of 100M is 5MM people, with some of them having cause to buy 100+ units of product.

                    It is beyond arrogant to expect the world to cater to your choice of operating system.

                    Don't need the world. Just need a couple companies.

                  • You understand that you are advocating less freedom for no reason right?
                  • What if there are no good video cards for your platform? Tough luck. Sorry. You should have considered that before installing the OS, eh?
                    It is beyond arrogant to expect the world to cater to your choice of operating system.

                    Please consider the subject again.

                    Not everyone is using his/her GPU to play WoW on Windows.
                    We are talking about scientific computations. GPGPUs, clusters, etc...

                    Before installing the OS, the scientists also have to consider which is best for their work.
                    Currently, hardly anything other than a UNIX-based OS is able to handle the load gracefully.
                    In addition, the scientists are trying to build massively parallel computing stations, not just your average "one quad-core CPU, 2 or 3 GPUs" enthusiast gamer rig.
                    We're

                  • It's called 'being a customer'. That's what these freakish and weird entities called 'customers' do. They ask for you to support them in the choices they make. I know, it's a strange and bizarre concept. Those pesky customers should just be happy with what you decide in your great beneficence to give them, but strangely they always seem to want more.

              • by cj1127 ( 1077329 )
                True. I may be sticking my neck out, but from my point of view I'd say that the open-source movement could really use losing Stallman in order to gain market share. Closed-source software isn't immoral, nor is it required for a large number of applications, so why are Stallman et al. pushing for "all or nothing" when it comes to open source?
                • by spauldo ( 118058 )

                  They're pushing for it because someone has to.

                  If there weren't people out there pushing the cause in its purest sense, we wouldn't have half the progress we do. Stallman has a role, and it's a vital one. If it wasn't him (or at least his push for a total free software world) the GNU system wouldn't have started up, and we wouldn't have Linux at all.

                  That doesn't mean you have to buy into it. Everyone pretty much knows there's lots of room for closed software. Stallman just represents the extreme, which y

              • Re: (Score:3, Interesting)

                by Anonymous Coward

                > And my point is, why? All you need is a decent API.

                Well, that assumes closed source can provide a decent API.
                Considering the huge number of bugs even the CUDA compilers have (and they are fairly good compared to others, particularly FPGA synthesis tools), there is a severe risk that you will get stuck in your project without the possibility of doing _anything_ about it.
                Closed source also leads to such ridiculousness as the disassembler only being available as a third-party tool (decuda) making it even hard

          • None of those requires the source code for the driver.

            And Google just serves web pages, which doesn't require access to the source code for the web server. That doesn't mean that they'd be caught dead using a binary blob for a web server. It's just not an acceptable business risk.

            It's like having backups. Sure, restoring from backups isn't part of the plan, but not having backups isn't a risk that anyone takes with important business data. Personally, I'd consider my research data to be even more important

            • And someday (Score:4, Funny)

              by coryking ( 104614 ) * on Thursday November 13, 2008 @10:37PM (#25756575) Homepage Journal

              Lord only knows what kinds of boobie traps they put in power supplies. The CIA and the NFL probably know more about you than you realize thanks to that "120V power supply" on the back of each computer in Google's data center. I mean, unless you have the schematics, how do you really know what it is doing?

              You don't. Neither does Google. The wise are already beginning to short GOOG. Will their shareholders wake up and demand schematics? Only time will tell.

              • Re: (Score:3, Interesting)

                Hmmm. The old "I don't know everything about everything, therefore I don't care if I don't know everything about something" argument. Gets 'em every time..
              • The CIA and the NFL probably know more about you than you realize ...

                Should I be concerned that Al Michaels and John Madden might pull up in front of my house, driving a nondescript white panel van bristling with antennas (for continuous game coverage, no doubt) wondering why I wasn't watching the Cowboys/Redskins game last night?

              • I don't need a schematic for the power supply. It's not that complex. I can open it up, trace the circuit with a probe, and learn everything there is to know about it without much difficulty.

                Keeping error under control during data analysis is hard enough already. The GP is perfectly reasonable in wanting to be able to examine every bit of his toolchain, especially when he might be purchasing very expensive CPU time to actually run his code, and surprisingly small data errors could completely hose his result

            • by ceoyoyo ( 59147 ) on Thursday November 13, 2008 @11:00PM (#25756729)

              Like I said about zealots making things up....

              Your argument about the business world not using non-open source is spot on. Excellent example. Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for! Never mind closed source apps! Naturally they only buy video cards that have open source drivers too.

              • Re: (Score:2, Interesting)

                Like I said about zealots making things up....

                I think you're letting your personal ideology cloud your view of the world around you.

                Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for!

                Most major companies don't. They happily run employee desktops on Microsoft Windows, because they can easily swap them out when they break. They run critical legacy systems on IBM mainframes (or whatever). And they run new critical systems on platforms that are almost entirely FO

                • Re: (Score:3, Informative)

                  by drinkypoo ( 153816 )

                  I think you're letting your personal ideology cloud your view of the world around you.

                  I think you missed the sounds of sarcasm. Not too hard to do, as it was confusingly mixed with some other, simpler HHOS-style text.

                  Of COURSE nobody would trust their critical systems to, say, an OS they don't have the source for!

                  Most major companies don't. They happily run employee desktops on Microsoft Windows, because they can easily swap them out when they break.

                  Not a critical system, then. Critical systems are the machines that cause serious problems when they fail.

                  They run critical legacy systems on IBM mainframes (or whatever). And they run new critical systems on platforms that are almost entirely FOSS.

                  IBM sells more Linux than AIX today, and they sell quite a bit of Linux across their line, at least on the systems-formerly-known-as-S/390-and-RS/6000. I'm not sure if I'm disagreeing with you, or proving your point, but whatever.

                  All I know for sure is that the EULA for Wind

                  • All I know for sure is that the EULA for Windows prohibits using it to control a nuclear reactor, or at least it used to, and it bloody well should.

                    But voting machines, that can ultimately send hundreds of thousands to war, no problemo! [youtube.com]

          • "The hard core ones use assembly."

            Not on GPUs. The asm for any particular GPU line is not published and (according to sources at GPU companies) changes fairly often. Drivers that expose the GPU via OpenGL, DirectX, CUDA, etc. hide this fact from the programmer and allow the manufacturers to change the underlying hardware without changing the programmatic interface. These higher level APIs _are_ the assembly language for GPUs (a rough driver-API sketch follows this comment).

            As an aside, this is the main reason CorePy (a Python-based assembly system I d
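            A rough sketch of what "the API is the assembly language" looks like in practice with NVIDIA's driver API: the program hands the driver portable PTX rather than raw GPU machine code, and the driver finishes compilation for whichever chip is installed. (This uses the later cuLaunchKernel entry point for brevity; kernel.ptx and the kernel name "scale" are placeholders, and the hypothetical kernel takes a single buffer argument.)

              #include <cuda.h>
              #include <stdio.h>

              int main() {
                  CUdevice dev;
                  CUcontext ctx;
                  CUmodule mod;
                  CUfunction fn;
                  CUdeviceptr buf;

                  cuInit(0);
                  cuDeviceGet(&dev, 0);
                  cuCtxCreate(&ctx, 0, dev);

                  // kernel.ptx is the portable pseudo-assembly the toolchain emits; the driver
                  // JIT-compiles it for whatever GPU is actually installed.
                  cuModuleLoad(&mod, "kernel.ptx");
                  cuModuleGetFunction(&fn, mod, "scale");

                  cuMemAlloc(&buf, 1024 * sizeof(float));

                  void *args[] = { &buf };
                  cuLaunchKernel(fn, 4, 1, 1,    /* grid  */
                                     256, 1, 1,  /* block */
                                     0, NULL, args, NULL);
                  cuCtxSynchronize();

                  cuMemFree(buf);
                  cuModuleUnload(mod);
                  cuCtxDestroy(ctx);
                  return 0;
              }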

        • Re: (Score:1, Redundant)

          by aliquis ( 678370 )

          What?!! You can use your graphics card to get screwed!?!

          Crossfire HD4870X2, BABY!

          Length? What the fuck do I care, my case is big enough.

        • Seeing as you are currently using yours to browse slashdot, you obviously aren't one of those people.
      • In my case it is. I only run Linux :p

    • by TubeSteak ( 669689 ) on Thursday November 13, 2008 @09:59PM (#25756203) Journal

      Surely I'm not the only one who thinks this'll be useless without open-source drivers, so you can actually make your fancy cluster use these vector-processing units.

      You may or may not be surprised by this, but not all of the magic happens in hardware, which is why you don't see open sourced drivers for a lot of stuff.

      Sometimes it just makes sense to put the optimizations in the driver, so that when you tweak them later, you don't have to flash the BIOS.

      • Think.

        GPGPU is USELESS unless it is in hardware. If it were a "driver thing", it couldn't possibly work. Indeed, the GPGPU user would get worse performance, because the multiple execution units would have to be simulated...

        So, these "optimizations" are not in the driver.

      • You may or may not be surprised by this, but not all of the magic happens in hardware, which is why you don't see open sourced drivers for a lot of stuff.

        That was true in the mid-90s, when drivers meant the difference between 10fps and 30fps. We're in the late 00s now, where drivers are as thin as possible, so that APIs are sitting right on top of hardware commands, and compiling shaders is about the only thing left to do in software alone.

        Any such optimizations or color corrections are typicall
    • Re: (Score:3, Insightful)

      Plenty of people seem to be getting serious work done with NVidia's proprietary Linux CUDA drivers.

    • blah blah blah (Score:2, Insightful)

      by coryking ( 104614 ) *

      You chose to run on a platform knowing full well these things aren't likely to be supported. Very little sympathy from me. Sorry.

      If you want to encourage more drivers on your platform of choice, perhaps you might consider making it easier for hardware companies to target your kernel. Maybe consider, oh I don't know, a stable, predictable ABI?

      Maybe lose the attitude as well. The world doesn't owe you or your OS choices anything. All you can do is focus your efforts at making your platform of choice at

    • by Aardpig ( 622459 )
      You may not be the only one, but that doesn't mean you're not wrong. I'm currently working on using NVIDIA GPUs w/ CUDA to do seismic modeling of massive, luminous stars. The fact that the drivers are closed-source doesn't count for shit.
    • Yeah, no, you're the only one.

      Unless, of course, you feel like writing the support into the open drivers. I've got other things to do, as do most of my compatriots.

      Not to be insulting, but OpenGL 2.0 support, DRI2, KMS, are all of much higher importance than yet another GPGPU language, especially since Radeons are still not on the Gallium system yet.

      ~ C.

    • by yabos ( 719499 )
      If that were true then there wouldn't be any software today. You don't need open source drivers to use the published API. That's what an API is for. You certainly won't need the source code to the Mac GPU drivers to use Apple's upcoming OpenCL APIs.
    • Surely I'm not the only one who thinks this'll be useless without open-source drivers

      Well, at least today's GPGPU article is about AMD.
      Which currently is putting in efforts and releasing specs to help design open drivers [radeonhd.org].

      Their current stream computing stack (Brook+) is a modified and adapted variant of the opensource BrookGPU - and is itself open.
      And they made promises that in the long term, the opensource drivers will also support GPGPU.

      The other stack toward which they are showing interest is OpenCL. Which is an open standard, to be used by several partners. (like currently OpenGL and Ope

  • ...welcome our new GPGPU overlords.

    All I'm wondering is why it took so long? They are in the business of selling hardware, and if we can find a new use for it, then we're more likely to purchase AMD/ATI's hardware.

    • All I'm wondering is why it took so long? They are in the business of selling hardware, and if we can find a new use for it, then we're more likely to purchase AMD/ATI's hardware.

      A new driver that would turn a consumer-grade high-end video card into an easy-to-use midrange supercomputer? More powerful than the stuff the US nuclear weapon programs were using when they designed the last batch of bombs?

      Maybe they were waiting for a congress and president that they don't think will declare their graphics cards

      • They're produced outside the US and the technology to produce processors isn't exactly some US trade secret the rest of the world doesn't have.

        But you keep up that baseless paranoia.

      • Re: (Score:3, Funny)

        by zippthorne ( 748122 )

        Precisely. The Democrats would never tinker with computing hardware or software (like trying to force everyone to use the same, weak, encryption algorithm AND turn over their keys to the government) like the Republicans would.

    • because Nvidia doesn't have the same API set, any GPGPU processing has to be "lowest common denominator" for consumer-marketed software.

      In other words, unless you have the luxury of specifying a particular make of graphics card, you won't write code to take advantage of these new APIs because you will immediately limit your target audience (and no doubt several people will criticise it for not running on their hardware and give it a bad reputation).

      Now, if you could make it so it ran in hardware on your

    • That's normal. Advances in consumer hardware do take a long time to develop. It's risky and needs a very elaborate business plan; research is slow, since everything must be checked several times before it is tried; and most of the time it needs modifications to manufacturing plants, which takes a lot of time.

      People say that normal development time for VLSI is around 7 years. I can't say whether that holds today, since I'm not in this market, but in the past I've seen several examples of that.

  • Actually what I had in mind when I mentioned it on the previous story is that AMD could make the SIMD part of the CPU. That way you could use whatever video card you wanted and have the advantages SIMD brought.

  • by Amphetam1ne ( 1042020 ) on Thursday November 13, 2008 @10:02PM (#25756247)

    Last time I looked at the Catalyst/Avivo hardware transcoding software it was somewhat less usable than I hoped. The thing that killed it for me was the lack of batching or command-line options. It actually turned out to be less time consuming for me to use a software transcoder with batching and leave it running overnight and while I was at work than to go back over to the PC after every finished run to set up the next file for transcoding on the vid card. The quality of the video that was transcoded in hardware was a bit on the patchy side as well.

    Something that I would be interested in is integrating support into burning software to speed up the transcoding side of DVD video burning. Unfortunately it doesn't look like it's happening any time soon. I think the problem is that by the time the technology has matured enough to make it viable, the increase in CPU speed will have made it redundant.

    • the increase in CPU speed will have made it redundant.

      I think the deal is, CPU speed is what has matured. These days, we don't get fast by increasing clock speed, we get fast by going massively parallel. Hence the article and the hardware. The GPU is reaching the point where it is a fancy specialized CPU. Kind of like the FPUs back in the day before 486 DXs were king.

      No matter what happens though, the "CPU" as we define it currently will never be able to outperform what we now define as the "GPU" no ma

  • Open standard API (Score:4, Interesting)

    by Chandon Seldon ( 43083 ) on Thursday November 13, 2008 @10:02PM (#25756255) Homepage

    So... is there an open standard API for this stuff yet that works on hardware from multiple manufacturers?

    If not, developing for this feels like writing assembly code for Itanium or the IBM Cell processor. Sure, it'll give you pretty good performance now, but the chances of the code still being useful in 5 years is basically zero.

    • Re:Open standard API (Score:4, Informative)

      by Wesley Felter ( 138342 ) <wesley@felter.org> on Thursday November 13, 2008 @10:06PM (#25756285) Homepage

      OpenCL will be the standard; it should support real processors, ATI, NVidia, and maybe Cell if someone bothers to write a backend.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      No standard API yet, because the NVIDIA chips only work on integers and the Stream processors actually can do double precision. From a scientific computing standpoint, portability of my codes is almost as simple as a patching process to get the keywords correct (nevermind the memory handling and feedback, that is still pretty different) and that is only because of the floating point. It's a pain in the you know what trying to keep numbers straight when you have to multiply everything by 10000.
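      For anyone wondering what "multiply everything by 10000" means in practice, it is ordinary fixed-point bookkeeping, roughly like this (a hedged sketch, not the parent poster's actual code):

        // Store 3.1416 as the integer 31416 (scale factor 10,000). Every multiply then
        // needs a rescale afterwards, which is where the bookkeeping pain comes from.
        typedef long long fix_t;
        static const fix_t SCALE = 10000;

        static fix_t fix_add(fix_t a, fix_t b) { return a + b; }
        static fix_t fix_mul(fix_t a, fix_t b) { return (a * b) / SCALE; }

        // Worked example: 2.5 * 4.0 + 1.25
        //   fix_mul(25000, 40000) = 100000, fix_add(100000, 12500) = 112500, i.e. 11.25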

    • OpenCL is supposed to be it; it's officially endorsed by Khronos (of OpenGL fame) and Apple. Release date is still unknown; drivers are even less known. It's promising a general purpose stream processing API that can interoperate with OpenGL.

      I've been scouring today's press releases for OpenCL, and thus far I've been extremely disappointed to hear numerous promises about Brook+ (the proprietary stream API AMD originally backed, which I don't give a crap about) and nothing about OpenCL. AMD better fucking not

  • Low cost, low power CPUs. Already, 99% of people don't use 99% of the power of their CPUs 99% of the time. Why then, is AMD banking on people wanting massive power on specialized hardware that will require fancy compilers?

    I consider myself in the ever-dwindling group of "PC gamers," and even I am looking forward to the death of GPUs in my computer. I'm tired of building expensive and power-sucking desktops that go obsolete in 18 months. Instead of building a 1000 dollar desktop next year, I'm getting a lapt

    • by waferhead ( 557795 ) <waferhead&yahoo,com> on Thursday November 13, 2008 @10:41PM (#25756603)

      You do realize this article has ~absolutely nothing to do with gaming, or even normal users, right?

      The systems discussed using CUDA or GPGPU will probably spend ~100% of their lives running flat out, doing simulations or such.

      Visualize a Beowulf Cluster of these. Really.

      • by rm999 ( 775449 )

        My point had more to do with the future of AMD than with video games. I was talking about what 99% of users need from their computers, not what 0.000001% of people want (super computer architects). AMD acquired ATI because they thought that was the future of mainstream computing, not because they wanted to serve the niche supercomputer community.

        I'm asserting that AMD made a huge mistake buying ATI. I am the audience they meant to target, and they miserably failed. Now, the best they can do is serve a small

        • by ypctx ( 1324269 )
          Real speech recognition and communication will require a lot of power, as will real 3D games and other relaxation/leisure environments. Current games are just Wolfenstein with more details. The human brain is on a never-ending quest to be entertained - you show it something new, and it won't want the old one anymore. We are underutilizing our hardware now, but that's because it is not fast enough to run next-gen applications, or because we haven't yet discovered how to make such applications. But it will co
          • by rm999 ( 775449 )

            My research/education is in "artificial intelligence", and I completely agree that more computing power is what we need. I don't see the trend going that way, however. More and more people are buying weak laptops (netbooks), and fewer people are buying desktops. It's a very real trend, and it's going to make putting AI into our computers impossible in the near-term future.

        • I personally understand AMD's purchase of ATI; it mostly suffered from really bad timing, more or less coinciding with the release of Barcelona and a severe money crunch.

          Long term, the integration of AMD & ATI should prove to generate very interesting products, on both ends of the power/performance spectrum.

          If they survive.

    • by coryking ( 104614 ) * on Thursday November 13, 2008 @10:48PM (#25756649) Homepage Journal

      Already, 99% of people don't use 99% of the power of their CPUs 99% of the time

      So by your logic, those people would be happy with a computer that was 1% as fast as what it is now?

      Make no mistake, once you actually hit that 1% of the time when you need 100% of your CPU, the more the better. I can think of two horsepower-intensive things a normal, everyday Joe now expects his computer to do:

      1) Retouching photos
      2) Retouching and transcoding video (from camera/video camera -> DVD)

      Don't underestimate transcoding video either. More and more people will be using digital video cameras and expecting to be able to output to DVD or Blu-ray.

      • by rm999 ( 775449 )

        "So by your logic, those people would be happy with a computer that was 1% as fast as what it is now?"

        Well, no, and I think you answered your own question. But, to make it clear, that's why I said 99% of the time and not 100% of the time. Regardless, my point wasn't that we need less power, it's that the power we have now is adequate for most people. We don't need to jump through hoops to eke out more raw computing power.

    • Low cost, low power CPUs. Already, 99% of people don't use 99% of the power of their CPUs 99% of the time.

      I can see you haven't kept upgrading Microsoft operating systems. Everything that used to run well on my XP machine crawls after a fateful Vista upgrade. Why should the machine take 1GB of RAM just to boot up and start operating, when all I do is check mail?

      --
      talking about Windows in Slashdot; whining about it. I'm not new here.

    • Power efficiency is a good reason. If you want to play a video, your average Atom or ARM CPU might possibly have enough headroom to play it, but it will consume close to its maximum draw to do so. A GPU, on the other hand, is specialized hardware that can do things like video playback with ease. Even rendering a media player visualization will heavily tax a CPU, but a GPU may hardly notice the load. It's easy to reduce these examples into something more all-purpose: for nearly any task that is hi

    • by tacocat ( 527354 )

      I think you are half right.

      There are two developing markets for computers: cheap, low-power machines for home users, and high-performance machines for hobbyists, research, and business. This second one used to belong to a formal server environment. This is being replaced by heavy workstations.

      Similarly, many server environments are being replaced by pizza boxes and blade servers, making low power very important here.

      Many are starting to move into an architecture of modest home/desktop performance machines with some really impressive

    • People don't "need" it only because they don't have it. If you are a gamer, think of this chip as a very flexible physics processing unit; if you were a home investor, you could think of it as an at-home market simulator. Of course, if you are doing any kind of CAD, the uses are obvious, as they are if you are doing any kind of image (or movie) manipulation. Of course, it could be used simply as a GPU (but more flexible, allowing some interesting optimizations), but the other uses are just to

  • If you let them build it, they will come.
  • by Louis Savain ( 65843 ) on Thursday November 13, 2008 @10:38PM (#25756579) Homepage

    And so are Intel and Nvidia. Vector processing is indeed the way to go, but GPUs use a specific and highly restrictive form of vector processing called SIMD (single instruction, multiple data). SIMD is great only for data-parallel applications like graphics but chokes to a crawl on general purpose parallel programs (a branch-divergence sketch follows this comment). The industry seems to have decided that the best approach to parallel computing is to mix two incompatible parallel programming models (vector SIMD and CPU multithreading) in one heterogeneous processor, the GPGPU. This is a match made in hell and they know it. Programming those suckers is like pulling teeth with a crowbar.

    Neither multithreading nor SIMD vector processing is the solution to the parallel programming crisis. What is needed is a multicore processor in which all the cores perform pure MIMD vector processing. Given the right dev tools, this sort of homogeneous processing environment would do wonders for productivity. This is something that Tim Sweeney [wikipedia.org] has talked about recently (see Twilight of the GPU [slashdot.org]). Fortunately, there is a way to design and program parallel computers that does not involve the use of threads or SIMD. Read How to Solve the Parallel Programming Crisis [blogspot.com] for more.

    In conclusion, I will say that the writing is on the wall. Both the CPU and the GPU are on their deathbeds, but AMD and Intel will be the last to get the news. The good thing is that there are other players in the multicore business who will get the message.
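    The branch-divergence sketch referred to above: within one SIMD group, threads that take different sides of a branch are serialized, which is the usual reason branchy, "general purpose" code runs poorly on these parts. (Hypothetical CUDA kernel, not anyone's benchmark; the 32-wide warp figure is NVIDIA's, and AMD's wavefronts are wider.)

      __global__ void branchy(const int *flag, float *out, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i >= n) return;

          // Threads in the same SIMD group (a 32-wide warp on current NVIDIA parts)
          // that disagree here are serialized: the hardware runs both paths and masks
          // lanes off, so the group pays for the expensive path even if only one lane takes it.
          if (flag[i]) {
              float x = out[i];
              for (int k = 0; k < 1000; ++k) x = x * 1.0001f + 0.5f;  // "expensive" path
              out[i] = x;
          } else {
              out[i] += 1.0f;                                          // cheap path
          }
      }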

    • Comment removed based on user account deletion
      • Nope. Sorry. The Cell Processor is a perfect example of how not to design a multicore processor. Just my opinion, of course.

        • Re: (Score:3, Interesting)

          by hackerjoe ( 159094 )

          Wow, that's really fascinating, because the Sweeney article you mention has him going on and on about generally programmable vector processors which make heavy use of that SIMD thing you so hate.

          Oh wait, I didn't mean fascinating, I meant boring and you're an idiot. Engineers don't implement SIMD instructions because vector processors are easy to program, they implement them because they are cheap enough that they're worth having even if you hardly use them, never mind problem domains like graphics where yo

    • Re: (Score:3, Insightful)

      Wait a minute. Typically the SIMD of GPU commands is for handling vector triples (coordinates or colors) and matrices, which completely translates into supercomputer tasks that are being talked about in TFA: "tasks such as scientific simulations or geographic modelling".

      GPUs nowadays have hundreds of parallelized vector/matrix processors and the drivers & hardware take care of scheduling them all through those pipelines for you. Within the targeted fields, I can't see a downside of this sort of develo

    • These products are targeted at the scientific computing market. Almost all of that software is written to use BLAS or something similar. SIMD processors are plenty well suited to the matrix and vector operations needed for that. Sure, MIMD architectures may be theoretically nicer, but the way to establish the stream computing market is to create a product that can accelerate current software, which means providing an interface similar to what current software uses (ie. BLAS).
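      That is roughly how NVIDIA packages it already: CUBLAS exposes familiar BLAS entry points on top of the GPU. A hedged sketch using the handle-based cuBLAS interface found in current toolkits (matrix size and argument names are arbitrary, and the matrices are assumed to already be on the device):

        #include <cublas_v2.h>
        #include <cuda_runtime.h>

        // C = A * B for column-major N x N single-precision matrices resident on the device.
        void gpu_sgemm(const float *dA, const float *dB, float *dC, int N) {
            cublasHandle_t handle;
            cublasCreate(&handle);

            const float alpha = 1.0f, beta = 0.0f;
            cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                        N, N, N,
                        &alpha, dA, N,
                                dB, N,
                        &beta,  dC, N);

            cublasDestroy(handle);
        }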

    • by dbIII ( 701233 )

      Vector processing is indeed the way

      That's what it looks like, until after a few weeks of used car salesman antics you get a quote on a machine with a cell processor. AMD and Intel etc have a solid future even in the numerical processing field for as long as anything that does vector processing requires black ops military budgets. Why buy one decent machine when you can get twelve that give you 3/4 of the speed each?

      I don't think we'll see much change until Intel or AMD step into that arena.

    • You seem to be a bit lost. This GPGPU deal is not a means to get the holy grail of multicore computing, nor was it ever seen as such. It is just a means to enable the users, whether they are regular folks or scientists in computing-intensive fields like structural engineering and biomedical research, to harness the computing power of high-powered processors which happen to be very cheap and available on any shelf in any computer store. It's a means to take advantage of the hardware that sits idle on your

    • by LordMyren ( 15499 ) on Friday November 14, 2008 @05:05AM (#25758391) Homepage

      There are a lot of tall claims here, but the one that sticks out as most needing some kind of justification is that "The industry seems to have decided that the best approach to parallel computing is to mix two incompatible parallel programming models (vector SIMD and CPU multithreading)". GPUs mix these models fine, and I haven't seen anyone bitching about the thread schedulers on them or bitching about not being able to use every transistor on a single stream processor at the same time. How you can claim these models are incompatible, when in fact it's the only working model we have and it works fine for those using it, is beyond me. You criticize the SIMD model, but the GPU is not SIMD: it is a host of many different SIMD processors, and that in turn makes it MIMD.

      Moving on to what you suggest, I fail to see how superscalar out-of-order execution is not MIMD, and we've been doing that shit for years. The decoder pulls in a crap ton of things to do, assigns them to work units, and they get crunched in parallel. Multiple inputs, multiple data sources, a smart CPU to try to crunch it all. Intel tried to take it a step further with EPIC (explicitly parallel instruction computing) and look how that fucking turned out: how many people here know what Itanium even fucking is?

      The "how to solve the parallel programming crisis" link is pretty hilarious. Yes, lists of interlocking queues are badass. Unfortunately the naive implementation discussed at the link provide no allowances for cache locality. In all probability the first implementation will involve data corruption and crap performance. Ultimately the post devolves into handwaving bullshit that "the solution I am proposing will require a reinvention of the computer and of software construction methodology". This is laughable. Just because stream processing isnt insanely easy doesnt mean we have to reinvent it just so we arent burdened with dealing with multiple tasks. Even if you do reinvent it, as say XMT has done, you still have to cope with many of the same issues (xmt's utility in my mind is a bridge between vastly-superscalar and less-demanding EPIC).

      Good post, I just strongly disagree. The GPU is close to the KISS philosophy: the hardware is dumb as a brick and extremely wide, and it's up to the programmers to take advantage of it. I find this to be ideal. I've seen lots of muckraking shit saying "this is hard and we'll inevitably build something better/easier", but a lot of people thought the internet was too simple & stupid to work too.

      • See title.

        A year ago, I started taking the anti-epileptic medication, Topamax, and I started to suffer from psychosis shortly there after. I am better now though, now that I no longer take Topamax. Psychosis is a rare, but potential, side effect of Topamax, and there may be genetic factors that influence the occurrence of this particular serious side effect. A casual term that a layperson might use to describe me is that I was psychotic, but saying that I was suffering from psychosis is probably a better

    • What is needed is a multicore processor in which all the cores perform pure MIMD vector processing.

      Ummmm it seems to me that if it's MIMD then it's not vector processing.

  • by rsmith-mac ( 639075 ) on Thursday November 13, 2008 @10:54PM (#25756697)

    Consumer products using GPGPU tech will happen (and indeed are happening), but it's sure as heck not going to be on ATI's GPUs. The performance is there, but the development tools are a joke. The main runtime (Brook+) is the technological equivalent of giving yourself a root canal every time you program in it, and the rest of the Stream SDK supporting toolset is more or less entirely AWOL. It's rather unfortunate, but compared to NVIDIA's CUDA the whole system is a joke; CUDA is an excellent toolkit that someone clearly put a lot of thought into in order to meet the needs of developers, while the Stream SDK/Brook+ still feels like a research project that nobody optimized for commercial use.

    The hardware is there, but no one in their right mind is going to program with AMD's software. Everyone is waiting for OpenCL. Even if it doesn't really take off, it still can't possibly be worse than the Stream SDK.

    • by LordMyren ( 15499 ) on Friday November 14, 2008 @05:09AM (#25758405) Homepage

      Yes yes yes and yes.

      However, AMD already said it was backing OpenCL. I'm pissed as fuck I didn't hear anything about OpenCL this press cycle, but they're the only major graphics company to have ever stated they were getting behind OpenCL: I'm holding onto hope.

      You're right: no one uses Brook. Trying to market it as in any way part of the future is a joke and a mistake: a bad one hopefully brought on by a $2.50 share price and pathetic marketing sods. On the other hand, I think people using CUDA are daft too; it's pre-programmed obsolescence, marrying yourself to proprietary tech that one company, no matter how hard they try, will never prop up all by themselves.

      OpenCL isn't due out until Snow Leopard, which is rumored to be next spring. There's still a helluva lot of time.

    • given the recent track record of the khronos group (anyone remember the opengl 3.0 specs?) i don't think we'll see opencl anytime soon. i hope they don't screw that one up if it ever sees the light of day. cuda has gotten a lot of scientific attention in the past year. it's a nice paper machine, just take your favorite algorithm and port it even if it's not the best fit for the architecture. sure, missing caching for global memory will bite you in the ass but it's still better than ati's close to metal
  • By leveraging thousands of processing cores on a graphics card for general computing calculations, tasks such as scientific simulations or geographic modelling, which are traditionally the realm of supercomputers, can be performed on smaller, more affordable systems.

    So if I understand this right, they're adding transistors to the graphics card that would normally be added to the CPU. How exactly does that help? It's not really a graphics card anymore if you're doing general processing on it. Why not just p

    • by Nikker ( 749551 ) *
      This is just the first iteration.

      First they put together a bit of a hack and see what happens. I think it would be more cost effective to deliver both solutions on the die rather than on an expansion slot; it will also make the unit more potent clock for clock, since it is sitting on all the bandwidth the motherboard and northbridge can supply. Once we start getting good communication going with the device we can start playing with it.

      I have to admit my self though with everyone complaining about open
    • Re: (Score:3, Informative)

      by marcosdumay ( 620877 )

      "It's not really a graphics cards anymore if you're doing general processing on it."

      It is still capable of high-speed rendering, so it can still be used as a GPU.

      "Why not just put that horsepower on the CPU?"

      Because the processors on a GPU are simpler, it is way cheaper to add power there. Also, there is nothing stopping AMD from adding transistors to both, since they are on different chips.

      "Have AMD forgotten that they're still a CPU business?"

      They are also in the GPU business, and this chip is in both.

  • Arcsoft (Score:3, Funny)

    by Toll_Free ( 1295136 ) on Thursday November 13, 2008 @11:23PM (#25756879)

    Is this the same arcsoft that gave us .arc back in the 80s?

    If so, I'd like to get rarsoft involved. Any idea how long it takes my P233 machine to unrar a .264 video? I mean, like HOURS.

    Imagine if I could harness my video card, an S3Virge.... holy bat, crapman, I'd probably cut my derar times by what, a third?

    --Toll_Free

  • My gut feeling is that they're talking shareholderese, and it has no bearing whatsoever on what technical merit their platform has... But how they can sell it to investors matters to them.

  • by Brit_in_the_USA ( 936704 ) on Friday November 14, 2008 @11:35AM (#25760663)
    The traction for this will come when someone releases open-source audio, and later video, encoder libraries using GPU acceleration based upon this (or another) abstraction layer.

    MP3, OGG, FLAC - get these out the door (especially the first one) and a host of popular GUI and CLI encoders would jump on the bandwagon. If there are huge speed gains and there are no incompatibility issues, because the abstraction layer and drivers are *stable* and retain *backwards compatibility* with new releases, then more people will see the light and there will be pressure to do the same with video encoders. Before you know it the abstraction layer would become the de facto standard and all GPU makers would follow suit - at that point we (the consumer) would win by having something that works on more than one OS, rather than on one particular card from one particular GPU maker, and we could get on with some cool innovations.
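    The heavy per-block transform work in such encoders is the part that maps naturally onto batched GPU libraries. A hedged sketch using cuFFT as a stand-in (an MDCT is not a plain FFT, and the function and buffer names here are made up; this only shows the batching pattern):

      #include <cufft.h>
      #include <cuda_runtime.h>

      // Transform `batch` windows of `n` real samples in one GPU call.
      void batched_fft(const float *samples, cufftComplex *spectra, int n, int batch) {
          cufftReal    *d_in;
          cufftComplex *d_out;
          cudaMalloc((void **)&d_in,  sizeof(cufftReal)    * n * batch);
          cudaMalloc((void **)&d_out, sizeof(cufftComplex) * (n / 2 + 1) * batch);
          cudaMemcpy(d_in, samples, sizeof(cufftReal) * n * batch, cudaMemcpyHostToDevice);

          cufftHandle plan;
          cufftPlan1d(&plan, n, CUFFT_R2C, batch);   // one plan covers every window
          cufftExecR2C(plan, d_in, d_out);

          cudaMemcpy(spectra, d_out, sizeof(cufftComplex) * (n / 2 + 1) * batch,
                     cudaMemcpyDeviceToHost);

          cufftDestroy(plan);
          cudaFree(d_in);
          cudaFree(d_out);
      }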
  • a process known as General Purpose computing on Graphics Processing Units (GPGPU).

    Eugene: Jeep jeep!

  • Avalda helps developers work on parallel programming with a compiler that transforms a subset of F# to a hardware description language (HDL) suitable for FPGAs. So you can code regular F#, with parallel semantics, and run it on an FPGA. This approach uses the truly fine-grained parallelism of FPGAs (eg, Xilinx's million gate Spartan 3 FPGA). It works with the free Visual Studio 2008 shell (integrated mode). More info can be found at the following urls: http://www.avalda.com/index.php/blogs/12-avalda-fpga- [avalda.com]
