AMD Preps For Server Graphics Push

Nerval's Lobster writes "AMD named John Gustafson as senior fellow and chief product architect of AMD's Graphics Business Unit, the former ATI graphics business. Gustafson, known for Gustafson's law, a key principle governing the scalability of parallel processing, will apply that knowledge to AMD's more traditional graphics units and to GPGPUs, the co-processors that have begun appearing in high-performance computing (HPC) systems to add computational oomph via parallel processing. At the Hot Chips conference, AMD's chief technology officer, Mark Papermaster, also provided a more comprehensive look at AMD's future in the data center, claiming that APUs are the keystone of the 'surround computing era,' in which a wealth of data — through sensors, gestures, voice, augmented reality, metadata, and HD video and graphics — will need to be contextualized, analyzed, and either encrypted or assigned privacy policies. That, of course, means the cloud must shoulder the computational burden."
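The "key principle" in question is Gustafson's law, which argues that when problem size grows with the machine, speedup keeps scaling with processor count. A minimal sketch (the 10% serial fraction below is an illustrative assumption, not a figure from the article):

```python
def gustafson_speedup(n_procs: int, serial_fraction: float) -> float:
    """Scaled speedup per Gustafson's law: S(N) = s + (1 - s) * N,
    where s is the fraction of the workload that stays serial."""
    return serial_fraction + (1.0 - serial_fraction) * n_procs

# With 10% serial work, 100 processors still give roughly 90x scaled
# speedup -- the case for throwing GPU-scale parallelism at big data.
print(gustafson_speedup(100, 0.1))
```

Contrast with Amdahl's law, which fixes the problem size and caps the same workload at a 10x speedup no matter how many processors you add.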
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Nvidia could really use some competition in the server space. Render farms and most GPGPU work (largely CUDA-based) are pretty much completely dependent on Nvidia.

  • You mean, all that stuff from surveillance cameras, etc? Yeah, we're gonna need LOTS of processing power for the Total Surveillance State!
  • They want a co-processor, so build one, or add extensive FPGA capabilities. Don't just put in a GPU and disconnect the monitor, make something more specifically applicable to the task at hand.
    • by gman003 ( 1693318 ) on Wednesday August 29, 2012 @05:26PM (#41172925)

      Except a modern GPU is basically a coprocessor that, 99% of the time, is used to run a library that primarily does graphics. Rendering, shading, transformation: those are now all done "in software". The only things still done "in hardware" are texture lookups and video output (turning an int[4][1080][1920] into a DVI or HDMI or VGA or whatever signal).

      They're also a pretty high-volume market, so you get them much cheaper than you would a custom-built coprocessor or even FPGA, and they're *probably* better-designed than the one you would make, as they have entire teams of professionals working on them.

      Also, both nVidia and AMD already make "compute-only" cards - nVidia under the brand "Tesla", AMD under the brand "FireStream".

      • FireStream is no more. Instead we now have FirePro, covering both workstation graphics (as FirePro W) and server graphics/compute (as FirePro S).
    • by Creepy ( 93888 )

      You'd be surprised at how useful servers with GPUs are these days. When you're talking about clients like iPads and Android devices, the rendering is often done server-side and then sent to the client. A (CAD-related) product I work on renders thumbnails using a server GPU. There is also a game service that does all rendering server-side and sends it to a display (often a TV).

  • That oughta work (Score:4, Insightful)

    by Cute Fuzzy Bunny ( 2234232 ) on Wednesday August 29, 2012 @04:57PM (#41172587)

    Geez, didn't we have this stuff years ago, only it was called mainframes and minicomputers?

    Someone refresh my memory as to why we fled those for PCs? Oh yeah: it cost too much to centralize, the 'one size fits all' solutions actually fit no one, and it took too long to wait for someone to fix things or come up with new tools.

    Same problem with "the cloud". Good luck with it.

    • by Anonymous Coward on Wednesday August 29, 2012 @05:13PM (#41172779)

      We have things we didn't have last time:
                * Massive central storage
                * Enormous bandwidth
                * Excellent frameworks for distributed processing (no, RPC does not count)

      Long ago, your cloud had to be custom-built for the app. EC2 doesn't have that restriction. I know the people who developed S3. At design time they had no idea they'd be hosting their killer app (Netflix). It's that flexible.

      Plus, we now have PCs. No one is saying we have to go back to thin clients; you can keep your PC and use the cloud where it excels. Gmail and Netflix streaming are both things I've built home-server equivalents of, and they don't hold a candle to the cloud versions.

        Yeah, the virtualization instruction-set extensions in Intel/AMD processors, introduced only a couple of years ago, have nothing to do with any of this.

  • I have had no fun with their software on Linux or Windows. Then again, Nvidia is not much better.
    • Re: (Score:2, Interesting)

      by sl3xd ( 111641 )

      Didn't you know? They're open source now. Fix the problem yourself!

      Sarcasm aside, I feel AMD open sourcing the drivers was more because they're throwing up their hands in surrender; they can't manage it themselves, so they're asking for outside help.

      AMD also provides a library that makes it easy to write a userspace program to disable all fans and thermal throttling on the GPU - melt the thing; maybe even start a fire... useful feature, that.

      The beauty is that if a user can run a GL program (or even a GPU c

  • by sl3xd ( 111641 ) on Wednesday August 29, 2012 @05:46PM (#41173145) Journal

    There's a big problem, however:

    To run apps that use AMD's GPUs remotely (i.e. not from a local X11 session - and I mean a local X11 session), you have to open a security hole so big you can fit Rush Limbaugh's ego through it.

    * Log into the system as root.
    * Add "xhost +" to your X11 startup config (so every X session allows anybody to access it... with root permissions).
    * chmod ugo+rw /dev/ati/card*
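    For reference, those steps amount to the following shell commands. This is a sketch, not AMD's documented procedure; the `+si:localuser` variant at the end is a standard X11 server-interpreted address that narrows the hole somewhat, and "hpcuser" is a hypothetical compute account:

```shell
# The workaround described above, spelled out (do NOT do this on a shared machine):
xhost +                       # disables X11 access control for every client
chmod ugo+rw /dev/ati/card*   # makes the GPU device nodes world-writable

# A slightly narrower alternative: grant X access only to one
# local account instead of to everyone (device chmod still needed).
xhost +si:localuser:hpcuser   # "hpcuser" is a hypothetical account
```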

    I asked a group of devs how stupid it was... the short answer was "how stupid is giving root access to everybody?"

    So, I asked AMD when they were planning on fixing the problem.

    Short answer: Not for the foreseeable future.

    I seem to recall a similar issue where CERT told users not to use AMD drivers for Windows, because they force Windows to disable many of its security features.

    I'm sensing a trend...

    Do you want this kind of irresponsibility in the datacenter? EVER?

    • by antdude ( 79039 ) on Wednesday August 29, 2012 @07:11PM (#41173839) Homepage Journal

      Which Windows security features? I wasn't aware of this. :(

    • And what does this have to do with the entirely new, not-yet-in-production coprocessors that don't have any drivers for Windows, much less Linux? They're talking about an entirely new chip that will be geared towards handling large amounts of video in a server environment. NOT your $100 graphics card. I'd imagine what they are going to produce will go into server farms with custom made software.
      • by sl3xd ( 111641 )

        NOT your $100 graphics card. I'd imagine what they are going to produce will go into server farms with custom made software.

        I couldn't care less about desktop graphics; it just isn't interesting to me.

        My original post was about my experiences with their $1-2k FirePro boards that compete with nVIDIA's Tesla & Quadro, to be slotted into several hundred nodes of a supercomputing cluster. If that isn't a server environment, then what is?

        I hate to break it to you, but AMD's attention to software, even for a

  • by Anonymous Coward

    I have waited patiently for Intel's offering 'Knights Corner,' now rebranded as 'Xeon Phi.' We have been tempted with 64+ cores and 4 threads per core. We have been tempted with 'runs existing software.' But I don't see anything available in stores. That they aren't pushing product into stores means the thing will be gawd-awful expensive, production is limited, and they don't want people to spend a grand or two and create amazing software around the hardware. Instead, the word 'Xeon' means in general 'no
