
AMD Preps For Server Graphics Push

Nerval's Lobster writes "AMD has named John Gustafson senior fellow and chief product architect of AMD's Graphics Business Unit, the former ATI graphics business unit. Gustafson, best known for Gustafson's Law, a key axiom governing parallel processing, will apply that knowledge to AMD's traditional graphics parts and to GPGPUs, the co-processors that have begun appearing in high-performance computing (HPC) systems to add computational oomph via parallel processing. At the Hot Chips conference, AMD's chief technology officer, Mark Papermaster, also provided a more comprehensive look at AMD's future in the data center, claiming that APUs are the keystone of the 'surround computing era,' in which a wealth of data (from sensors, gestures, voice, augmented reality, metadata, and HD video and graphics) will need to be contextualized, analyzed, and either encrypted or assigned privacy policies. That, of course, means the cloud must shoulder the computational burden."
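(For reference, the axiom in question is Gustafson's Law: on N processors the scaled speedup is S(N) = N - alpha*(N - 1), where alpha is the serial fraction of the workload. It is the optimistic counterpoint to Amdahl's Law, which holds the problem size fixed instead of scaling it with N.)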
This discussion has been archived. No new comments can be posted.

  • by sl3xd ( 111641 ) on Wednesday August 29, 2012 @06:46PM (#41173145) Journal

    There's a big problem, however: http://developer.amd.com/sdks/AMDAPPSDK/assets/App_Note-Running_AMD_APP_Apps_Remotely.pdf [amd.com]

    To run apps that use AMD's GPUs remotely (i.e. not from a local X11 session - and I do mean local), you have to open a security hole so big you can fit Rush Limbaugh's ego through it. Per AMD's app note (concrete commands below):

    * Log into the system as root.
    * Add "xhost +" to your X11 startup config, so every X session allows anybody to access it... with root permissions, since the X server runs as root.
    * chmod ugo+rw /dev/ati/card*, making the GPU device nodes world-readable and -writable.
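    In shell terms, the recipe amounts to something like this (the X11 startup-file path below is one common location and varies by distro, so treat it as a sketch):

        # as root: disable X access control for every session (the X server runs as root)
        echo "xhost +" >> /etc/X11/xinit/xinitrc
        # make the GPU device nodes world-readable/writable
        chmod ugo+rw /dev/ati/card*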

    I asked a group of X.org devs how stupid this setup is... the short answer was "how stupid is giving root access to everybody?"

    So, I asked AMD when they were planning on fixing the problem.

    Short answer: Not for the foreseeable future.

    I seem to recall a similar issue where CERT told users not to use AMD's Windows drivers, because they force Windows to disable many of its security features.

    I'm sensing a trend...

    Do you want this kind of irresponsibility in the datacenter? EVER?

  • by sl3xd ( 111641 ) on Wednesday August 29, 2012 @07:00PM (#41173267) Journal

    Didn't you know? They're open source now. Fix the problem yourself!

    Sarcasm aside, I feel AMD's open-sourcing the drivers was less generosity and more throwing up their hands in surrender: they can't manage the drivers themselves, so they're asking for outside help.

    AMD also provides a library that makes it easy to write a userspace program that disables all fans and thermal throttling on the GPU - melt the thing, maybe even start a fire... useful feature, that.

    The beauty is that any user who can run a GL program (or even a GPU compute job) can fry the GPU.

    Good stuff, those AMD drivers...

  • by antdude ( 79039 ) on Wednesday August 29, 2012 @08:11PM (#41173839) Homepage Journal

    Which Windows security features? I wasn't aware of this. :(

  • by sl3xd ( 111641 ) on Wednesday August 29, 2012 @10:37PM (#41174775) Journal

    I think AMD is jumping into the arena because they feel they have to:

    - NVIDIA is already making quite a splash in big data processing with their many-core GPGPU offerings
    - AMD already offers their FirePro line to compete with NVIDIA's Tesla and Quadro
    - Intel is entering the arena with their MIC/Xeon Phi product line (http://en.wikipedia.org/wiki/Intel_MIC)

    AMD apparently feels they have to go down a similar path. Hopefully they'll do it in a way their competition can't match: NVIDIA doesn't build a full CPU on-die with their GPU, and Intel appears to have chosen not to. An APU that puts the CPU and GPU in a single memory space doesn't have to shovel data across PCIe at all.

    Additionally, NVIDIA's (and presumably Intel's) many-core offerings can easily saturate even the latest PCIe Gen3 bus with the number of cores they have. The total memory per core on the GPU or Phi device isn't that high, so they have to stream boatloads of data over the bus, and it's very easy to become bound by PCIe's I/O bandwidth.
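    Back-of-envelope: PCIe Gen3 x16 tops out around 15.75 GB/s of usable bandwidth, while the GDDR5 on a current top-end GPGPU card delivers 200+ GB/s - more than an order of magnitude apart. A job that has to stream its working set over the bus is capped at that ~16 GB/s no matter how many cores are waiting on the far side.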

    For some workloads you can get great performance gains; but it's important to remember that while NVIDIA (for one) likes to trumpet a 20-30x performance increase, they're cherry-picking workloads that are well-suited to their product. In my experience it's more like 3-5x in the general case, because of the fundamental memory-bandwidth limitation between the PCIe card and every other source of data - be it the "main" system memory, RDMA via InfiniBand, etc.
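    To put numbers on it: if 70% of end-to-end runtime is data movement that doesn't get any faster and the kernel itself speeds up 20x, the overall win is 1 / (0.7 + 0.3/20), or about 1.4x. The 20-30x headline only appears when nearly everything stays resident on the card.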

    I'm confident AMD will design decent hardware - they might even turn a corner and make great hardware again. Without the software to drive it, however, it's a lost cause - and I have zero confidence in AMD's ability to develop that software.
