NVIDIA To Push Into Supercomputing

RedEaredSlider writes "NVIDIA outlined a plan to become 'the computing company,' moving well beyond its traditional focus on graphics and into high-profile areas such as supercomputing. NVIDIA is making heavy investments in several fields. Its Tegra product will be featured in several mobile devices, including a number of tablets that have either hit the market already or are planned for release this year. Its GeForce lineup is gaming-focused, while Quadro targets computer-aided design workstations. The Tesla product line is at the center of NVIDIA's supercomputing push."
Comments Filter:
  • by erroneus ( 253617 ) on Wednesday March 09, 2011 @01:21PM (#35432114) Homepage

    "Supercomputing" almost always means "massive Linux deployment and development." I will spare critics the wikipedia link on the subject, but the numbers reported there almost says "Supercomputing is the exclusive domain of Linux now."

    Why am I offended that nVidia would use Linux to do their Supercomputing thing? Because their GPU side copulates Linux users in the posterior orifice. They take, take, take from the community, and when the community wants something from them, they say "sorry, there's no money in it." We need a revision to the GPL -- that'd shut their Supercomputing project down really fast if there were some sort of verbiage saying "if you shun Linux users while making extensive use of Linux yourself, you can't use it." I know that would never happen and would probably be a very bad idea for reasons I don't want to consider right now. I just hate that nVidia and their damned Optimus technology serve no purpose but to lock Linux users out of using their own hardware.

  • eh.. (Score:2, Interesting)

    by Anonymous Coward on Wednesday March 09, 2011 @01:42PM (#35432430)

    I've been working with their GPGPU push for a couple of years now. What I notice is that they are very good at data parallelism with highly regular data access patterns and very few branches. While they are technically general-purpose, they don't perform well on a large portion of the high-performance tasks that are critical even in scientific computing and are generally compute-bound. This creates some really annoying bottlenecks that simply cannot be resolved. They can give tremendous speedup to a very limited subset of HPC tasks, but others are left dead in the water, and since these things are usually all coupled into a single code, your only choice is to move back and forth between GPU and CPU frequently, which introduces a data-throughput bottleneck (data transfer from RAM to the GPU is very slow; see the sketch after this comment).

    On real tasks it is not uncommon to get only, say, a 2x speedup, while the programmer time involved increased exponentially. For a lot of my work I'd rather just do traditional MPI with multiple CPUs.
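
    A minimal CUDA sketch, not taken from the comment, of the transfer overhead it describes: a branch-free SAXPY kernel with a perfectly regular access pattern (the kind of workload GPUs handle well), preceded by the host-to-device copies that often dominate the runtime of such a simple kernel. The SAXPY example, array size, and timing setup are illustrative assumptions.

    // saxpy_transfer.cu -- sketch contrasting host->device copy time with a
    // regular, branch-free data-parallel kernel (compile with nvcc).
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Regular access pattern, no divergent branches: the kind of kernel GPUs excel at.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 24;                        // ~16M floats, ~64 MB per array (illustrative)
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));

        cudaEvent_t t0, t1, t2;
        cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

        cudaEventRecord(t0);
        // Host -> device transfer: the bottleneck the comment describes.
        cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaEventRecord(t1);

        // The compute itself is tiny by comparison for a simple kernel like this.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaEventRecord(t2);
        cudaEventSynchronize(t2);

        float copy_ms = 0, kernel_ms = 0;
        cudaEventElapsedTime(&copy_ms, t0, t1);
        cudaEventElapsedTime(&kernel_ms, t1, t2);
        printf("copy: %.2f ms, kernel: %.2f ms\n", copy_ms, kernel_ms);

        cudaFree(dx); cudaFree(dy);
        return 0;
    }

    Run on typical hardware, the copy time tends to dwarf the kernel time for work this light, which is why codes that bounce data between CPU and GPU at every step see far less than the headline speedup.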
