Supercomputing Hardware

NVIDIA's $10K Tesla GPU-Based Personal Supercomputer 236

gupg writes "NVIDIA has announced a new category of supercomputer, the Tesla Personal Supercomputer: a 4 TeraFLOPS desktop for under $10,000. The machine contains four Tesla C1060 computing processors, GPUs with no graphics output that are used only for computation. Each Tesla GPU has 240 cores and delivers about 1 TeraFLOPS of single-precision and about 80 GigaFLOPS of double-precision floating point performance. The CPU + GPU combination is programmed in C with a few added keywords, using the CUDA parallel programming model; the CUDA C compiler and development toolchain are free to download. Many applications have already been ported to CUDA, including Mathematica, LabVIEW, ANSYS Mechanical, and a long list of scientific codes in molecular dynamics, quantum chemistry, and electromagnetics; they're listed on CUDA Zone."
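
For readers who haven't run into CUDA before, here is a minimal sketch of the "C with added keywords" style the summary describes. This is my own illustration rather than NVIDIA sample code; it assumes the standard CUDA toolkit (nvcc), and the kernel and variable names are made up for the example.

/* saxpy.cu -- compile with: nvcc saxpy.cu -o saxpy */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* __global__ marks a function that runs on the GPU, one thread per element */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);                  /* allocate on the card */
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* the <<<blocks, threads>>> launch syntax is the main "added keyword" */
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);                     /* expect 3*1 + 2 = 5 */

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
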
This discussion has been archived. No new comments can be posted.


  • Graphics (Score:5, Funny)

    by Anonymous Coward on Sunday November 23, 2008 @05:34AM (#25863373)

    Wow, that's some serious computing power! I wonder if anyone has thought of using these for graphics or rendering? I imagine they could make some killer games, especially with advanced technology like Direct 3D.

    • Re: (Score:2, Funny)

      by GigaplexNZ ( 1233886 )

      I wonder if anyone has thought of using these for graphics or rendering?

      These are effectively just NVIDIA GT280 chips with the ports removed. Their heritage is gaming.

      I imagine they could make some killer games

      If you can find some way to get the video out to a monitor... but then you effectively just have Quad SLI GT280.

      especially with advanced technology like Direct 3D

      Uh... what? Direct 3D has been commonly used for years, you make it sound like some new and exotic technology. It is also effectively Windows only, whereas this hardware is more likely to use something like Linux.

      • Re:Graphics (Score:5, Funny)

        by Gnavpot ( 708731 ) on Sunday November 23, 2008 @07:05AM (#25863697)

        "I wonder if anyone has thought of using these for graphics or rendering?"

        These are effectively just NVIDIA GT280 chips with the ports removed. Their heritage is gaming.

        We need a "+1 Whoosh" moderation option.

        No, I do not mean "-1 Whoosh". I want to see those embarrassingly stupid postings. But perhaps this moderation option should subtract karma.

        • Re:Graphics (Score:5, Funny)

          by GigaplexNZ ( 1233886 ) on Sunday November 23, 2008 @08:15AM (#25863909)
          I suppose I'm one of those guys now. Hook, line and sinker.
        • Re: (Score:2, Interesting)

          by xonar ( 1069832 )
          So being naive to the ways of the world is bad karma now? I thought Buddhism stressed being free from the material things of the world.
        • by aj50 ( 789101 )

          I'd suggest -1 since that's the most likely preference.

          It doesn't really matter which it is as you can add a modifier for each of the moderation types in your preferences (should you dislike reading funny posts or enjoy a good bit of flamebait.)

      • If you can find some way to get the video out to a monitor

        Yup, time to break those ol' CGA monitors out from the garage... knew they'd come in handy again one day, and with Linux's oh-so-retro CLI mode, I'm set!

      • Re: (Score:3, Informative)

        by evilbessie ( 873633 )

        In much the same way, the current Quadro FX cards are based on the same chips as the gaming GeForce cards. Still, the most expensive gaming card is ~£400, while you'll pay ~£1500 for the top-of-the-line FX5700.

        It's because workstation graphics cards are configured for accuracy above all else, whereas gaming cards are configured for speed. Having a few pixels come out wrong does not affect gaming at all, but getting the numbers wrong in a simulation is going to cause problems.

        Mostly the people who us

  • Heartening... (Score:3, Interesting)

    by blind biker ( 1066130 ) on Sunday November 23, 2008 @05:37AM (#25863381) Journal

    ...to see a company established in a certain market branch out so aggressively and boldly into something... well, completely new, really.

    Does anyone know if Comsol Multiphysics can be ported to CUDA?

  • 4 TFLOPS? (Score:5, Insightful)

    by Anonymous Coward on Sunday November 23, 2008 @05:38AM (#25863385)

    A single Radeon 4870x2 is 2.4 TFLOPS. Some supercomputer, that.

    Seriously, why is this even news? nVidia makes a product, which is OK, but nothing revolutionary. The devaluation of the "supercomputer" term is appalling.

    Also, how much of that 4 TFLOPS can you get on actual applications? How's FFT? Or LINPACK?

    • Re:4 TFLOPS? (Score:5, Informative)

      by GigaplexNZ ( 1233886 ) on Sunday November 23, 2008 @06:30AM (#25863599)

      A single Radeon 4870x2 is 2.4 TFLOPS.

      A single Radeon 4870x2 uses two chips. This Tesla thing uses 4 chips that are comparable to the Radeon ones. It should be obvious that they would be in a similar ballpark.

      Seriously, why is this even news?

      It isn't. Tesla was released a while ago, this is just a slashvertisement.

      • Re: (Score:2, Interesting)

        by X-acious ( 1079841 )

        A single Radeon 4870x2 uses two chips

        2.4 / 2 = 1.2

        Each Tesla GPU has 240 cores and delivers about 1 TeraFLOPS single precision...

        Each Radeon HD 4870 produces 1.2 TFLOPS, about 0.2 more than one Tesla GPU.

        "NVIDIA announced...the Tesla Personal Supercomputer -- a 4 TeraFLOPS desktop...

        Two 4870 X2s equal 4.8 TFLOPS, 0.8 more than four Tesla GPUs.

        I think the parent's point was that even though an HD Radeon 4870 X2 is made up of two GPUs, it's still connected and recognized as a single card. Thus, with "fewer" cards and fewer slots you could achieve more performance. Or you could use the other two vacant slots for yet another two 4870 X2s: four of them in CrossFire would equal 9.6 TFLOPS, 5.6 more than four Tesla GP

      • NVIDIA has done a good job of making the processing power accessible to programmers who are not GPU coding experts. In addition, they have made hardware changes to better support the type of scientific computation being done on these devices.

        So, while in theory you could put together some Radeons, work with their API and achieve the same thing, NVIDIA has significantly reduced the level of effort needed to make it happen.
    • Depends on the kind of precision you want. Also, the big limiting factor in these kinds of apps is actually feeding the GPUs. Y'know that little glxgears test app that everybody uses to test their FPS? The glxgears framerate is really just the number of times per second that the driver can properly set up the card, prepare a display list, flush it to the card, and then swap the buffers. The card itself can usually go much faster than that.

      (And, of course, the point is, glxgears is probably the fastest thing that
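
      To make that concrete, here is a rough sketch (my own, not from the article) of how the "feeding the GPU" cost shows up in CUDA: time the host-to-device copy against a trivial kernel using CUDA events. On a PCIe bus the copy frequently dominates. The array size and names are just illustrative.

      /* feed_test.cu -- compile with: nvcc feed_test.cu -o feed_test */
      #include <stdio.h>
      #include <stdlib.h>
      #include <cuda_runtime.h>

      __global__ void scale(float *d, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) d[i] *= 2.0f;                        /* a trivial amount of math */
      }

      int main(void)
      {
          const int n = 1 << 22;                          /* 16 MB of floats */
          size_t bytes = n * sizeof(float);
          float *h = (float *)malloc(bytes);
          for (int i = 0; i < n; ++i) h[i] = 1.0f;

          float *d;
          cudaMalloc((void **)&d, bytes);

          cudaEvent_t t0, t1, t2;
          cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

          cudaEventRecord(t0, 0);
          cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   /* feeding the card */
          cudaEventRecord(t1, 0);
          scale<<<(n + 255) / 256, 256>>>(d, n);             /* doing the work */
          cudaEventRecord(t2, 0);
          cudaEventSynchronize(t2);

          float copy_ms, kernel_ms;
          cudaEventElapsedTime(&copy_ms, t0, t1);
          cudaEventElapsedTime(&kernel_ms, t1, t2);
          printf("copy: %.2f ms, kernel: %.2f ms\n", copy_ms, kernel_ms);

          cudaFree(d); free(h);
          return 0;
      }
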

  • by dgun ( 1056422 ) on Sunday November 23, 2008 @05:48AM (#25863409) Homepage
    What a rip.
  • by dleigh ( 994882 )
    At first glance I thought these used actual Tesla coils [wikipedia.org] in the processor, or that the devices were at least powered or cooled by some apparatus that used Tesla coils.

    Turns out "Tesla" is just the name of the product.

    Drat. I demand a refund.
    • They should at least come up with a "mad scientist lab pack" that includes some Tesla coils. Perhaps they presume that mad scientists will have their own gear.

      I just spent an entire morning trying out massive single throw switches.

      "Now, we'll SEE who's mad! [thunk]"

      "Now, we'll see who's MAD! [thunk]"

      In all fairness, these things can be pretty personal.

      • . . . that's probably exactly the person who would buy one of these.

        Folks who are professionally working on mainstream problems that require supercomputers, well, they probably have access to one already. (Maybe one of the supercomputing folks might want to chime in here; do you have enough access/time? Would a baby-supercomputer be useful to you?)

        But there is certainly someone out there who was denied access, because his idea was rejected by peer review. He is considered a loopy nut bag, because he

        • Perhaps there will be a resurgence in mad, unethical experimentation. In 20 years, this computer might acquire a status similar to that of the Altair 8800 home computer kit.

          I still say that 640 human embryos should be enough for anybody.

    • I thought of the car first. I figured that's how much battery you'd need to run it in a laptop.
  • by Anonymous Coward on Sunday November 23, 2008 @05:50AM (#25863421)

    The toolchain is binary only and has an EULA that prohibits reverse engineering.

    • by FireFury03 ( 653718 ) <slashdot@NoSPAm.nexusuk.org> on Sunday November 23, 2008 @06:23AM (#25863561) Homepage

      has an EULA that prohibits reverse engineering.

      Not really a big deal to those of us in the EU since we have a legally guaranteed right to reverse engineer stuff for interoperability purposes.

      • has an EULA that prohibits reverse engineering.

        Not really a big deal to those of us in the EU since we have a legally guaranteed right to reverse engineer stuff for interoperability purposes.

        Don't get cocky. It's only presently guaranteed. Laws change, and there's a whole lot of pressure to make that change.

      • by devman ( 1163205 )
        We do in the US as well; it's listed in the exceptions part of the DMCA and has been part of the U.S. Code for a while.
    • by JamesP ( 688957 )

      The toolchain is binary only and has an EULA that prohibits reverse engineering.

      Show me a non-free EULA that doesn't.

  • While the inner nerd in me screams to take out a loan against my house to buy one, I can't imagine this being very popular outside academia. Most users don't use the power of their crappy computers, let alone this. And then there is the whole "ECONOMY" thing.
    • Re: (Score:3, Insightful)

      by Yetihehe ( 971185 )
      It IS marketed for academia. Normal users don't really need to fold proteins or simulate nuclear weapons at home.
      • Re: (Score:2, Informative)

        I'm perfectly normal, and I fold proteins all the time [webshots.com].
      • Re: (Score:2, Interesting)

        by Anonymous Coward

        According to http://folding.stanford.edu/English/Stats about 250,000 "normal" users are folding proteins at home.

        Personally, I would use it as a render farm, but Blender compatibility could take a while if Nvidia keeps the drivers and specification locked up.

        What they don't seem to mention is the amount of memory/core (at 960 cores). I'd guess about 32 MB/core, and 240 cores sharing the same memory bus...

        • Err, you seem to have missed something fairly major in your understanding, specifically about what constitutes a 'core'. These cards are based on the same chip as the GT280, so they have 240 stream processors, which are very good at specific types of calculation (if I were wiser I could tell you what types, but I'm sure you can use Google yourself). I believe that each of the chips has a 512-bit wide bus to 4 GiB of memory. I'm not sure what the memory allocation per stream processor is but I think the other p
          • by DrYak ( 748999 )

            but I don't know enough about it to be able to give useful information on the subject.

            I do write some CUDA code, so I'll try to help.

            I believe that each of the chips has a 512-bit wide bus to 4 GiB of memory.

            Indeed, each physical package has full access to its own whole chunk of memory, regardless of how many "cores" the package contains (between 2 for the lowest-end laptop GPUs and 16 for the highest-end 8/9800 cards. Don't know about the GT280. But the summary is wrong: 240 is probably the number of ALUs or the width of the SIMD) and regardless of how many "stream processors" there are (each core has 8 ALUs, which are exposed as 32-wide SIMD processing units, which i
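
            If anyone wants to check what their own card actually reports rather than guessing at cores and memory per chip, the CUDA runtime exposes the counts directly. A small sketch using the standard cudaGetDeviceProperties query (the file name is made up; nothing here is specific to Tesla):

            /* devquery.cu -- compile with: nvcc devquery.cu -o devquery */
            #include <stdio.h>
            #include <cuda_runtime.h>

            int main(void)
            {
                int count = 0;
                cudaGetDeviceCount(&count);
                for (int dev = 0; dev < count; ++dev) {
                    cudaDeviceProp p;
                    cudaGetDeviceProperties(&p, dev);
                    printf("device %d: %s\n", dev, p.name);
                    printf("  multiprocessors (\"cores\"): %d\n", p.multiProcessorCount);
                    printf("  global memory: %lu MiB\n",
                           (unsigned long)(p.totalGlobalMem >> 20));
                    printf("  shared memory per block: %lu KiB\n",
                           (unsigned long)(p.sharedMemPerBlock >> 10));
                    printf("  clock rate: %d kHz\n", p.clockRate);
                }
                return 0;
            }
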

    • Yes but (Score:3, Funny)

      by Colin Smith ( 2679 )

      And then there is the whole "ECONOMY" thing.

      The whole reason the ECONOMY is in the tank is because there are not enough people like you taking loans out against their house to buy random stuff like this.

      Basically... IT'S ALL YOUR FAULT!

       

  • by rdnetto ( 955205 ) on Sunday November 23, 2008 @06:08AM (#25863503)

    4 TeraFLOPS should be more than enough for anybody...

  • Scientist speak (Score:2, Interesting)

    by jnnnnn ( 1079877 )

    So many scientists use the word "codes" when they mean "program(s)".

    Why is this?

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      It's cultural.

      You're not even allowed to say that you're "coding", but only that you produce "codes".

      Maybe it's because analytic science is based on equations, which become algorithms in computing, and you can't say that you're "equationing" or "algorithming".

      In practice it's actually dishonest, because the algorithms don't have the conceptual power of the equations that they represent (they would if programmed in LISP, but "codes" are mostly written in Fortran and C), so the computations are often question

  • weak DP performance (Score:5, Informative)

    by Henriok ( 6762 ) on Sunday November 23, 2008 @06:53AM (#25863655)
    In supercomputing circles (i.e. Top500.org), double-precision floating point performance seems to be what is desired. 4 TFLOPS single precision, while impressive, is overshadowed by the comparatively weak 80 GFLOPS double precision, which is beaten by a single PowerXCell 8i (successor to the Cell in the PS3) or the latest crop of Xeons. I'm sure Tesla will find its users, but we won't see them on the Top500 list anytime soon.
    • I'm just amazed that the performance loss from single to double precision is more than a factor of 10! It's only 2x the bits, what's the holdup?
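
      The usual explanation, an assumption on my part based on the commonly published GT200 figures (30 multiprocessors, each with 8 single-precision ALUs but only a single double-precision unit, at roughly a 1.3 GHz shader clock), is that double precision doesn't just use wider registers, it runs on far fewer execution units:

      \[
      \text{SP peak} \approx 240 \times 3\,\tfrac{\text{flops}}{\text{cycle}} \times 1.296\ \text{GHz} \approx 933\ \text{GFLOPS},
      \qquad
      \text{DP peak} \approx 30 \times 2\,\tfrac{\text{flops}}{\text{cycle}} \times 1.296\ \text{GHz} \approx 78\ \text{GFLOPS},
      \]

      a ratio of roughly 12:1, which matches the "more than a factor of 10" above (the 3 flops/cycle counts the dual-issued multiply-add plus multiply).
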
  • by Lazy Jones ( 8403 ) on Sunday November 23, 2008 @06:59AM (#25863679) Homepage Journal
    There were a lot of early efforts at implementing realtime raytracing engines for games (e.g. at Intel recently [intel.com]); let's port that stuff and have some fun.
    • On that note, it would be a good development platform for realtime raytraced game engines. That way the code would be mature when affordable GPUs come out that can match that level of performance.

  • Is it possible to build a smaller version of this configuration? I do not have 10K, but I can come up with something smaller for my PhD research. In that case, is this a package that can be replicated with off-the-shelf NVIDIA hardware, or do I need to wait for NVIDIA to release a smaller version?

    • by JamesP ( 688957 )

      Well, buy any card that supports CUDA (pretty much everything NVIDIA offers today, except that you probably want to stay away from the cheapest stuff).

      You can also try running a PS3 + Linux, or try the similar offerings from AMD/ATI.

      • by Fuzuli ( 135489 )

        Sorry, I should have been clearer. I'm aware of those solutions, but would they be the same in terms of processing power and software support (CUDA, related libraries, etc.)?
        I mean, is this a convenient repackaging of what is already out there, or does it have something extra?

        • by SpinyNorman ( 33776 ) on Sunday November 23, 2008 @10:44AM (#25864379)

          From NVIDIA's CUDA site, most of their regular display cards support CUDA, just with fewer cores (hence less performance) than the Tesla card. The cores that CUDA uses are what used to be called the vertex shaders on your (NVIDIA) card. The CUDA API is designed so that your code doesn't know/specify how many cores are going to be used: you just code to the CUDA architecture, and at runtime it distributes the workload to the available cores... so you can develop for a low-end card (or they even have an emulator) and then later pay for the hardware/performance you need.
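
          As a sketch of what that looks like in practice (my own toy example, not NVIDIA's): the launch configuration is computed from the problem size, so nothing in the code mentions how many cores the card has, and the runtime spreads the blocks over whatever multiprocessors exist.

          /* scale_anywhere.cu -- the same source runs on a cheap GeForce or a Tesla */
          #include <cuda_runtime.h>

          __global__ void add_one(float *data, int n)
          {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) data[i] += 1.0f;
          }

          int main(void)
          {
              const int n = 1000000;
              const int threads_per_block = 256;
              /* enough blocks to cover n elements; core count never appears */
              int blocks = (n + threads_per_block - 1) / threads_per_block;

              float *d;
              cudaMalloc((void **)&d, n * sizeof(float));
              cudaMemset(d, 0, n * sizeof(float));

              add_one<<<blocks, threads_per_block>>>(d, n);
              cudaDeviceSynchronize();

              cudaFree(d);
              return 0;
          }
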

        • Re: (Score:3, Informative)

          by kramulous ( 977841 ) *

          The $10K refers to a rack-mount solution containing 4 GPUs. You can still buy a single GPU and try putting it in a standard machine (provided it doesn't melt; I'd read the specs) for about a quarter of the price.

  • Erlang (Score:3, Interesting)

    by Safiire Arrowny ( 596720 ) on Sunday November 23, 2008 @07:25AM (#25863763) Homepage
    So how do you get an Erlang system to run on this?
    • Re: (Score:3, Insightful)

      By writing an Erlang-to-CUDA compiler?

      More seriously though, it is probably not worth even trying, since the GPUs used in the Tesla support a very limited model of parallelism. Shoehorning the flexibility of Erlang into that would at the very least result in a dramatic performance loss, if it is possible at all.

  • by bsDaemon ( 87307 ) on Sunday November 23, 2008 @07:58AM (#25863841)

    ... AMD has announced today its new Edison Personal Supercomputer technology.

    The game is on.

  • by Gearoid_Murphy ( 976819 ) on Sunday November 23, 2008 @08:16AM (#25863913)
    It's not about how many cores you have but how efficiently they can be used. If your CUDA application is in any way memory-intensive, you're going to experience a serious drop in performance. A read from the local cache (shared memory) is 100 times faster than a read from the main RAM on the card, and that cache is only 16 KB. I spend most of my time figuring out how to minimise data transfers. That said, CUDA is probably the only platform that offers a realistic means for a single machine to tackle problems requiring gargantuan computing resources.
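
    For anyone curious what "minimising data transfers" looks like in code, here is a toy sketch (my own; the sizes and the 3-point average are just for illustration) that stages a tile of the input in that 16 KB of per-multiprocessor shared memory, so each word of the card's main memory is fetched once and then reused by neighbouring threads:

    /* smooth.cu -- compile with: nvcc smooth.cu -o smooth */
    #include <cuda_runtime.h>

    #define TILE 256

    __global__ void smooth(const float *in, float *out, int n)
    {
        __shared__ float tile[TILE + 2];                /* tile plus one halo cell each side */
        int g = blockIdx.x * blockDim.x + threadIdx.x;  /* global index */
        int l = threadIdx.x + 1;                        /* local index inside the tile */

        tile[l] = (g < n) ? in[g] : 0.0f;               /* one global read per thread */
        if (threadIdx.x == 0)                           /* edge threads also fetch the halo */
            tile[0] = (g > 0) ? in[g - 1] : 0.0f;
        if (threadIdx.x == blockDim.x - 1)
            tile[l + 1] = (g + 1 < n) ? in[g + 1] : 0.0f;
        __syncthreads();                                /* wait until the whole tile is loaded */

        if (g < n)                                      /* neighbours now come from shared memory */
            out[g] = (tile[l - 1] + tile[l] + tile[l + 1]) / 3.0f;
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *in, *out;
        cudaMalloc((void **)&in, n * sizeof(float));
        cudaMalloc((void **)&out, n * sizeof(float));
        cudaMemset(in, 0, n * sizeof(float));

        smooth<<<(n + TILE - 1) / TILE, TILE>>>(in, out, n);
        cudaDeviceSynchronize();

        cudaFree(in);
        cudaFree(out);
        return 0;
    }
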
    • by anon mouse-cow-aard ( 443646 ) on Sunday November 23, 2008 @08:52AM (#25864025) Journal
      People are always coming out of the woodwork to claim supercomputer performance with such-and-such a solution; go back and look at GRAPE (which is really cool) http://arstechnica.com/news.ars/post/20061212-8408.html [arstechnica.com] or a lot of other supercomputer clusters. When you want something flexible, you look for "balance", meaning a good relationship between memory capacity, latency & bandwidth, and compute power. In terms of memory capacity, the number people talk about is 1 byte/flop... that is, 1 TByte of memory is about right to keep 1 TFLOPS flexibly useful. This thing has 4 GB of memory for 4 TFLOPS... in other words, 1 byte per 1000 flops. It's going to be hard to use in a general-purpose way.
    • In the past I was not very impressed by things like http://www-graphics.stanford.edu/projects/brookgpu/ [stanford.edu] because of the latency involved in actually transferring data back and forth between CPU and GPU memory, so I observed the same thing. But now that the actual latency for transfers seems to be reduced because of PCI-e, one might wonder if decent compiler technology is able to optimise 'normal' code for GPU instructions.
  • Patmos International (Score:3, Interesting)

    by Danzigism ( 881294 ) on Sunday November 23, 2008 @09:57AM (#25864197)
    Ahh yes, the idea of personal supercomputing. Back in '99 I worked for Patmos International. We were at the Linux Expo that year as well, if some of you remember. Our dream was to have a parallel supercomputer in everyone's home. We used mostly Lisp and Daisy for the programming aspect. The idea was wonderful, but it eventually came to a screeching halt when nothing was being sold. It was ahead of its time for sure. You can find out a little more about it here. [archive.org] I find the whole idea of symbolic multiprocessing very fascinating, though.
  • Will it run Duke Nukem For... eh, you all know where this is going...
  • Personal supercomputer? Surely it's cool, but how about turning the whole Internet into a supercomputer?

    Make the Internet fast enough and equip every node with a network operating system to share its resources with all other nodes. Sounds like a security nightmare, but let's focus on the performance part for now. Every one of us has a CPU, a storage device (e.g. an SSD), and some RAM. But not all of us use all of our CPU, SSD, or RAM at the same time. While I play a game, effectively making my CPU work at 100

Get hold of portable property. -- Charles Dickens, "Great Expectations"
