NVIDIA CEO Unveils Volta Graphics, Tegra Roadmap, GRID VCA Virtualized Rendering

MojoKid writes "NVIDIA CEO Jen-Hsun Huang kicked off this year's GPU Technology Conference with his customary opening keynote. The focus of Jen-Hsun's presentation was on unveiling a new GPU core code named 'Volta' that will employ stacked DRAM for over 1TB/s of memory bandwidth, as well as updates to NVIDIA's Tegra roadmap and a new remote rendering appliance called 'GRID VCA.' On the mobile side, Tegra's next generation 'Logan' architecture will feature a Kepler-based GPU and support CUDA 5 and OpenGL 4.3. Logan will offer up to 3X the compute performance of current solutions and be demoed later this year, with full production starting early next year. For big iron, NVIDIA's GRID VCA (Visual Computing Appliance) is a new 4U system based on NVIDIA GRID remote rendering technologies. The GRID hypervisor supports 16 virtual machines (1 per GPU) and each system will feature 8-Core Xeon CPUs, 192GB or 384GB of RAM, and 4 or 8 GRID boards, each with two Kepler-class GPUs, for up to 16 GPUs per system. Jen-Hsun demo'd a MacBook Pro remotely running a number of applications on GRID, like 3D StudioMax and Solidworks, which aren't even available for Mac OS X natively."
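
For readers keeping count, here is a quick sketch of the GPU and VM totals implied by the configurations in the summary (the pairing of each RAM size with a board count is my assumption; the two-GPUs-per-board and one-VM-per-GPU figures come from the summary itself):

    # Sketch of the two GRID VCA configurations described above:
    # 4 or 8 GRID boards, each with two Kepler-class GPUs, and the GRID
    # hypervisor assigning one virtual machine per GPU.
    GPUS_PER_BOARD = 2

    for boards, ram_gb in [(4, 192), (8, 384)]:   # RAM/board pairing assumed
        gpus = boards * GPUS_PER_BOARD            # total GPUs in the 4U chassis
        vms = gpus                                # one VM per GPU, per the summary
        print(f"{boards} boards, {ram_gb}GB RAM -> {gpus} GPUs, {vms} concurrent VMs")
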
  • This Volta sounds pretty exciting. DRAM bandwidth is commonly a limiting factor in GPGPU applications, so if it can deliver 1TB/s, it'll be more than 3x faster for memory-bound kernels than the current high-end scientific computing cards (e.g. the Tesla K20). With that said, I'm a bit apprehensive about how much it'll cost; Tesla K20s currently cost over $2k per card...
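    A rough sketch of that speedup claim for bandwidth-bound kernels, assuming the commonly quoted ~208 GB/s peak memory bandwidth for the Tesla K20 (treat the figure as approximate) and the ~1 TB/s claimed in the keynote:

        # For a purely bandwidth-bound kernel, runtime scales inversely with
        # memory bandwidth, so the ideal speedup is just the bandwidth ratio.
        k20_bw_gbs = 208      # Tesla K20 peak memory bandwidth (approximate)
        volta_bw_gbs = 1000   # ~1 TB/s claimed for Volta's stacked DRAM

        speedup = volta_bw_gbs / k20_bw_gbs
        print(f"Ideal speedup for memory-bound kernels: {speedup:.1f}x")  # ~4.8x, i.e. "more than 3x"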
    • by godrik ( 1287354 )

      1TB/s of memory bandwidth is indeed impressive. I work on quite a few memory-intensive kernels (graph algorithms) on accelerators (GPU, Xeon Phi), and bandwidth is a significant bottleneck. Kepler did not bring a significant bandwidth improvement over Fermi, and Xeon Phi is in the same area. But 1TB/s seems tremendous. I am impatient to get my hands (or my ssh) on one of these.

      • 1TB/s of memory bandwidth is indeed impressive

        I do not understand why everybody and their great-grandmother's dog are drooling all over and going goo-goo-gaa-gaa over the "memory bandwidth" thing.

        Even if that rig is dedicated to massive game-playing, what portion of the time does the GPGPU need to tap the full strength of the 1TB/s memory bandwidth?

        Furthermore, the average rig wouldn't even spend 0.1% of its time hitting the 1TB/s threshold.

        Which means, 99.9% of the time the GPGPU can get by with lower memory bandwidth requirements

        Remember, 1TB/s of NO

        • by godrik ( 1287354 )

          Well, if you do not need massive bandwidth, then you do not need it. Personally, I do not use GPUs for graphics; I use them for sparse computations (multiplying sparse matrices or traversing graphs). On these computations the main bottleneck is memory bandwidth. So if the memory bandwidth increases by a factor of 3, I will see an immediate performance improvement of at least 50%, potentially a factor of three once the kernels are optimized for the new architecture.
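          A minimal model of why sparse kernels like these track memory bandwidth almost linearly; the matrix size and CSR byte counts below are illustrative assumptions, not measurements:

              # Toy model: CSR sparse matrix-vector multiply is dominated by streaming
              # the matrix, so estimated runtime is roughly bytes_moved / bandwidth.
              nnz = 100_000_000                  # illustrative: 100M nonzeros
              bytes_per_nnz = 8 + 4              # double value + 32-bit column index
              bytes_moved = nnz * bytes_per_nnz  # ignoring row pointers and vector traffic

              for name, bw_gbs in [("~200 GB/s card", 200), ("~1 TB/s stacked DRAM", 1000)]:
                  est_ms = bytes_moved / (bw_gbs * 1e9) * 1e3
                  print(f"{name}: ~{est_ms:.1f} ms per SpMV")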

        • This is a major improvement for GPGPU, not game playing. Memory throughput is often the bottleneck in applications, as computational throughput improvements have greatly outstripped memory throughput improvements. To give you an idea of the importance of memory bandwidth: if you have a GPU with a peak arithmetic throughput of 1170 GFLOPS (this is how much a Tesla K20 gets for double precision floating point) performing FMA (fused multiply add, so 2 floating point operations for 3 operands), then to sustai
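          The comment above is cut off, but the arithmetic it sets up is easy to reconstruct as a sketch. Assuming every FMA operand is a double streamed from DRAM with no cache or register reuse (the worst case):

              # Bandwidth needed to feed the K20's peak double-precision rate if
              # every FMA read all three operands straight from DRAM.
              peak_gflops = 1170       # Tesla K20 DP peak, per the parent comment
              flops_per_fma = 2        # fused multiply-add = 1 multiply + 1 add
              operands_per_fma = 3     # a*b + c
              bytes_per_operand = 8    # double precision

              fmas_per_sec = peak_gflops * 1e9 / flops_per_fma
              required_gbs = fmas_per_sec * operands_per_fma * bytes_per_operand / 1e9
              print(f"Bandwidth to feed peak FMA rate from DRAM: ~{required_gbs:,.0f} GB/s")  # ~14,000 GB/s

          Even 1 TB/s covers only a fraction of that, so on-chip data reuse still matters; the point is simply how far memory bandwidth lags behind compute.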

  • Staggering specifications, but maybe several years from now it'll be commonplace on a $50 smartphone. One year after that it'll be in kerbside hard rubbish collections. Sigh...
  • So we're back to the heavy mainframe and thin client topology now?

  • by Anonymous Coward on Tuesday March 19, 2013 @08:30PM (#43219827)

    Nvidia has had solid success, but the future is looking ever more troubling. The exotic ultra-high end toys that Nvidia promotes (expensive racks of stuff) didn't help keep Sun or Silicon Graphics afloat either.

    Nvidia's important markets are discrete GPUs for desktop and notebook PCs and its ARM SoC tablet/ARMbook parts.

    -The desktop GPUs. Nvidia is held hostage by TSMC's ability to fabricate better chips (on smaller processes). Nvidia itself issued a white paper predicting that the costs associated with moving to a new process would soon outweigh the advantages over staying with the previous one (for high end GPU chips). In fairness, this pessimism was driven by TSMC's horrific incompetence at the 28nm node. Nvidia's talk of a future GPU with exotic stacked DRAM is very troubling indeed, since companies usually only focus on such bizarre idiocy (like holographic optical storage) when traditional solutions are failing them. Building special chips is insanely expensive, especially when you consider that ordinary DRAM is rapidly getting cheaper and faster. As Google proves, commodity hardware solutions beat specialised ones.

    -The mobile PC GPU. Nvidia was forced out of the PC motherboard chipset biz by Intel and AMD. Now Intel and AMD are racing to build APUs (combined CPUs and GPUs) with enough grunt for most mobile PC users. Nvidia chose to start making ARM parts over creating its own x86 CPU, so the APU is not an option for Nvidia. The logic of an OEM choosing to add Nvidia GPUs to mobile devices is declining rapidly. Nvidia can only compete at the ultra-high end. Maybe the stacked DRAM is a play for this market.

    -The Tegra ARM SoC. Tegra has proven a real problem for Nvidia, again because of TSMC's inability to deliver. However, Nvidia also faces a problem over exactly what type of ARM parts are currently needed by the market. Phone parts need to be very low power, something Nvidia struggles to master. Tablet parts need a balance between cost, power and performance, and there is no current 'desktop' market outside the Chromebook (yeah, I know that's a notebook). The Chinese ARM SoC companies are coming along at a terrifying pace.

    Nvidia has stated that it will place modern PC GPU cores in the next Tegra (5) although Nvidia frequently uses such terms dishonestly. Logan would be around the end of 2014, and would require Android to have gone fully notebook/desktop by that time to have a decent marketplace for the expensive Tegra 5. Even so, Samsung and Qualcomm would be looking to smash them, and PowerVR is seeking to crush Nvidia's GPU advantage. Nvidia would need a win from someone like Apple, if Apple gives up designing its own chips.

    In the background is the emerging giant, AMD. AMD's past failures mean too many people do not understand the nature of AMD's threat to Intel and Nvidia. AMD has a 100% record of design wins in new forward-thinking products in the PC space. This includes all the new consoles, and the first decent tablets coming from MS later this year. Unlike Nvidia, AMD can make its parts in multiple fabs. AMD also owns the last great x86 CPU core, the Jaguar. AMD is leading the HSA initiative, and can switch to using ARM cores when that proves useful.

    Sane analysis would project a merger between Intel and Nvidia as the best option for both companies, but this has been discussed many times in the past, and failed because Nvidia refuses to 'bend the knee'. Alone, Nvidia is now far too limited in what it can produce. The server-side cloud rendering products have proven fatal to many a previous company. High-end scientific supercomputing is a niche that can be exploited, but a niche that would wither Nvidia considerably.

    Shouldn't Nvidia have expected to have become another Qualcomm by now? Even though Nvidia makes few things, it still spreads itself too thin, and focuses on too many bluesky gimmick concepts. 3D glasses, PhysX and Project SHIELD get Nvidia noticed, but then Nvidia seemingly starts to believe its own publicity. It doesn't help that Nvidia is sitting back as the PC market declines, eroding one of the key sources of its income. The coming excitement is the new consoles from Sony and MS, and Nvidia has no part in them.

    • by viperidaenz ( 2515578 ) on Tuesday March 19, 2013 @09:26PM (#43220155)

      NVidia can't make an x86 CPU/APU/whatever. It took over a decade of court battles between AMD and Intel to settle their shit. They now have a deal where they share each other's patents. NVidia has nothing to share, so good luck getting a good price on the licenses.

      NVidia was forced out of the chipset market because every new CPU needs a new chipset, and it became very expensive for them to keep developing new chips. There's also pretty much nothing left in them. No memory controller, no integrated video; that's all on the CPU now. Where is the value proposition for an NVidia chipset? They make video hardware. All that is left on a north/south bridge is a bunch of SATA controllers and other peripherals no one really cares about.

      Stacked DRAM isn't actually new. It's known as "Package on Package". The traditional benefits are smaller size, less board space, and fewer traces required. The positive side effect is very short electrical paths and the ability to pack a lot of them densely.

      • Comment removed based on user account deletion
        However I do recall a development from a few years back that effectively placed something like heatpipes inside the layers of the chips, allowing heat to be pushed out to the surface of the chip. TBH I wonder what level of fragility our CPUs are running on...

        What you were describing sounds like what Intel did to produce a high-bandwidth chip (the Pentium 4) when their Pentium 3 failed to scale against the original AMD Athlon. That would seem to indicate they finally hit the wall CPUs ran into 10 years ago...

      • by rsmith-mac ( 639075 ) on Wednesday March 20, 2013 @01:04AM (#43221155)

        Respectfully, I don't know why this was modded up. There's a lot of bad information in here.

        On the one hand, you're right that NVIDIA can't get into the x86 CPU market; Intel keeps that under lock and key. NVIDIA does have things to share (a lot of important graphics IP), but it wouldn't be enough to get Intel to part with an x86 license (NVIDIA has tried that before).

        However you're completely off base on the rest. Cost has nothing to do with why NVIDIA is out of the Intel chipset business. NVIDIA's chipset business was profitable to the very end. The problem was that on the Intel side of things NVIDIA only had a license for the AGTL+ front side bus, but not the newer DMI or QPI buses [arstechnica.com] that Intel started using with the Nehalem generation of CPUs. Without a license for those buses, NVIDIA couldn't make chipsets for newer Intel CPUs, and that effectively ended their chipset business (AMD's meager x86 sales were not enough to sustain a 3rd party business).

        NVIDIA and Intel actually went to court over that and more; Intel eventually settled by giving NVIDIA over a billion dollars. You are right though that there's not much to chipsets these days, and if NVIDIA was still in the business they likely would have exited it with Sandy Bridge.

        As for stacked DRAM: that is very, very different from PoP RAM. PoP uses traditional BGA balls to connect DRAM to a controller [wikimedia.org], with the contacts for the RAM being along the outside rim of the organic substrate that holds the controller proper. Stacked DRAM uses through-silicon vias: they're literally going straight down/up through layers of silicon to make the connection. The difference, besides the massive gulf in manufacturing difficulty, is that PoP doesn't lend itself to wide memory buses (you have all those solder balls and need space on the rim of the controller for them), while stacked DRAM allows for wide memory buses since you can connect directly to the controller. The end result in both cases is that the RAM is on the same package as the controller, but their respective complexity and performance are massively different.
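        To make the "wide memory bus" point concrete, here is an illustrative comparison; the bus widths and transfer rates below are hypothetical round numbers chosen for the example, not announced Volta specs:

            # Bandwidth = (bus width in bits / 8) * transfer rate in GT/s.
            # A narrow, fast off-package bus and a very wide, slower stacked/TSV
            # interface land in different bandwidth classes.
            def bandwidth_gbs(bus_bits, gigatransfers_per_sec):
                return bus_bits / 8 * gigatransfers_per_sec

            print(f"384-bit GDDR5 @ 6 GT/s:         ~{bandwidth_gbs(384, 6):.0f} GB/s")   # ~288 GB/s
            print(f"4096-bit stacked DRAM @ 2 GT/s: ~{bandwidth_gbs(4096, 2):.0f} GB/s")  # ~1 TB/s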

          My bad, stacked DRAM isn't PoP; it's that thing Intel and Micron did in 2011 and called Hybrid Memory Cube, with the prototype getting 1Tbps.

      • NVidia can't make an x86 CPU/APU/whatever. It took over a decade of court battles between AMD and Intel to settle their shit. They now have a deal where they share each other's patents. NVidia has nothing to share, so good luck getting a good price on the licenses.

        NVidia could buy VIA...

        • VIA can't compete with AMD, let alone Intel.
          They were good at the low-power end around 10 years ago but now lag behind even there.

          • It was asserted that NVidia needs patent licenses to build x86 CPUs. VIA builds x86 CPUs; therefore VIA must have the patent licenses. If NVidia bought VIA, then NVidia would have the patent licenses and be able to build x86 CPUs.

            NVidia would still have to catch up, but competing would at least become legally possible.

            (Unless VIA only has licenses for old x86 technology, which would explain why they've lagged so far behind...)

              • VIA has access to all the technology; they implement SSSE3, SSE4.1, and x86-64 in their latest quad-core processors. I think the problem is that it's not exactly easy to rival the performance of Intel CPUs or even AMD ones. None of their chips have ever gone above 2GHz.

                VIA has some cross-licensing with Intel and an agreement that is about to lapse. They don't make chipsets for Intel any more because the 2003 agreement only gave them those patents for 4 years. They also need to pay Intel royalties as well

    • by tyrione ( 134248 )
      Agreed on all points about AMD. Their HSA initiative, the direction of GCN with FX and GPGPU designs, and their APUs tying it all together while embracing ARM64 hybrids make their future enormous.
    • There's also the point of diminishing returns from the consumer side. I upgraded my video card for Christmas. The bottleneck for my PC's performance is not my video card, and it probably won't be until my system is ready to be completely redone again in three years. It used to be that when the video card was the limiting factor for better performance in games, you had an incentive to upgrade on a yearly basis. Now, I'd need a new motherboard and processor to improve the performance of my games, because the
      • I think high density displays (4K+) are coming, and that will need a lot more GPU horsepower... like 4X the horsepower from a GPU.
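        The "4X" figure follows directly from the pixel counts; a small sketch using the standard 1080p and 4K UHD resolutions:

            # Pixel-count ratio between 4K UHD and 1080p, which is roughly the extra
            # shading/fill work a GPU needs for the same scene at the higher resolution.
            pixels_1080p = 1920 * 1080
            pixels_4k = 3840 * 2160
            print(f"4K has {pixels_4k / pixels_1080p:.0f}x the pixels of 1080p")  # 4x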
      • by Tynin ( 634655 )

        There's also the point of diminishing returns from the consumer side...

        I'm afraid that you, Sir, are discounting the electrically-priced-out hordes of BitCoin miners that would love to see more shader/stream processors added to their GPUs at all costs, in such an enormous quantity that they would forever yield an efficient stream of never-ending currency! The ASIC invasion must be met with swift and decisive victories in the GPU market! So say'th the Poor Hashers of Satoshi Nakamoto...

        In The Block, We Trust.

    • In the background is the emerging giant, AMD. AMD's past failures mean too many people do not understand the nature of AMD's threat to Intel and Nvidia. AMD has a 100% record of design wins in new forward-thinking products in the PC space.

      Hrm, while I agree with a good deal of the rest of your post, how does this manage to not include the bulldozer architecture? As a largely AMD customer myself, I'm not sure I can bring myself to call that a "design win".

    • Stacked DRAM is not cold fusion or holographic storage or flying cars.
      What they've announced is similar to Intel Haswell GT3e, which is a real product that runs today, awaiting commercial launch. "Silicon interposer" or "2.5D stacking" are maybe more useful terms.
      It will become an industry standard; the memory bandwidth wall can't be written away like you do. AMD APUs are really crippled by their bandwidth, for instance, and using quad channel or GDDR5 as system memory is an expensive proposition.

    • This. Agree with everything.

      Was going to mention consoles, but you did. For fun you could have linked the slashdot story the other week about nVidia "turning down" PS4 development.

      nVidia makes great GPUs, no doubt about that. However the future looks a bit grim when you start looking at the larger picture and all the challenges and forces arrayed against nVidia.

      The world is full of companies that make great products but fail anyway due to other factors. A relevant example is 3dfx. I had a Voodoo3 3000 16MB ba

  • Pinging Google takes 20ms from my home computer. I can see how it might be possible in twenty years with a fiber optic connection, but not in five years, and certainly not on a cell phone network. I can imagine certain programming techniques, but I'm sure most of them are already implemented just to get lag down on a standalone computer, and the rest would require games to be designed and programmed around high feedback lag. Some of the techniques I can imagine would trade more bandwidth to make up for the lag.

    Expla

    • by Luckyo ( 1726890 )

      The idea seems to be more of an "nvidia shield" style of remote rendering for most cases, I think. Your powerful nvidia-based home PC renders the game and you can play it anywhere within your house over Ethernet at latencies of 1-2ms.

      The GRID solution for a lot of virtual systems could be used in netcafes and big tournaments, I suppose. I agree that it's hard to imagine remote gaming at all (not just in the near future) simply because latency cannot be pushed low enough once you leave the immediate vicinity in terms of
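      A rough latency budget makes the local-versus-remote distinction in this thread concrete; the per-stage numbers below are illustrative assumptions, not measured GRID figures:

          # Input-to-photon latency for remote rendering: network round trip plus
          # render, encode, and decode time, compared against a 60 fps frame time.
          frame_time_ms = 1000 / 60    # ~16.7 ms per frame at 60 fps

          def added_latency_ms(rtt_ms, render_ms=10, encode_ms=5, decode_ms=5):
              return rtt_ms + render_ms + encode_ms + decode_ms

          for name, rtt in [("LAN (1-2 ms RTT)", 2), ("good WAN (20 ms RTT)", 20), ("cellular (80 ms RTT)", 80)]:
              print(f"{name}: ~{added_latency_ms(rtt):.0f} ms added vs. a {frame_time_ms:.1f} ms frame budget")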

  • Nvidia ditching whatever embedded GPU Tegra currently has parallels Intel dropping PowerVR for their latest Bay Trail Atom.

    I wonder if this means the nouveau driver will be compatible with one's ARM tablet. If so, Canonical's convoluted architecture for Mir (embracing Android blobs) might be short-lived, with lima, freedreno, nouveau, and intel all targeting Xorg/Wayland, leaving PowerVR solutions as the odd one out.

  • Good for saving space. Good for speeding things up. Bad for heat dissipation.

    How can I increase the thermal resistance of my processor.... I know, stick a DRAM chip between it and the heat sink!

    • by Anonymous Coward

      The DRAM stack is undoubtedly alongside the GPU die, most likely on a silicon interposer (for fine-pitch/high-density routing); a 2.5D solution, as someone else mentioned. For high-end and high-power parts, the DRAMs are not going to be between the GPU die and the heat sink...

  • If nVidia didn't already have the de-facto standard for professional 3D graphics, they sure do now.
  • Is there somewhere a technical overview of what GRID is? Wikipedia seems unaware of it.
  • So my choices of video card are slow and expensive (Intel integrated; Intel CPUs cost more than AMD CPUs if you don't care about single-thread performance, and I don't, and the motherboards cost much more as well), crap drivers (AMD), or evil lock-in (nVidia).

    All I want for Christmas is a graphics card company I can buy from without feeling like an asshole.

  • by should_be_linear ( 779431 ) on Wednesday March 20, 2013 @07:11AM (#43222421)
    1 TB/s? That bus is insane, it could transfer my whole porn collection in less than 5 minutes!
  • They will not support 3 displays on the newer nvidia cards; that's a Windows-only feature unless you buy the Quadro cards. Never buying nvidia again.
