
NVIDIA CEO Unveils Volta Graphics, Tegra Roadmap, GRID VCA Virtualized Rendering

MojoKid writes "NVIDIA CEO Jen-Hsun Huang kicked off this year's GPU Technology Conference with his customary opening keynote. The focus of Jen-Hsun's presentation was on unveiling a new GPU core code named 'Volta' that will employ stacked DRAM for over 1TB/s of memory bandwidth, as well as updates to NVIDIA's Tegra roadmap and a new remote rendering appliance called 'GRID VCA.' On the mobile side, Tegra's next generation 'Logan' architecture will feature a Kepler-based GPU and support CUDA 5 and OpenGL 4.3. Logan will offer up to 3X the compute performance of current solutions and be demoed later this year, with full production starting early next year. For big iron, NVIDIA's GRID VCA (Visual Computing Appliance) is a new 4U system based on NVIDIA GRID remote rendering technologies. The GRID hypervisor supports 16 virtual machines (1 per GPU) and each system will feature 8-Core Xeon CPUs, 192GB or 384GB of RAM, and 4 or 8 GRID boards, each with two Kepler-class GPUs, for up to 16 GPUs per system. Jen-Hsun demo'd a MacBook Pro remotely running a number of applications on GRID, like 3D StudioMax and Solidworks, which aren't even available for Mac OS X natively."
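For a rough sense of how the GRID VCA numbers fit together, the sketch below simply multiplies out the figures quoted above. Pairing 192GB of RAM with the 4-board box and 384GB with the 8-board box, and splitting the RAM evenly across VMs, are assumptions made for illustration; the keynote did not spell either out.

    # Back-of-the-envelope totals for the two GRID VCA configurations quoted
    # in the story. The even per-VM RAM split is an assumption for illustration;
    # NVIDIA did not specify how memory is divided between virtual machines.

    GPUS_PER_BOARD = 2   # each GRID board carries two Kepler-class GPUs
    VMS_PER_GPU = 1      # the GRID hypervisor maps one VM per GPU

    configs = [
        {"boards": 4, "ram_gb": 192},
        {"boards": 8, "ram_gb": 384},
    ]

    for cfg in configs:
        gpus = cfg["boards"] * GPUS_PER_BOARD
        vms = gpus * VMS_PER_GPU
        print(f"{cfg['boards']} boards -> {gpus} GPUs, {vms} VMs, "
              f"~{cfg['ram_gb'] / vms:.0f} GB RAM per VM")

    # 4 boards -> 8 GPUs, 8 VMs, ~24 GB RAM per VM
    # 8 boards -> 16 GPUs, 16 VMs, ~24 GB RAM per VM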
  • by Anonymous Coward on Tuesday March 19, 2013 @09:30PM (#43219827)

    Nvidia has had solid success, but the future is looking ever more troubling. The exotic ultra-high-end toys that Nvidia promotes (expensive racks of stuff) didn't help keep Sun or Silicon Graphics afloat either.

    Nvidia's important markets are discrete GPUs for desktop and notebook PCs and its ARM SoC tablet/ARMbook parts.

    -The desktop GPUs. Nvidia is held hostage by TSMC's ability to fabricate better chips (on smaller processes). Nvidia itself issued a white paper in which it predicted that, for high-end GPU chips, the costs of moving to a new process would soon outweigh the advantages gained over staying on the previous one. In fairness, this pessimism was driven by TSMC's horrific incompetence at the 28nm node. Nvidia's talk of a future GPU with exotic stacked DRAM is very troubling indeed, since companies usually only focus on such bizarre idiocy (like holographic optical storage) when traditional solutions are failing them. Building special chips is insanely expensive, especially when you consider that ordinary DRAM is rapidly getting cheaper and faster. As Google proves, commodity hardware solutions beat specialised ones.

    -The mobile PC GPU. Nvidia was forced out of the PC motherboard chipset biz by Intel and AMD. Now Intel and AMD are racing to build APUs (combined CPUs and GPUs) with enough grunt for most mobile PC users. Nvidia chose to make ARM parts rather than create its own x86 CPU, so the APU route is not an option for it. The case for an OEM adding a discrete Nvidia GPU to a mobile PC is weakening rapidly; Nvidia can only compete at the ultra-high end. Maybe the stacked DRAM is a play for this market.

    -The Tegra ARM SoC. Tegra has proven a real problem for Nvidia, again because of TSMC's inability to deliver. However, Nvidia also faces a problem over exactly what type of ARM parts the market currently needs. Phone parts need to be very low power, something Nvidia struggles to master. Tablet parts need a balance of cost, power and performance, and there is no current 'desktop' ARM market outside the Chromebook (yeah, I know that's a notebook). The Chinese ARM SoC companies are coming along at a terrifying pace.

    Nvidia has stated that it will put modern PC GPU cores in the next Tegra (5), although it frequently uses such terms dishonestly. Logan would arrive around the end of 2014, and would require Android to have gone fully notebook/desktop by then to give the expensive Tegra 5 a decent marketplace. Even so, Samsung and Qualcomm would be looking to smash it, and PowerVR is seeking to crush Nvidia's GPU advantage. Nvidia would need a win from someone like Apple, and only if Apple gives up designing its own chips.

    In the background is the emerging giant, AMD. AMD's past failures mean too many people do not understand the nature of its threat to Intel and Nvidia. AMD has a 100% record of design wins in new forward-thinking products in the PC space, including all the new consoles and the first decent tablets coming from MS later this year. Unlike Nvidia, AMD can make its parts in multiple fabs. AMD also owns the last great x86 CPU core, Jaguar. AMD is leading the HSA initiative, and can switch to ARM cores when that proves useful.

    Sane analysis would project a merger between Intel and Nvidia as the best option for both companies, but this has been discussed many times in the past and failed because Nvidia refuses to 'bend the knee'. Alone, Nvidia is now far too limited in what it can produce. Server-side cloud rendering products have proven fatal to many a previous company. High-end scientific supercomputing is a niche that can be exploited, but a niche that would shrink Nvidia considerably.

    Shouldn't Nvidia have expected to become another Qualcomm by now? Even though Nvidia makes few things, it still spreads itself too thin and focuses on too many blue-sky gimmick concepts. 3D glasses, PhysX and Project SHIELD get Nvidia noticed, but then Nvidia seemingly starts to believe its own publicity. It doesn't help that Nvidia is sitting back as the PC market declines, eroding one of the key sources of its income. The coming excitement is the new consoles from Sony and MS, and Nvidia has no part in them.

  • by viperidaenz ( 2515578 ) on Tuesday March 19, 2013 @10:26PM (#43220155)

    NVidia can't make an x86 CPU/APU/whatever. It took over a decade of court battles between AMD and Intel to settle their shit. They now have a deal where they share each other's patents. NVidia has nothing to share, so good luck getting a good price on the licenses.

    NVidia was forced out of the chipset market because every new CPU needs a new chipset, and it became very expensive for them to keep developing new chips. There's also pretty much nothing left in a chipset anymore: no memory controller, no integrated video. That's all on the CPU now. Where is the value proposition for an NVidia chipset? They make video hardware, and all that is left on a north/south bridge is a bunch of SATA controllers and other peripherals no one really cares about.

    Stacked DRAM isn't actually new. It's known as "Package on Package". The traditional benefits are smaller size, less board space and fewer traces required. The positive side effect is very short electrical paths and the ability to pack a lot of them densely.
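    To put the story's "over 1TB/s" Volta claim in context, here is a quick bandwidth sketch. NVIDIA has not disclosed the actual interface width or data rate, so the 4096-bit and 2 GT/s figures below are purely hypothetical, chosen only to show why stacking DRAM next to the GPU, which allows a very wide bus, is the obvious route to that number.

        # Illustrative arithmetic only: NVIDIA did not disclose Volta's actual
        # interface width or clock, so the 4096-bit / 2 GT/s figures below are
        # assumptions chosen to show how a very wide stacked-DRAM bus reaches ~1 TB/s.

        def peak_bandwidth_gb_s(bus_width_bits, data_rate_gt_s):
            """Peak bandwidth = (bus width in bytes) * (transfers per second)."""
            return (bus_width_bits / 8) * data_rate_gt_s

        # A conventional 384-bit GDDR5 interface at 6 GT/s (2013 high-end Kepler):
        print(peak_bandwidth_gb_s(384, 6.0))     # 288.0 GB/s

        # A hypothetical 4096-bit stacked-DRAM interface at a modest 2 GT/s:
        print(peak_bandwidth_gb_s(4096, 2.0))    # 1024.0 GB/s, i.e. the ~1 TB/s headline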
