Test-Driving NVIDIA's GRID GPU Cloud Computing Platform
MojoKid writes: "NVIDIA recently announced that it would offer a free 24-hour test drive of NVIDIA GRID to anyone who wanted to see what the technology could do, and it turns out to be pretty impressive. NVIDIA's GRID is a virtual GPU technology that allows for hardware acceleration in a virtual environment. It's designed to run in concert with products from Citrix, VMware, and Microsoft, and to address some of the weaknesses of those platforms. The problem with many conventional Virtual Desktop Infrastructure (VDI) deployments is that they're often either too slow for advanced graphics work or unable to handle 3D workloads at all. With GRID, NVIDIA claims it can offer a vGPU passthrough solution that lets remote users access a virtualized desktop environment built around a high-end CPU and GPU. The test systems the company is using for these 24-hour test drives all use a GRID K520, which is essentially two GK104 GPUs on a single PCB with 8GB of RAM. The test-drive program is still in beta, the deployment range is considerable, and the test drives themselves are configured for a 1366x768 display at 30 FPS with a 10 Mbit/s bandwidth cap."
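A quick sanity check on that stream configuration (my own back-of-the-envelope arithmetic, not from NVIDIA's materials): the raw pixel rate at that resolution and frame rate is roughly 75 times the bandwidth cap, which gives a sense of how hard the video encoder has to work.

    # Back-of-the-envelope: raw pixel rate of the test-drive stream
    # versus its bandwidth cap. Resolution, frame rate, and cap are
    # from the summary above; 24-bit color is my assumption.
    width, height = 1366, 768   # advertised resolution
    bpp = 24                    # assumed 24-bit RGB color
    fps = 30                    # advertised frame rate
    cap_mbit = 10               # advertised bandwidth cap

    raw_mbit = width * height * bpp * fps / 1e6
    print(f"raw stream: {raw_mbit:.0f} Mbit/s")               # ~755 Mbit/s
    print(f"compression needed: ~{raw_mbit / cap_mbit:.0f}:1")  # ~76:1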
Re: (Score:2)
But Keanu can.
Re: (Score:3)
So we're going to run code on GPUs and let CPUs render the graphics?
Re: (Score:2)
It wouldn't surprise me at all. For some specialist workloads, GPUs are amazingly fast. But if you still need a GUI, why not use the built-in graphics processing from the CPU, to avoid any load on the GPU doing the real work?
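If you want that split on a workstation today, one common pattern (a sketch of standard CUDA practice, nothing GRID-specific) is to hide the compute GPU from the rest of the stack with CUDA_VISIBLE_DEVICES:

    import os

    # Restrict the compute stack to the second physical GPU so the
    # desktop/compositor keeps GPU 0 (or the CPU's integrated graphics)
    # to itself. This must be set before any CUDA library initializes.
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"

    # Any CUDA-backed library imported after this point (PyCUDA, CuPy,
    # etc.) will enumerate GPU 1 as its device 0 and never touch the
    # GPU that's rendering the GUI.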
Re: (Score:2)
I hope the future bottom-of-the-barrel Broadwell GPUs are as powerful as Haswell's Iris.
Re: (Score:2)
Yes it can: 320x200x8 at 35 fps.
OpenStack support? (Score:3)
https://wiki.openstack.org/wik... [openstack.org]
Nice, but expensive (Score:3, Informative)
This isn't particularly new. It's nice tech, but each ~$2000 K1 board supports 4 users. 4. The K2 board supports 2 'power users'. (ref: NVIDIA data sheet: http://www.nvidia.com/content/... [nvidia.com] )
If I cram 4 K1 boards in a server, I can now support 16 virtual desktops with 3D acceleration for an $8k delta over and above the other expenses of VDI.
Unless you ABSOLUTELY MUST have VDI for 3D workloads, I can't see how this makes sense.
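For what it's worth, running those numbers (taking the 4-users-per-board figure at face value):

    # Per-seat cost of the GPU boards alone, using the figures above.
    board_cost = 2000        # ~$2000 per K1 board
    boards = 4               # boards crammed into one server
    users_per_board = 4      # the contested datasheet figure

    delta = board_cost * boards                     # $8000 over base VDI
    per_seat = delta / (boards * users_per_board)
    print(f"${delta} delta, ${per_seat:.0f}/seat for 3D")  # $500/seat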
Re: (Score:2)
Re: (Score:3)
It's the old thin-client problem, reinvented with virtual machines.
Of course you can run 16 users off one physical box, but on average each of them gets 1/16th of its capabilities.
Sometimes that scales - e.g. small businesses virtualising their half-idle servers. Sometimes it doesn't - e.g. most thin clients when you want to get towards 3D or anything intensive.
But for power users? It almost always doesn't. If you need that kind of power, you have to spend ridiculous amounts of money.
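The difference really comes down to duty cycle. A toy model, with made-up utilization numbers purely for illustration:

    # Toy statistical-multiplexing model (illustrative numbers only).
    # Total expected demand above 1.0x of one machine means sustained
    # contention; well below 1.0x means consolidation is free money.
    def expected_load(users, duty_cycle):
        return users * duty_cycle

    office = expected_load(16, 0.05)  # email/Word users, mostly idle
    threed = expected_load(16, 0.90)  # sustained 3D / power users

    print(f"16 office users: {office:.2f}x of one box")  # 0.80x, fine
    print(f"16 3D users:     {threed:.2f}x of one box")  # 14.40x, thrashing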
Re: (Score:1)
This isn't particularly new. It's nice tech, but each ~$2000 K1 board supports 4 users. 4. The K2 board supports 2 'power users'. (ref: NVIDIA data sheet: http://www.nvidia.com/content/... [nvidia.com] )
If I cram 4 K1 boards in a server, I can now support 16 virtual desktops with 3D acceleration for an $8k delta over and above the other expenses of VDI.
Unless you ABSOLUTELY MUST have VDI for 3D workloads, I can't see how this makes sense.
Please read more than a single fucking PDF before speaking in absolutes.
http://www.nvidia.com/object/virtual-gpus.html
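For anyone following along, the point of that page is that seats per board depend on which vGPU profile you pick, not a fixed 4. A sketch, with profile figures as I recall them from the GRID K1/K2 docs (verify against the link above before quoting):

    # Seats per board under GRID vGPU profiles (figures from memory --
    # double-check NVIDIA's page before relying on them).
    profiles = {
        # name: (physical GPUs per board, max vGPUs per physical GPU)
        "K1 / K100":  (4, 8),  # 256 MB framebuffer, basic users
        "K1 / K140Q": (4, 4),  # 1 GB framebuffer
        "K2 / K240Q": (2, 4),  # 1 GB framebuffer
        "K2 / K260Q": (2, 2),  # 2 GB framebuffer, power users
    }
    for name, (gpus, per_gpu) in profiles.items():
        print(f"{name}: {gpus * per_gpu} seats/board")
    # K100 yields 32 seats per K1 board -- a long way from 4.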
Re: (Score:2)
Apparently I was mistaken.
I looked at this tech when it was actually new, around a year ago, and admittedly just pulled the datasheet today to double-check my recollection. I'm glad the limitations are less severe than I thought.
OTOH, NVIDIA really ought to fix its datasheets.
Better now? (and profanity-free to boot!)
Re: (Score:3)
There are two big use cases for desktop virtualization. The common one is to run a ton of desktops off very little hardware, on the theory that most people only read email and use MS Word all day anyway. Big cost savings.
The other is purely to have desktops centralized in a data center, so data-center admins can deal with them instead of needing (as many) on-site tech monkeys.
I worked at companies where it was the latter. The users still needed 16+ GB of RAM, dedicated powerful hardware, etc., but now the machines sat in the data center where the admins could deal with them.
Re: (Score:3)
Re: (Score:2)
Will there really be lag? You can try the same tech that's sold as a consumer product: game streaming to SteamOS. The Wii U does something similar. With a low-input-lag LCD monitor and a gigabit network, I'm sure the latency would be rather low.
Then you have actual performance advantages running on the server (except that a typical Xeon will run about 1 GHz slower, save for the really expensive ones). You can do something stupid like 384 GB of memory on the server, and then data is loaded from terabyte SSDs or a fast SAN instead of slower local storage.
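To put "rather low" in perspective, here's a rough latency budget for LAN streaming; every figure is an assumed ballpark, not a measurement:

    # Rough end-to-end latency budget for LAN desktop/game streaming.
    # All stage times are assumptions for illustration only.
    budget_ms = {
        "capture + hardware H.264 encode": 10,
        "gigabit LAN transit": 1,
        "decode": 5,
        "low-input-lag LCD display": 10,
    }
    total = sum(budget_ms.values())
    for stage, ms in budget_ms.items():
        print(f"{stage:34s} {ms:3d} ms")
    print(f"{'total':34s} {total:3d} ms")  # ~26 ms, under two 60 Hz frames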
Don't VMware and Microsoft already support this? (Score:1)
What's the difference between this and Microsoft RemoteFX? I'm pretty sure Citrix also supports hardware 3D virtualization.
Re: (Score:2)
Re: (Score:3)
RemoteFX blows. It only provides a Direct3D interface; there's no OpenGL support (which is what the majority of the scientific visualization community uses).
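If you want to check what a given VDI session actually exposes, a quick probe (assumes the standard glxinfo utility is installed in the guest):

    import subprocess

    # Probe the session's OpenGL stack. A renderer string mentioning
    # "llvmpipe" or "softpipe" means Mesa software rendering, i.e. no
    # usable 3D acceleration for visualization work.
    out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer" in line or "direct rendering" in line:
            print(line.strip())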
Bad Scaling Problem Though (Score:4, Insightful)
Re: (Score:1)
What version of VMware View? Doesn't the vSGA scaling depend on which 'profile' you use?
See here: http://www.nvidia.com/object/v... [nvidia.com]
That link references a Citrix-only idea, but its general distinctions between user types are apt. As of View 5.3 there's no longer a lock-in to NVIDIA products, though to my knowledge nobody else has made anything yet. Intel and AMD are on the list to produce something sometime.
So where... (Score:2)
Bring on cloud gaming! (Score:1)
Remote gaming! Bringing the likes of full-res Call of Duty to your pocket device ;)