Nvidia's RealityServer 3.0 Demonstrated
robotsrule writes "As we discussed last month, RealityServer 3.0 is Nvidia's attempt to bring photo-realistic 3D images to any Internet-connected device, including the likes of Android and iPhone. RealityServer 3.0 pushes the CPU-killing 3D rendering process to a high-powered, GPU-based back-end server farm built on Nvidia's Tesla or Quadro architectures. The resulting images are then streamed back to the client device in seconds; such images would normally take hours to compute even on a high-end unassisted workstation. Extreme Tech has up an article containing an interview with product managers from Nvidia and Mental Images, whose iray application is employed in a two-minute video demonstration of near-real-time ray-traced rendering." Once you get to the Extreme Tech site, going to the printable version will help to preserve sanity.
First post? Nope - article is a dupe. (Score:2)
You don't - the person who got first post is here [slashdot.org] - the article is a dupe from 2 days ago.
I know ... even the editors don't read the fine articles ...
Re: (Score:3, Funny)
NVidia make shit, their drivers are horrible.
Since I don't live in an area where lots of NVidia employees are driving around, I don't care too much about their driving skills :-)
Re: (Score:2)
NVidia make shit, their drivers are horrible.
That wouldn't be our problem, since they are going to do the rendering on their own hardware somewhere in Bespin, ainnit?
I think a smashing app for this would be some sort of World of Warcrack or similar game where they could have a combo of locally rendered and remotely served graphics. Some of the newer high-end mobiles have pretty strong GPUs, all things considered; perhaps use them to render characters and the fast/dynamically changing content, and serve backgrounds and foes from this reality server. There may be hundr
Hours and hours (Score:3, Funny)
such images would normally take hours to compute even on a high-end unassisted workstation
Now, they take hours to download over your GSM network.
Re: (Score:3, Insightful)
Re:Hours and hours (Score:5, Informative)
Whatever.
I used to do some raytracing stuff with POV under MS-DOS back in the day, on hardware far slower than the 6-year-old Palm Zire that I recently retired. Nowadays, the iPhone/droid/whatever is way faster.
Was it slow? Of course. But it was nowhere near "months." Long hours, or days -- yes. Not months. Nowhere near. Especially if I were targeting something the size of a modern mobile screen, instead of the fairly high-resolution stuff I was interested in back then.
[I already moderated this article, and posting will undo all of that. Oh, well -- that's the bane of the lack of the -1, Disagree moderation . . .]
Re: (Score:3, Interesting)
BTW it took Weta 4 hours per frame to render... likely not on a cellphone.
Re: (Score:3, Interesting)
Maybe if you are trying to render an MMO, a single render farm can do less work in total than all the clients rendering from their own POV.
Re: (Score:2)
The render farm still has to render all the clients from their own POV. The only thing that's less in this case is the GPU and RAM requirements on the clients. But, assuming H.264 at 1024kbps vs a few kbps to exchange only the players' position data, all of their bandwidth requirements just increased by a few hundred times.
I still say that ISP monthly caps and latency mean this won't be quite usable for games, but it could do wonders for a LAN game server.
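A back-of-the-envelope version of that bandwidth comparison in Python, using the figures assumed above (1024 kbps of H.264 versus a few kbps of position updates; both numbers are guesses, not measurements):

    video_kbps = 1024      # assumed H.264 stream for server-side rendering
    position_kbps = 4      # assumed payload for exchanging player positions only
    print(video_kbps / position_kbps)   # ~256x more bandwidth per client

Multiply that by every connected client and the farm's uplink, not its GPUs, becomes the bottleneck.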
Re: (Score:2)
If we are talking about ray tracing then the clients can share the path for rays from the light sources through reflections off static objects.
Re:Hours and hours (Score:4, Funny)
I'm thinking rendering a birds eye shot in LOTR would have taken a damn long time on a phone...
How come? Did LOTR feature birds with unusually complex irises? For, say, most eagles, a yellow disc, a black disc, and you're done. Takes milliseconds.
Granted, the rest of the bird might take a bit longer.
Re: (Score:2)
Takes milliseconds and doesn't look realistic in the slightest.
Re: (Score:1, Interesting)
You should have tried rendering something other than the simple POV-Ray sphere tutorials. I used to use POV-Ray in MS-DOS, and some of my more complex scenes (i.e. a model of the solar system with space stations and starships) took weeks to months to render on a 486DX2-66. Take one of those scenes, multiply the details and polygons by a factor of 100, then have it render at a minimum of 60 frames per second.
So yeah, you're off by quite a bit there.
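Putting rough numbers on how far off (the two-weeks-per-frame figure is an assumption pulled from the "weeks to months" range above, not a measurement):

    seconds_per_frame_486 = 14 * 24 * 3600   # assume ~2 weeks per frame back then
    target_seconds_per_frame = 1.0 / 60      # real-time at 60 fps
    complexity_factor = 100                  # "multiply the details by 100"
    speedup = seconds_per_frame_486 * complexity_factor / target_seconds_per_frame
    print(f"{speedup:.1e}")                  # on the order of 10^9 needed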
Re: (Score:3, Interesting)
Did your computer have an FPU? Your cellphone doesn't, so despite its 200+MHz(?) clock, you'll be lucky to get much past 10 MFLOP/s, especially since the library code may often miss the cache (it's pretty limited on ARM). Also, POV scenes frequently use parametric surfaces, rather than meshes, making calculations easier and much less memory intensive than the high-poly meshes used in the demo scenes.
So, maybe a month may be a bit long, but I don't really think that it'd be able to do much better than a week.
Re: (Score:2)
No.
I'm talking 386SX class hardware, here. Sure, I overclocked it from 33 to 40MHz, but it was still just a 386SX. With no memory cache. And no FPU. And a 16-bit bus. And 2 megabytes of RAM. (And a bunky DMA controller, but povray never seemed to care much about that.)
It was years after that before I got to bask in the glory of a Pentium-class machine.
(Why did you reply to me, anyway? It's just an anecdote. And like most other anecdotes that come from someone else's personal experience: No matter w
Re: (Score:3, Insightful)
A car model will fill about 4 GB of RAM while rendering. Does your phone have 4 GB of nice high-speed RAM? Nope? OK, you'll be swapping to slow memory. It takes a modern quad core with 8 GB of RAM, let's say, about 5 hours to render a 1200x1200 frame. Mobile screens are about a quarter of that in each dimension, so 5/16 of an hour = ~20 minutes. Now let's say a mobile phone nowadays is about 1/100th the speed of our quad-core 8 GB modern system. Even generously giving it only a 1/100th speed hit, which is probably 10x-100x off, you're looki
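Working that arithmetic through (every input below is the guess from the comment above, not a benchmark):

    desktop_hours = 5.0      # quad core, 8 GB RAM, 1200x1200 frame
    pixel_ratio = 1.0 / 16   # mobile screen ~1/4 the resolution on each axis
    slowdown = 100           # assumed phone-vs-desktop speed ratio (generous)
    print(desktop_hours * pixel_ratio * slowdown)   # ~31 hours per frame, before any swapping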
Re: (Score:2)
And was POV running a global illumination algorithm rather than just vanilla ray tracing? Because the difference in complexity between the two approaches would mean days or months on that cellphone, but it allows for the dynamic lighting changes shown in the video. The last time I saw somebody doing a similar quality of rendering to the demo images, they were using Radiance on a relatively modern workstation. Each frame took several hours; on a phone (if you actually had the memory available) the scaling would b
Re: (Score:2)
Bear in mind that this is not ray tracing. NVidia's back-end server is obviously using a path tracing algorithm, based on the videos; the images start "grainy" and then clear up as they are streamed. Path tracing works like ray tracing with a huge sampling rate, shooting perhaps 30 rays per pixel. Moreover, whereas ray tracers only have to compute rays recursively when they strike a reflective/refractive surface, path tracers always recurse, usually around 5-10 times, for each of the 30 rays per pixel. (T
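Rough ray-count arithmetic behind that, using the figures in the comment above (whether iray's sampling really looks like this is an assumption):

    samples_per_pixel = 30
    bounces_low, bounces_high = 5, 10
    print(samples_per_pixel * bounces_low, samples_per_pixel * bounces_high)
    # ~150-300 rays per pixel for a path tracer, versus roughly one primary ray
    # (recursing only at reflective/refractive hits) for a classic ray tracer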
Re: (Score:2)
The usefulness of raytracing an image on your cellphone... 0. Pretty small difference.
Re: (Score:3, Insightful)
Re: (Score:3, Interesting)
Bandwidth issues will continue even into 4G and otherwise; it has uses, just not mobile. I agree there may be some PC use; this general idea was not unexpected. With graphics support being added to mainstream virtualization, this is somewhat of a natural next step.
Re: (Score:2)
Whatever happened to Hypercosm? They weren't a cloud distributed-processing engine... they were high-quality rendering via the web, done efficiently. It seems getting Hypercosm ported to today's mobile devices would be more productive now than streaming pre-rendered images over the tight pipe.
Re: (Score:2)
Re: (Score:2)
Re:Hours and hours (Score:5, Informative)
Better demo of the capabilities here:
http://www.youtube.com/watch?v=atcIv1K_gVI&feature=related [youtube.com]
Re: (Score:1)
Re: (Score:2)
How long until Barswf or Pyrit is ported onto that cloud? :D
Yay! (Score:2)
Alright, now I can play Doom 3 on my Razr cell phone! 0.2 FPS here I come!
Re:Yay! (Score:4, Funny)
Don't forget the six minute ping time!
Didn't we see this slashvertisement before (Score:5, Informative)
Re: (Score:2)
Sure, until one of the posts decides to sleep with his buddy's wife...
Better to just edit it on a computer (Score:1)
Re: (Score:2)
In real time, off the top of my head:
-Allow the client to see their project from any viewport.
-Walkthroughs.
-Scripted back-end for web apps. There's already a program running on RealityServer 2.0 which lets you build a room of your house and then place furniture and see a rendering of what your house would look like. Then it lets you buy it. Imagine Ikea's catalogue letting you not just shop but actually visualize your house, and then just order and have it ready on a cart for pickup when you arrive. Just
Re: (Score:2)
The only use would probably be for salesmen and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time. I just don't get it; could someone enlighten me?
They can only do that ahead of time if they're the ones making the aesthetic decisions. If I wanted to show the director of a movie an environment and get his feedback, I could make the changes right there for him to see and get the OK right away.
Re: (Score:3, Interesting)
Re: (Score:2)
But you wouldn't really need even SD, much less HD for that, would you?
Yes, you would.
From what I've seen they use pretty crude storyboards or very basic computer animation just to get the feel for it, and then after everything is approved go full res.
No, this is not true. This is why they hire concept illustrators. In fact, most of the concept paintings end up being a lot higher res than the film itself. As technology progresses, the quality of the illustrations and the pre-viz improves as well. Trust me, the more that can be done to do things like speed up rendering, the more of it you'll see before it hits post.
Re: (Score:2)
Maybe they could use an augmented reality application that would allow the user to take a video or photograph of a scene and add virtual geometry or textures (add tables, change curtain/carpet/sofa textures). Software to do this already exists, but it requires an artist to mark out the borders of the texture. The new scene could be rendered and sent back down to the device.
Bad name, no buzz? (Score:1)
Re: (Score:3, Insightful)
This is Old Technology (Score:5, Funny)
Re: (Score:2)
On a side note, I had the idea last night of doing away with curtains completely and having all of my windows coated with e-Ink-style technology to make them as opaque or translucent as I required. I wonder if anybody does th
One question: Why? (Score:5, Insightful)
Summit, in TFA, goes on at different points about a car application, i.e. a system that one might use to preview and/or order new cars. Pick your wheels, your paint, your trim, your seats, and get a few views of the thing in short order*.
All I can think is that if it were really so important for Ford to give you a raytraced view of the car you're ordering, the options are so limited that all of them could easily be pre-rendered and sent all together. How big are a few dozen JPEGs, anyway?
Even if a few dozen JPEGs isn't enough: Don't we do this already with car manufacturer websites, using little more than bog-standard HTML and a whole bunch of prerendered images? In what way would having this stuff be rendered in real-time be any more advantageous than doing it in advance?
Do we really need some manner of fancy client-server process, with some badass cloud architecture behind it, when at the end of the day we're only going to be shown artifact-filled progressive-JPEG still frames with a finite number of possibilities?
Everyone, please, go look at the demo video. Neat stuff, I guess, but it's boring. Office with blinds open; same office, blinds partly open. Then, closed. Office at night. Different angle. Woo. It's simple math to figure out how many options there are, and it's just as simple to see that it's easier, cheaper, and better to just go ahead and render ALL of them in advance and be done with it and just serve out static images from then on out.
If I'm really missing the point here (and I hope I am), would someone please enlighten me as to how this might actually, you know, solve a problem?
*: Just like a lot of auto manufacturers' websites already do TODAY, using only HTML, static images, and a sprinkling of JavaScript or (less often) Flash.
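To put a made-up but generous number on "how big are a few dozen JPEGs" (the option counts and file size below are hypothetical):

    paints, wheels, trims, interiors, angles = 12, 4, 3, 5, 8
    images = paints * wheels * trims * interiors * angles
    size_mb = images * 150 / 1024            # assume ~150 KB per JPEG
    print(images, round(size_mb), "MB")      # 5760 images, ~840 MB: render once, serve forever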
Re: (Score:1)
Re: (Score:2, Insightful)
dude, think of the porno
Re: (Score:2)
If they start increasing the number of options, a la the Scion brand, then that quickly becomes impractical or impossible. Far easier to render and cache temporarily than to store all possible renders.
Re: (Score:2)
It's not strictly binary.
For instance: In what ways does the color of the paint influence the design of the wheels? Oh, right: It doesn't. How about the interior? Right, sure. A wing? Woo. A trim package? Oh, my. The wheels are still the same.
It's not a pizza. It's a car.
There's just not that many variations on a vehicle which have any impact on more than a couple of parts. But, if you think that it is unachievable to prerender these, please go look at Scion's current website, build a car, and write back. (Note: I haven't been there in years, myself, but I'm confident enough in my theory that I'm willing to let you prove yourself wrong.)
Re: (Score:2)
Um.
The wheels don't reflect the paint directly. And cars, as a rule, are displayed on a neutral background.
Next?
Re: (Score:2)
They don't reflect paint but they do reflect the environment. If you only want to show off your car in a white world... but what if you want to give your customers options for the background?
Re: (Score:2)
Yes, you could upload a 3D model of your house's garage and see what your car looks like in there!
Or a 3D model of your SO and see what she/he looks like inside the car!
Or 3D models of your children and see how tall or fat they can become before you have to switch to a roomier model!
Just think of the possibilities!
Re: (Score:2)
Even if one has close to a million dependent options, one could still save money by pre-rendering; it's not like you have to manually enter the different configurations.
No, but you do have to render off every variation. Take something like a body kit. What if they want a spoiler? Those are generally painted the same as the car color. Now you have to render off the spoiler 10 times from 60 different angles. What if they want different side-view mirrors? Again, another 60+ angles. Now what if they want the side mirrors but a different interior? The side mirrors will now have a different silhouette and will have to be composited separately over the interior, but whe
Re: (Score:2)
There's just not that many variations on a vehicle which have any impact on more than a couple of parts. But, if you think that it is unachievable to prerender these, please go look at Scion's current website, build a car, and write back. (Note: I haven't been there in years, myself, but I'm confident enough in my theory that I'm willing to let you prove yourself wrong.)
... Let's see. Oh, I need to install a plugin to build my car... fine... I wonder what it's for... Oh hey, look at that! It's a little crappy real-time 3D renderer! Hahaha. There you go. Your very first example... uses a client-side renderer.
It works, but the reflections and lighting are all baked onto the car. Which is to say it looks worse than pretty much any video game made in the last 10 years... but it does employ multi-sample AA.
This site does remind me of a few things though: if you want a 360 of the ca
Re: (Score:2, Insightful)
OK, one more reason: 3D work at home. I do that (as an amateur) and sometimes even my pretty fast machine takes hours at a time to render some scenes. I could just as well send the file to RealityServer 3.0 and then render my scenes faster via a web browser, without having to wait hours and hours. That would be great for several reasons:
1. While I wait for my machine to render a scene, I do other things, and more often than not I ask myself what the hell was that thing that I was trying to accom
Re: (Score:2)
Speaking from experience... it's currently a HUGE PITA.
Sure, if you have just a side view and a front view it's easy. Render out each wheel separately. But then what if you want a 360 view of the car? Oops. No dice. And what if you want the car color to be reflected in the side-view mirrors? All the possible combinations? Well, if you give the user complete freedom, that means there is an infinite number of renderings you have to do. What if you want to see the car at night? Now you have to doub
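For what it's worth, here is the blow-up being described, with illustrative numbers of my own (the 5-degree spin steps and option counts are not from any real configurator):

    angles = 360 // 5                     # full 360 spin at 5-degree steps
    colors, mirrors, spoilers = 12, 3, 2  # paint-matched parts can't be composited separately
    lighting = 2                          # day and night
    print(angles * colors * mirrors * spoilers * lighting)   # 10368 full renders for just these options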
Re: (Score:1, Interesting)
Imagine Street View rendered in the direction you are holding your phone, from your position. With all the goodies from that 3D map that someone was building a while back (and which could well be ongoing), plus a live application of the algorithm from Canoma and similar applications, you could have a pretty interesting "virtual" world. Another benefit would be that while using the application, you could be aiding the mapping back-end with live GPS to refine the map and the 3D model on top of
Re: (Score:1)
"Imagine Street View rendered in the direction you are holding your phone, from your position."
Congratulations, you've invented AR, which has been an app on my phone for about a year now. It's called using the input from the goddamn camera stuck on the front.
Too specific (Score:3, Insightful)
The uses are probably not yet understood. This is cool technology and some of the tens of millions of developers will find good use for it. The interesting bit is that you gain access to a huge render farm without buying a lot of servers. If your load is uneven, this service will save you a lot of money (and power too).
Anyhow, off the top of my head: cars, architecture, city planning, visualizing climate change, next-generation GPS navigation devices.
Re: City planning/satnav (Score:2)
Where would they get all the high-resolution, fully textured, up-to-date city-wide 3D models from?
Because unless they have those models this is moot... you can do a far better job using static images like Google Street View does (and which an iPhone is perfectly capable of rendering in real time).
Concept is kinda cool, but... (Score:2)
The concept is kinda cool but their demo could have been easily faked. It isn't convincing until I can wander around the room on demand while tweaking the environment.
As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.
(Useless in the remote-access sense, not necessarily useless in a studio environment for architecture or vehicle modelling, although those needs can be met with a rendered video sequence anyway.)
Re: (Score:2)
As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.
Maybe the rendering cost scales non-linearly.
Re: (Score:2)
I know Slashdot is keen on saying "The Cloud" is a buzzword and meaningless bullshit. But that's because Slashdot is evidently completely clueless about what cloud computing really means. What it means in this case is that you pay for the processing you need. You don't buy a $15k server. You pay Amazon or Google or some other cloud provider for render time. If you need 3 seconds of rendering then they charge you 3c for the trouble.
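A crude break-even sketch with entirely hypothetical prices (the $15k box from the grandparent, and an invented per-second render rate):

    server_cost = 15_000.00        # buying the Tesla/Quadro box outright
    cloud_rate_per_s = 0.01        # made-up pay-per-use render pricing
    break_even_days = server_cost / cloud_rate_per_s / 86_400
    print(round(break_even_days))  # ~17 days of continuous rendering before buying wins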
Re: (Score:1)
How do VHS wood screws differ from Betamax wood screws?
Good for VR (Score:4, Interesting)
This is a great advancement for high end virtual reality systems, but the current state of "rendering in the cloud" sounds like either a solution looking for a problem or the wrong application of the technology.
On a future Internet with sub 30 ms latency, this would ROCK. [You could have low-powered wearable augmented reality devices, "Rainbows End" style gaming, and maybe even the engine behind a Snow Crash style metaverse that remote users can log in to].
NVidia is NOT doing itself a favor with the lame empty office with boring blinds demo. They'd better come up with something sexier quick if they want to sell this (and I don't mean the remote avatar someone posted a link to).
This reminds me of the "thin client" hype circa 1999. "Thin clients" exist now in the form of AJAX enabled web browsers, Netbooks, phones etc, but that technology took about a decade to come to fruition and found a different (and more limited) niche than all the hype a decade ago [they were supposed to replace worker's PCs for word processing, spreadsheets, etc].
Re: (Score:1)
On a future Internet with sub 30 ms latency, this would ROCK.
Considering the maximum distance attached to even theoretically reaching that 30ms threshold...there'd have to be a lot of these farms ;-)
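The physics behind that smiley: a 30 ms round trip leaves 15 ms each way, and light in fiber covers roughly 200 km per millisecond, before a single millisecond is spent on routing, queuing, rendering, or encoding:

    rtt_ms = 30
    one_way_ms = rtt_ms / 2
    fiber_km_per_ms = 200               # ~2/3 the speed of light in glass
    print(one_way_ms * fiber_km_per_ms) # 3000 km ceiling with zero processing budgeted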
Hello, New Urbanism! (Score:2)
This is what will finally drive people back out of the suburbs and into the cities -- the quest for ever-shorter ping times. "Dood, by the time your packets climb into their minivan and make their way through a hundred microseconds of suburban traffic, my packets have walked 8800ns to the hub and pwned you ten times over!"
Re: (Score:2)
[they were supposed to replace worker's PCs for word processing, spreadsheets, etc].
Um they have for a large portion of the working populace. The last two companies I've worked at use Google docs.
LS
Resident Evil (Score:2)
Still... (Score:1)
Still no cure for cancer :(
You haven't thought your cunning plan through. (Score:2)
So, you're going to prepare high-quality images in response to requests from mobile devices. Your "cloud", a vast farm of massively powerful rendering engines, will prepare these images thousands of times more quickly than your iPhone's pathetic processor, and stream them back to your display. Neato.
Now, since this works so well, millions of mobile users will flock to the service. Thousands at a time will be requesting images. Fortunately, that render farm is still thousands of times faster than a mobil
Re: (Score:2)
They could be caching the rendered images. "Location: x, looking in direction: y" does not have to be rendered more than once.
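A minimal sketch of that kind of cache, assuming position and heading get quantized so nearby requests map to the same key (render_view below is a hypothetical stand-in for the actual back-end render call):

    from functools import lru_cache

    def render_view(x_m: int, y_m: int, heading_deg: int) -> bytes:
        # stand-in for the expensive server-side render; returns image bytes
        return f"frame at ({x_m},{y_m}) facing {heading_deg}".encode()

    @lru_cache(maxsize=100_000)
    def cached_view(x_m: int, y_m: int, heading_deg: int) -> bytes:
        return render_view(x_m, y_m, heading_deg)

    def lookup(x: float, y: float, heading: float) -> bytes:
        # snap to whole metres and degrees so repeat requests reuse the first render
        return cached_view(round(x), round(y), round(heading) % 360)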
Ohhh, that's what it's for (Score:2)
I, like many of you here, was wondering what the hell this could possibly be useful for, up until I viewed the video.
The answer, clearly, is porn.
It all makes sense now!
Impressive, but... (Score:1)
Progressive refinement, not progressive JPEGs (Score:1)
What you're seeing is progressive refinement, a rendering technique used in ray tracing that starts to show an image immediately and continuously adds detail (rather than rendering the image in full detail from the start). The light and dark splotches you initially see are a typical artifact of low-detail radiosity rendering.
More information here [google.com]
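A toy sketch of progressive refinement, assuming a render_pass() that traces one sample per pixel (replaced here by random noise just to show the accumulation); each pass is averaged into the running estimate, so the streamed preview starts grainy and converges:

    import numpy as np

    rng = np.random.default_rng(0)

    def render_pass(h, w):
        # stand-in for one 1-sample-per-pixel ray-traced pass (hypothetical)
        return rng.random((h, w, 3))

    h, w = 240, 320
    accum = np.zeros((h, w, 3))
    for n in range(1, 65):
        accum += render_pass(h, w)
        preview = accum / n   # noisy at n=1, steadily cleaner as n grows
        # stream `preview` (e.g. as a JPEG) to the client here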
Suggested this years ago (Score:2)
FWIW, I suggested rendering and compositing multiple video streams into a single one and then downloading it to a local mobile terminal a number of years ago. I guess you just wait until you get good enough hardware, and when you hit the sweet spot everything just materializes.