Nvidia's RealityServer 3.0 Demonstrated

robotsrule writes "As we discussed last month, RealityServer 3.0 is Nvidia's attempt to bring photo-realistic 3D images to any Internet-connected device, including the likes of Android and iPhone. RealityServer 3.0 pushes the CPU-killing 3D rendering process to a high-powered, GPU-based back-end server farm built on Nvidia's Tesla or Quadro architectures. The resulting images are then streamed back to the client device in seconds; such images would normally take hours to compute even on a high-end unassisted workstation. Extreme Tech has an article containing an interview with product managers from Nvidia and Mental Images, whose iray application is employed in a two-minute video demonstration of near-real-time ray-traced rendering." Once you get to the Extreme Tech site, going to the printable version will help to preserve sanity.
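As a rough sketch of the round trip described above (the client ships scene and camera parameters to a GPU farm, and a finished frame streams back to the device), consider the Python snippet below. The endpoint URL, the JSON fields, and the JPEG response are invented for illustration; this is not RealityServer's actual API.

```python
import json
import urllib.request

# Hypothetical endpoint; RealityServer's real protocol is not described in the article.
RENDER_URL = "https://render-farm.example.com/render"

def request_frame(scene_id, camera, resolution=(480, 320)):
    """Send scene/camera parameters to a remote GPU render farm and
    return the finished frame as JPEG bytes (illustrative only)."""
    payload = json.dumps({
        "scene": scene_id,          # scene assumed to be hosted server-side already
        "camera": camera,           # e.g. {"pos": [0, 1.6, 5], "look_at": [0, 1, 0]}
        "width": resolution[0],
        "height": resolution[1],
    }).encode("utf-8")
    req = urllib.request.Request(
        RENDER_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()          # JPEG bytes, ready to display on the device

# Usage sketch: the phone only decodes a JPEG; all the ray tracing happens server-side.
# jpeg = request_frame("office_demo", {"pos": [0, 1.6, 5], "look_at": [0, 1, 0]})
# open("frame.jpg", "wb").write(jpeg)
```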
  • by Anonymous Coward on Sunday November 15, 2009 @11:41PM (#30112016)

    such images would normally take hours to compute even on a high-end unassisted workstation

    Now, they take hours to download over your GSM network.

    • Re: (Score:3, Insightful)

      by Idiomatick ( 976696 )
Seconds to minutes over GSM. The time to raytrace an image on your cellphone... several months... Pretty big difference.
      • Re:Hours and hours (Score:5, Informative)

        by adolf ( 21054 ) <flodadolf@gmail.com> on Monday November 16, 2009 @12:23AM (#30112218) Journal

        Whatever.

        I used to do some raytracing stuff with POV under MS-DOS back in the day, on hardware far slower than the 6-year-old Palm Zire that I recently retired. Nowadays, the iPhone/droid/whatever is way faster.

        Was it slow? Of course. But it was nowhere near "months." Long hours, or days -- yes. Not months. Nowhere near. Especially if I were targeting something the size of a modern mobile screen, instead of the fairly high-resolution stuff I was interested in back then.

        [I already moderated this article, and posting will undo all of that. Oh, well -- that's the bane of the lack of the -1, Disagree moderation . . .]

        • Re: (Score:3, Interesting)

          by Idiomatick ( 976696 )
Fine... but whatever you did back in MS-DOS likely isn't a fraction as complex. And with it being done on someone else's servers there is no need to hold back on complexity... I'm thinking rendering a bird's-eye shot in LOTR would have taken a damn long time on a phone...

BTW it took Weta 4 hours per frame to render... likely not on a cellphone.
          • Re: (Score:3, Interesting)

            Maybe if you are trying to render an MMO, a single render farm can do less work in total than all the clients rendering from their own POV.

            • by Yvan256 ( 722131 )

The render farm still has to render all the clients from their own POV. The only thing that's less in this case is the GPU and RAM requirements on the clients. But, assuming H.264 at 1024 kbps vs a few kbps to exchange only the players' position data, all of their bandwidth requirements just increased by a few hundred times.

I still say that ISP monthly caps and latency will keep this from being really usable for games, but it could do wonders for a LAN game server.
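The "few hundred times" figure above is easy to check with the commenter's own assumed numbers (a 1024 kbps H.264 stream versus a few kbps of position updates); the sketch below just does the division.

```python
# Rough bandwidth comparison, using the parent's assumed numbers (not measurements).
video_stream_kbps = 1024   # H.264 stream of the rendered view
position_updates_kbps = 4  # "a few kbps" of player position data

ratio = video_stream_kbps / position_updates_kbps
print(f"Streaming rendered video uses ~{ratio:.0f}x the bandwidth")  # ~256x
```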

              • If we are talking about ray tracing then the clients can share the path for rays from the light sources through reflections off static objects.

          • by Fred_A ( 10934 ) <fred@ f r e dshome.org> on Monday November 16, 2009 @07:21AM (#30114056) Homepage

            I'm thinking rendering a birds eye shot in LOTR would have taken a damn long time on a phone...

How come? Did LOTR feature birds with unusually complex irises? For, say, most eagles, a yellow disc, a black disc and you're done. Takes milliseconds.

            Granted, the rest of the bird might take a bit longer.

        • Re: (Score:1, Interesting)

          by Anonymous Coward

You should have tried rendering something other than the simple POV-Ray sphere tutorials. I used to use POV-Ray in MS-DOS, and some of my more complex scenes (e.g. a model of the solar system with space stations and starships) took weeks to months to render on a 486DX2-66. Take one of those scenes, multiply the details and polygons by a factor of 100, then have it render at a minimum of 60 frames per second.

          So yeah, you're off by quite a bit there.

        • Re: (Score:3, Interesting)

          by Artraze ( 600366 )

Did your computer have an FPU? Your cellphone doesn't, so despite its 200+ MHz(?) clock, you'll be lucky to get much past 10 MFLOP/s, especially since the library code may often miss the cache (it's pretty limited on ARM). Also, POV scenes frequently use parametric surfaces rather than meshes, making calculations easier and much less memory-intensive than the high-poly meshes used in the demo scenes.

So, maybe a month is a bit long, but I don't really think that it'd be able to do much better than a wee

          • by adolf ( 21054 )

            No.

I'm talking 386SX-class hardware here. Sure, I overclocked it from 33 to 40 MHz, but it was still just a 386SX. With no memory cache. And no FPU. And a 16-bit bus. And 2 megabytes of RAM. (And a bunky DMA controller, but povray never seemed to care much about that.)

            It was years after that before I got to bask in the glory of a Pentium-class machine.

            (Why did you reply to me, anyway? It's just an anecdote. And like most other anecdotes that come from someone else's personal experience: No matter w

        • Re: (Score:3, Insightful)

A car model will fill about 4 GB of RAM while rendering. Does your phone have 4 GB of nice high-speed RAM? Nope? OK, you'll be swapping to slow memory. It takes a modern quad-core with 8 GB of RAM, let's say, about 5 hours to render a 1200x1200 image. Mobile screens are about a quarter of that in each dimension, so 5/16 of the time = ~20 minutes. Now let's say a mobile phone nowadays is about 1/100th the speed of our quad-core 8 GB modern system. Even generously giving it only a 1/100th speed hit, which is probably 10x-100x off, you're looki
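Spelling out the parent's back-of-the-envelope estimate (all of the figures are the commenter's guesses, not benchmarks): a quarter of the resolution in each dimension means one sixteenth of the pixels, and the assumed 100x slowdown then pushes the phone back into the tens of hours.

```python
# Back-of-the-envelope render-time estimate, using the parent's assumed numbers.
workstation_hours = 5          # quad-core, 8 GB RAM, 1200x1200 frame
pixel_fraction = (1 / 4) ** 2  # ~1/4 the resolution per axis -> 1/16 the pixels
phone_slowdown = 100           # assumed: phone is ~1/100th the speed of the workstation

phone_hours = workstation_hours * pixel_fraction * phone_slowdown
print(f"~{workstation_hours * pixel_fraction * 60:.0f} min on the workstation at mobile resolution")
print(f"~{phone_hours:.0f} hours on the phone")  # roughly 31 hours
```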

And was POV running a global illumination algorithm rather than just vanilla ray tracing? Because the difference in complexity between the two approaches would mean days or months on that cellphone, but it allows for the dynamic lighting changes shown in the video. The last time I saw somebody doing a similar quality of rendering to the demo images, they were using Radiance on a relatively modern workstation. Each frame took several hours; on a phone (if you actually had the memory available) the scaling would b

Bear in mind that this is not plain raytracing. Nvidia's back-end server is obviously using a path-tracing algorithm, judging by the videos; the images start "grainy" and then clear up as they are streamed. Path tracing works like ray tracing with a huge sampling rate, shooting perhaps 30 rays per pixel. Moreover, whereas ray tracers only have to compute rays recursively when they strike a reflective/refractive surface, path tracers always recurse, usually around 5-10 times, for each of the 30 rays per pixel. (T
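A sketch of the distinction the parent draws: a classic Whitted-style ray tracer only spawns secondary rays at reflective or refractive hits, while a path tracer keeps bouncing every sample up to a fixed depth and averages many samples per pixel (hence the initial grain). The scene and material functions below are stand-in stubs, and the 30-sample / 5-bounce figures are just the parent's ballpark numbers.

```python
SAMPLES_PER_PIXEL = 30   # parent's ballpark figure
MAX_BOUNCES = 5          # parent's ballpark figure

def intersect(ray):
    """Stub: return (hit_point, material) or None; a real tracer tests scene geometry here."""
    return None

def shade_direct(hit, material):
    """Stub: direct lighting at the hit point."""
    return 0.0

def random_bounce(ray, hit):
    """Stub: a new ray scattered from the hit point."""
    return ray

def whitted_trace(ray, depth=0):
    """Classic ray tracing: recurse only when the surface is mirror-like or glass-like."""
    hit = intersect(ray)
    if hit is None or depth >= MAX_BOUNCES:
        return 0.0
    point, material = hit        # material assumed to be a dict in this sketch
    color = shade_direct(hit, material)
    if material.get("specular"):                  # only specular hits spawn more rays
        color += whitted_trace(random_bounce(ray, hit), depth + 1)
    return color

def path_trace_pixel(make_camera_ray):
    """Path tracing: every sample keeps bouncing (diffuse or not) up to MAX_BOUNCES,
    and the pixel value is the average of many such samples -- hence the initial grain."""
    total = 0.0
    for _ in range(SAMPLES_PER_PIXEL):
        ray, color, depth = make_camera_ray(), 0.0, 0
        while depth < MAX_BOUNCES:
            hit = intersect(ray)
            if hit is None:
                break
            color += shade_direct(hit, hit[1])
            ray = random_bounce(ray, hit)         # always continue the path
            depth += 1
        total += color
    return total / SAMPLES_PER_PIXEL
```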

The usefulness of raytracing an image on your cellphone... 0. Pretty small difference.

        • Re: (Score:3, Insightful)

          by Idiomatick ( 976696 )
No market for super-realistic graphics on a phone? (Mind you, I do think computers will use this a million times more...)
          • Re: (Score:3, Interesting)

            by poetmatt ( 793785 )

Bandwidth issues will continue even into 4G and otherwise; it has uses, just not mobile. I agree there may be some PC use - this general idea was not unexpected. With graphics support being added to mainstream virtualization, this is somewhat of a next step.

      • by Matheus ( 586080 )

Whatever happened to Hypercosm? They weren't a cloud-distributed processing engine... they were high-quality rendering via the web, done efficiently. It seems getting Hypercosm ported to today's mobile devices would be more productive now than streaming pre-rendered images over the tight pipe.

    • Re:Hours and hours (Score:5, Informative)

      by Romancer ( 19668 ) <`moc.roodshtaed' `ta' `recnamor'> on Monday November 16, 2009 @12:36AM (#30112278) Journal

      Better demo of the capabilities here:

      http://www.youtube.com/watch?v=atcIv1K_gVI&feature=related [youtube.com]

      • From a marketing standpoint this is indeed a much better demonstration of the technology, especially considering nVidia's target audience of nerds.
    • How long until Barswf or Pyrit is ported onto that cloud? :D

  • Alright, now I can play Doom 3 on my Razr cell phone! 0.2 FPS here I come!

  • ... like two days ago [slashdot.org] ...
I'd rather manage my scenes on my own computer where I have a complete interface with the work I've done. If they had a service where I could upload my scenes and have them rendered for me quickly I'd be happy... but they have to do this real-time stuff with minimal ability to edit and experiment with your scene. The only use would probably be for salesmen and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time. I just don't get it, could someone enlighten me?
    • In real time off the top of my head:

-Allow the client to see their project from any viewport.
-Walk-throughs.
-Scripted back-ends for web apps. There's already a program running on RealityServer 2.0 which lets you build a room of your house, place furniture, and see a rendering of what your house would look like. Then it lets you buy it. Imagine Ikea's catalogue letting you not just shop but actually visualize your house, then just order and have it ready on a cart for pickup when you arrive. Just

The only use would probably be for salesmen and designers who want to show their work in different lighting to a potential customer... but even then they could render that ahead of time. I just don't get it, could someone enlighten me?

      They can only do that ahead of time if they're the ones making the aesthetic decisions. If I wanted to show the director of a movie an environment and get his feedback, I could make the changes right there for him to see and get the OK right away.

      • Re: (Score:3, Interesting)

        Comment removed based on user account deletion
        • But you wouldn't really need even SD, much less HD for that, would you?

          Yes, you would.

From what I've seen they use pretty crude storyboards or very basic computer animation just to get the feel for it, and then after everything is approved go full-res.

          No, this is not true. This is why they hire concept illustrators. In fact, most of the concept paintings end up being a lot higher res than the film itself. As technology progresses, the quality of the illustrations and the pre-viz improves as well. Trust me, the more that can be done to do things like speed up rendering, the more of it you'll see before it hits post.

        • by mikael ( 484 )

Maybe they could use an augmented-reality application that lets the user take a video or photograph of a scene and add virtual geometry or textures (add tables, change curtain/carpet/sofa textures). Software to do this already exists, but it requires an artist to mark out the borders of the texture. The new scene could be rendered and sent back down to the device.

  • they should have called it CLOUD REALITY!
    • Re: (Score:3, Insightful)

      by aicrules ( 819392 )
It's called RealityServer 3.0. That has the buzzwords covered: a version number, "Server" in the name, and the ultimate buzzword, "Reality".
  • by webbiedave ( 1631473 ) on Monday November 16, 2009 @12:40AM (#30112302)
I got some reality served to my phone last week in the form of a break-up text from my girlfriend. It took four months to render.
    • Pah, that's nothing. I have an app which tells me what the weather is outside! It even picks up where I am and tells me the local weather! I need never leave the house again! Unfortunately, there is no Vitamin D producing app, so my nails are fairly brittle. I might sue Apple over this.

On a side note, I had the idea last night of doing away with curtains completely and having all of my windows coated with e-Ink-style technology to make them as opaque or translucent as I required. I wonder if anybody does th
  • One question: Why? (Score:5, Insightful)

    by adolf ( 21054 ) <flodadolf@gmail.com> on Monday November 16, 2009 @12:47AM (#30112332) Journal

Summit, in TFA, goes on at different points about a car application -- i.e., a system that one might use to preview and/or order new cars. Pick your wheels, your paint, your trim, your seats, and get a few views of the thing in short order*.

All I can think is that if it were really so important for Ford to give you a raytraced view of the car you're ordering, the options are so limited that all of them could easily be pre-rendered and sent all together. How big are a few dozen JPEGs, anyway?

    Even if a few dozen JPEGs isn't enough: Don't we do this already with car manufacturer websites, using little more than bog-standard HTML and a whole bunch of prerendered images? In what way would having this stuff be rendered in real-time be any more advantageous than doing it in advance?

Do we really need some manner of fancy client-server process, with some badass cloud architecture behind it, when at the end of the day, we're only going to be shown artifact-filled progressive-JPEG still frames with a finite number of possibilities?

    Everyone, please, go look at the demo video. Neat stuff, I guess, but it's boring. Office with blinds open; same office, blinds partly open. Then, closed. Office at night. Different angle. Woo. It's simple math to figure out how many options there are, and it's just as simple to see that it's easier, cheaper, and better to just go ahead and render ALL of them in advance and be done with it and just serve out static images from then on out.

    If I'm really missing the point here (and I hope I am), would someone please enlighten me as to how this might actually, you know, solve a problem?

*: Just like a lot of auto manufacturers' websites already do TODAY, using only HTML, static images, and a sprinkling of javascript or (less often) flash.

The idea may be to farm out computing power, allowing customers to avoid the upgrade climb and graphics companies to avoid building so many high-end graphics processing units, cutting down on electronic waste, cost of manufacturing, etc. An interesting, though not yet realized, example of this could be the OnLive console. Customers purchase what amounts to a modem and play their games via a server farm which computes physics and graphics. By doing this, customers can avoid downloading large batches of graph
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      dude, think of the porno

    • by Anpheus ( 908711 )

      If they start increasing the number of options, a la the Scion brand, then that quickly becomes impractical or impossible. Far easier to render and cache temporarily than to store all possible renders.

      • by adolf ( 21054 )

        It's not strictly binary.

        For instance: In what ways does the color of the paint influence the design of the wheels? Oh, right: It doesn't. How about the interior? Right, sure. A wing? Woo. A trim package? Oh, my. The wheels are still the same.

        It's not a pizza. It's a car.

There's just not that many variations on a vehicle which have any impact on more than a couple of parts. But, if you think that it is unachievable to prerender these, please go look at Scion's current website, build a car, and write back. (Note: I haven't been there in years, myself, but I'm confident enough in my theory that I'm willing to let you prove yourself wrong.)
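The disagreement above boils down to whether you must prerender every combination or only every independent option. A toy calculation (with invented option counts) shows how far apart those two numbers are, which is why the per-part argument and the combinatorial-explosion argument can both sound right:

```python
from math import prod

# Invented option counts, purely illustrative.
options = {"paint": 12, "wheels": 5, "interior": 4, "trim": 3, "spoiler": 2}

every_combination = prod(options.values())   # render each full configuration
independent_layers = sum(options.values())   # render each part once and composite

print(f"Prerender every combination: {every_combination} images per view angle")    # 1440
print(f"Prerender independent layers: {independent_layers} images per view angle")  # 26
```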

There's just not that many variations on a vehicle which have any impact on more than a couple of parts. But, if you think that it is unachievable to prerender these, please go look at Scion's current website, build a car, and write back. (Note: I haven't been there in years, myself, but I'm confident enough in my theory that I'm willing to let you prove yourself wrong.)

... Let's see... Oh, I need to install a plugin to build my car... fine... I wonder what it's for -- oh hey, look at that! It's a crappy little real-time 3D renderer! Hahaha. There you go. Your very first example... uses a client-side renderer.

It works, but the reflections and lighting are all baked onto the car. Which is to say it looks worse than pretty much any video game made in the last 10 years... but it does employ multi-sample AA.

          This site does remind me of a few things though: if you want a 360 of the ca

    • Re: (Score:2, Insightful)

      by war4peace ( 1628283 )
One answer: Gaming.
OK, one more reason: 3D work at home. I do that (as an amateur) and sometimes even my pretty fast machine takes hours at a time to render some scenes. I could just as well send the file to RealityServer 3.0 and render my scenes faster via a web browser, without having to wait hours and hours. That would be great for several reasons:
1. While I wait for my machine to render a scene, I do other things, and more often than not I ask myself what the hell was that thing that I was trying to accom
    • Speaking from experience... it's currently a HUGE PITA.

Sure, if you have just a side view and a front view it's easy. Render out each wheel separately. But then what if you want a 360 view of the car? Oops. No dice. And what if you want the car color to be reflected in the side-view mirrors? All the possible combinations? Well, if you give the user complete freedom, that means there is an infinite number of renderings you have to do. What if you want to see the car at night? Now you have to doub

    • Re: (Score:1, Interesting)

      by nateb ( 59324 )
      One word: iPhone app.

Imagine Street View rendered in the direction you are holding your phone, from your position. With all the goodies from that 3D map someone was building a while back (and which could well be ongoing), plus a live application of the algorithm from Canoma and similar applications, you could have a pretty interesting "virtual" world. Another benefit would be that while using the application, you could be aiding the mapping back-end with live GPS to refine the map and the 3D model on top of

      • "Imagine Street View rendered in the direction you are holding your phone, from your position."

        Congratulations, you've invented AR, which has been an app on my phone for about a year now. It's called using the input from the goddamn camera stuck on the front.

    • Too specific (Score:3, Insightful)

      The uses are probably not yet understood. This is cool technology and some of the tens of millions of developers will find good use for it. The interesting bit is that you gain access to a huge render farm without buying a lot of servers. If your load is uneven, this service will save you a lot of money (and power too).

Anyhow, off the top of my head: cars, architecture, city planning, visualizing climate change, next-generation GPS navigation devices.

Where would they get all the high-resolution, fully textured, up-to-date city-wide 3D models from?

Because unless they have those models this is moot... you can do a far better job using static images like Google Street View does (and which an iPhone is perfectly capable of rendering in real time).

  • The concept is kinda cool but their demo could have been easily faked. It isn't convincing until I can wander around the room on demand while tweaking the environment.
    As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.
(Useless in the remote-access sense, not necessarily useless in a studio environment for architecture or vehicle modelling; although those needs can be met with a rendered video sequence anyway.)

    • As well, it's next to useless if it takes a $15K machine to generate the required images in pseudo-realtime for a single session.

Maybe the rendering cost scales sub-linearly.

      • 1 session = 1 computing unit
      • 2 sessions = 1.5 computing units
      • 3 sessions = 1.7 computing units
      • 4 sessions = 1.8 computing units

      ...and so on. Because some of the rendering for the first session can be reused on other sessions.
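Those computing-unit numbers are only an illustration, but the shape (each additional session costing less because shared work is reused) is easy to model. A toy version, with an invented 50% reuse fraction:

```python
def total_cost(sessions, reuse=0.5):
    """Toy model: each new session only adds the work not already shared
    with earlier sessions. The 'reuse' fraction is invented for illustration."""
    cost = 0.0
    for n in range(sessions):
        cost += (1 - reuse) ** n   # marginal cost shrinks as more work is shared
    return cost

for n in range(1, 5):
    print(n, "sessions ->", round(total_cost(n), 2), "computing units")
# 1 -> 1.0, 2 -> 1.5, 3 -> 1.75, 4 -> 1.88  (close to the illustration above)
```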

I know Slashdot is keen on saying "The Cloud" is a buzzword and meaningless bullshit. But that's because Slashdot is evidently completely clueless about what cloud computing really means. What it means in this case is that you pay for the processing you need. You don't buy a $15k server. You pay Amazon or Google or some other cloud provider for render time. If you need 3 seconds of rendering then they charge you 3 cents for the trouble.

  • Good for VR (Score:4, Interesting)

    by cowtamer ( 311087 ) on Monday November 16, 2009 @02:12AM (#30112682) Journal

    This is a great advancement for high end virtual reality systems, but the current state of "rendering in the cloud" sounds like either a solution looking for a problem or the wrong application of the technology.

    On a future Internet with sub 30 ms latency, this would ROCK. [You could have low-powered wearable augmented reality devices, "Rainbows End" style gaming, and maybe even the engine behind a Snow Crash style metaverse that remote users can log in to].

    NVidia is NOT doing itself a favor with the lame empty office with boring blinds demo. They'd better come up with something sexier quick if they want to sell this (and I don't mean the remote avatar someone posted a link to).

    This reminds me of the "thin client" hype circa 1999. "Thin clients" exist now in the form of AJAX enabled web browsers, Netbooks, phones etc, but that technology took about a decade to come to fruition and found a different (and more limited) niche than all the hype a decade ago [they were supposed to replace worker's PCs for word processing, spreadsheets, etc].

    • On a future Internet with sub 30 ms latency, this would ROCK.

      Considering the maximum distance attached to even theoretically reaching that 30ms threshold...there'd have to be a lot of these farms ;-)

    • by LS ( 57954 )

      [they were supposed to replace worker's PCs for word processing, spreadsheets, etc].

Um, they have for a large portion of the working populace. The last two companies I've worked at use Google Docs.

      LS

Resident Evil and a number of other action/adventure and RPG games from the mid-to-late '90s pioneered this, albeit in a much more limited way. Character enters a room, switch to another image. Character progresses further into the room, switch to a more appropriate angle. All the environments are pre-rendered, and 3D characters play around in them as though they are real-time. It always looked good on the PS1, and I admired the simplicity of the method and its impressive results. It looks like they are jus
  • Still no cure for cancer :(

  • So, you're going to prepare high-quality images in response to requests from mobile devices. Your "cloud", a vast farm of massively powerful rendering engines, will prepare these images thousands of times more quickly than your iPhone's pathetic processor, and stream them back to your display. Neato.

    Now, since this works so well, millions of mobile users will flock to the service. Thousands at a time will be requesting images. Fortunately, that render farm is still thousands of times faster than a mobil

    • They could be caching the rendered images. "Location: x, looking in direction: y" does not have to be rendered more than once.
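A minimal sketch of that caching idea: quantize the camera position and view direction so that near-identical requests map to the same key, and render only on a cache miss. The step sizes and the render_frame callback are placeholders, not anything Nvidia has described.

```python
def cache_key(position, direction, pos_step=0.5, ang_step=5.0):
    """Quantize camera position (meters) and view direction (degrees) so that
    near-identical viewpoints share one cached frame. Step sizes are arbitrary."""
    qpos = tuple(round(c / pos_step) for c in position)
    qdir = tuple(round(a / ang_step) for a in direction)
    return (qpos, qdir)

_frame_cache = {}

def get_frame(position, direction, render_frame):
    """Return a cached frame if this viewpoint was rendered before; otherwise
    call the (placeholder) render_frame function and cache its result."""
    key = cache_key(position, direction)
    if key not in _frame_cache:
        _frame_cache[key] = render_frame(position, direction)
    return _frame_cache[key]
```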

  • I, like many of you here was wondering what the hell this could possibly be useful for, up until I viewed the video.

    The answer, clearly, is porn.

    It all makes sense now!

Calling this a "real-time" raytracing server seems a bit disingenuous. Is it blazing fast? Yes. Is it real-time (as the demo claims)? I suppose it's semantics, but I think it would have to be x frames per second to be real-time. TFA calling it near-real-time seems a little more reasonable, but still hype. Can "within seconds" still be considered even near-real-time?
  • The video narration is inaccurate. What you see there is not a progressive JPEG loading (they might be using progressive compression for the JPEG, but it doesn't matter).
    What you're seeing is progressive refinement, which is a raytracing rendering technique that starts to show an image immediately and continuously adds detail (rather than rendering the image in full detail immediately). The light and dark splotches you initially see are a typical artifact of low-detail radiosity rendering.
    More information h [google.com]
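The parent's distinction, sketched in code: instead of finishing each pixel before showing anything, a progressive renderer keeps a running per-pixel average and can hand off a (splotchy at first) image after every pass. The trace_sample stub stands in for whatever ray or path tracer is actually doing the work.

```python
import random

def trace_sample(x, y):
    """Stub for one jittered sample of pixel (x, y); a real renderer traces a ray here."""
    return random.random()

def progressive_render(width, height, passes, show):
    """Keep a running average per pixel and hand a displayable image to `show`
    after every pass, so detail accumulates instead of appearing all at once."""
    accum = [[0.0] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                accum[y][x] += trace_sample(x, y)
        # current estimate = accumulated samples / number of passes so far
        show([[accum[y][x] / n for x in range(width)] for y in range(height)], n)

# Usage sketch:
# progressive_render(32, 24, passes=8, show=lambda img, n: print("pass", n, "done"))
```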
FWIW, I suggested rendering and compositing multiple video streams into a single one, then downloading it to a local mobile terminal, a number of years ago. I guess you just wait until you get good enough hardware, and then when you hit the sweet spot everything just materializes.
