Ray Tracing for Gaming Explored
Vigile brings us a follow-up to a discussion we had recently about efforts to make ray tracing a reality for video games. Daniel Pohl, a research scientist at Intel, takes us through the nuts and bolts of how ray tracing works, and he talks about how games such as Portal can benefit from this technology. Pohl also touches on the difficulty in mixing ray tracing with current methods of rendering. Quoting:
"How will ray tracing for games hit the market? Many people expect it to be a smooth transition - raster only to raster plus ray tracing combined, transitioning to completely ray traced eventually. They think that in the early stages, most of the image would be still rasterized and ray tracing would be used sparingly, only in some small areas such as on a reflecting sphere. It is a nice thought and reflects what has happened so far in the development of graphics cards. The only problem is: Technically it makes no sense."
Adaptive techniques: make or break (Score:4, Interesting)
Re: (Score:2)
Re: (Score:3, Insightful)
Starting the rays from the light source would be less effective because once you start adding a few more light sources to the scene you are going to have more sets of rays to keep track of than if you were to just cast the rays from the camera. Even worse, you would have to run calculations on rays that would never hit the camera. Either way, you would still have to optimize the scene with BSPs or some sort of data structure.
Re: (Score:3, Informative)
No cookie for whoever gave the bozo above me "insightful".
Casting from the light source forward (photon tracing) is too expensive and is rarely done, though it does give you more accurate effects - caustics, color bleeding from reflected illumination, etc.
Some of that can be approximated more cheaply with radiosity algorithms.
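To make the asymmetry concrete, here is a minimal backward-tracing sketch in C++. It is a toy under stated assumptions: the hard-coded sphere stands in for a real BSP/BVH-accelerated scene query, and the "framebuffer" is ASCII art. The point is that the work is one primary ray per pixel, fixed by the resolution, no matter how many lights the scene has:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };   // origin, direction

// Toy stand-in for the real intersection test (which a serious tracer
// would route through a BSP/BVH): one hard-coded unit sphere at (0,0,-3).
static bool trace(const Ray& r) {
    Vec3 oc = { r.o.x, r.o.y, r.o.z + 3.0f };             // origin minus center
    float a = r.d.x*r.d.x + r.d.y*r.d.y + r.d.z*r.d.z;
    float b = 2.0f * (oc.x*r.d.x + oc.y*r.d.y + oc.z*r.d.z);
    float c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - 1.0f;   // radius 1
    return b*b - 4.0f*a*c >= 0.0f;                        // any real root = hit
}

// Backward (camera-first) tracing: one primary ray per pixel, so the ray
// count is fixed by the image resolution, not by the number of lights.
int main() {
    const int W = 48, H = 24;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float u = (x + 0.5f) / W * 2.0f - 1.0f;       // image-plane x
            float v = 1.0f - (y + 0.5f) / H * 2.0f;       // image-plane y
            Ray r = { {0, 0, 0}, {u, v, -1.0f} };         // pinhole camera
            putchar(trace(r) ? '#' : '.');                // 1-bit framebuffer
        }
        putchar('\n');
    }
    return 0;
}
```

Tracing from the lights instead would mean firing enough rays from every source that some happen to reach the camera; most of that work is thrown away, which is the grandparent's point.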
Re:Adaptive techniques: make or break (Score:5, Insightful)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Ignoring that, and granting that a coder/modeler isn't all that unlikely, you still can't just slap up a game that's a notch above 'hello world' in complexity and wake up to a thriving community eager to have your children. If you want to start a game without paying people to work on it, it'll take time (probably months to years) and a few very interested/dedicated people to get it to go anywhere.
Why do you think so few FOSS games have a plot?
"How will ray tracing for games hit the market?" (Score:5, Funny)
Now hear this (Score:5, Insightful)
See, the two are incompatible because the purpose is different. With games, the idea is "How realistic can we make something look while generating 30 frames per second?" But with photorealistic rendering the idea is "How realistic can we make something look, regardless of the time it takes to render one frame?"
And as time goes on and processors become faster and faster, the bar for what people want rises. Things like radiosity, fluid simulations and more become more expected and remain out of reach in real time. So don't ever count on games looking like those still images that take hours to make. Maybe they could make it look like the pictures from 15-20 years ago. But who cares about that? Real-time game texturing already looks better than that.
already done with Quake (Score:3, Interesting)
Re: (Score:3, Interesting)
Wow.
(Like most ray tracing advocates, he points out that ray tracing is "perfect for parallelization", but this ignores that so is standard 3D rendering - graphics cards have been taking advantage of this parallelization for years.)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
I for one welcome our new raytracing overlords...
Re: (Score:3, Informative)
Re:Now hear this (Score:5, Insightful)
Re:Now hear this (Score:4, Insightful)
I don't have to look at the damn graph to tell you that what people are going to want is this [blenderartists.org]
And what they are going to get is this [pcper.com]
And, they should just be happy with this [computergames.ro] (which, is pretty awesome)
My point is that real time photorealistic rendering will never catch up with what people expect from their games. It will always be behind. If all you want is mirrors, then find a faster way to implement them at the expense of a bit of quality.
Re: (Score:2)
If you're going to do it, you might as well do it properly.
It can't be done so don't worry about it.
Re:Now hear this (Score:5, Insightful)
Yes, what can be produced will still be behind what people want or expect. But ray tracing will be less far behind than rasters in the near future.
All of this is according to TFA; I don't know much about this from a technical standpoint.
Re:Now hear this (Score:4, Insightful)
The majority of quality improvement these days seems to come from post-processing effects and clever texture and programmable shader use. If you want to get fur on an animal via polygons you will have to spend a load of rendering time, but if you fake it with textures you can get pretty good results on today's hardware. Same with shadows and a lot of other stuff. Doing it 'right' takes a load of computing power; faking it works in realtime.
I simply haven't seen all that much raytracing that actually could compete with current-day 3D hardware; the examples that do look better than today's 3D hardware are done in offline renderers and take hours for a single frame.
realism (Score:3, Interesting)
This is a rather good point; at some point, adding more polygons doesn't do anything to make an unrealistic scene look more realistic. This is true for raytracing and polygon rendering alike. Ray tracing has some advantages here, for what it's worth; it scales better with huge scenes, and it can represent non-triangular primitives natively (though all the fast ray-tracers I've seen only deal with triangles). I wouldn't
Re:Now hear this (Score:5, Informative)
One of the biggest hurdles in game graphics is geometry detail. Normal mapping is just a hack to make things appear more detailed, and it breaks down in some situations. Raytracing will allow *much* higher geometry detail than rasterisation. Better reflections, refraction, caustics and so on are just gravy.
Re:Now hear this (Score:5, Interesting)
Furthermore, rasterization requires tricks (many would call them "hacks") to make the scene approach realism. In games today, shadows are textures (or stencil volumes) created by rendering more passes. While they look "good enough", they still have artifacts and limitations that fall short of realism. Shadows in raytracing come naturally, as do reflections and refractions. Add some global illumination and the scene looks "real".
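To see how cheaply those shadows fall out, here is a minimal sketch (toy C++; the single hard-coded sphere stands in for the engine's full scene intersection test): one extra ray per light per shading point, and occlusion comes out of the same machinery as visibility:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Toy occluder: one hard-coded sphere stands in for the scene
// intersection test a real tracer would run against its BSP/BVH.
static bool hit_anything(Vec3 from, Vec3 dir, float maxDist) {
    Vec3 oc = sub(from, Vec3{0.0f, 1.0f, 0.0f});   // sphere center (0,1,0)
    float b = dot(oc, dir);
    float disc = b*b - (dot(oc, oc) - 0.25f);      // radius 0.5
    if (disc < 0.0f) return false;                 // ray misses entirely
    float t = -b - std::sqrt(disc);                // nearest hit distance
    return t > 0.0f && t < maxDist;                // blocked before the light?
}

// The whole shadow algorithm: cast one extra ray from the shading point
// toward the light. No shadow-map passes, no stencil volumes.
static bool in_shadow(Vec3 point, Vec3 lightPos) {
    Vec3 toLight = sub(lightPos, point);
    float dist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = {toLight.x/dist, toLight.y/dist, toLight.z/dist};
    return hit_anything(point, dir, dist - 1e-3f); // epsilon: no self-hits
}

int main() {
    // A point on the floor directly below the sphere vs. one off to the side.
    printf("under sphere: %s\n", in_shadow({0,0,0}, {0,5,0}) ? "shadow" : "lit");
    printf("off to side:  %s\n", in_shadow({3,0,0}, {0,5,0}) ? "shadow" : "lit");
    return 0;
}
```

Compare that with rendering the whole scene again from the light's point of view just to build a shadow map.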
Rasterization requires hacks like occlusion culling, depth culling, sorting, portals, levels of detail, etc. to make 3D engines run realtime, and some of these algorithms are insanely hard to implement even for best-case scenarios; even then you're doing unnecessary work and wasting RAM rendering things you never see. Raytracing only renders what's on the screen.
That being said, I don't think raytracing will completely replace rasterization, at least not right away. Eventually, some games may incorporate a hybrid approach like most commercial renderers do today (scanline rendering for geometry, add raytracing for reflections and shadows). Eventually, 3D hardware will better support raytracing, and maybe in another decade we'll begin to see fast 3D engines that use ray tracing exclusively.
Re:Now hear this (Score:4, Insightful)
It's like arguing that we should go back to raycasting because it can render a textured cube many times faster than a 3D rasterized engine could.
You're being rather shortsighted.
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re:Now hear this (Score:4, Interesting)
Are there any reasons why current GPU designs can't be adapted for hardware assisted raytracing?
Re: (Score:2, Informative)
In a word, yes. Check Beyond3D.com forum's GPGPU subforum (lots of raytracer stuff discussed and introduced) -- and also the CellPerformance subforum about raytracers on PS3's CPU (those SPEs rock for that) running on Linux of course.
Re: (Score:3, Interesting)
As an aside, isn't the work to combine your current bog-standard processors with inbuilt "graphics processors" (a la AMD Fusion and Intel Larrabee) just going to turn every consumer CPU into a Cell-ish architecture within five years or so - a number-crunching core or two plus an array of "dumb" scalar processors?
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
Re:Now hear this (Score:5, Informative)
No, they have a graph that very clearly shows that raytracing while using a binary tree to cull non-visible surfaces has a performance advantage over rasterizing while using nothing to cull non-visible surfaces. Perhaps someday a raster engine will regain that advantage by using these "BSP Trees" [gamedev.net] as well.
Re: (Score:3, Interesting)
This isn't what we need in games (Score:5, Insightful)
I guess one has to state the obvious: by moving to a process which is not implemented in silicon, as with current graphics cards, the work must necessarily be done in software. That means it runs on CPUs, and that's something Intel is involved in, whereas when you look at the computational share of bringing a game to your senses right now, NVIDIA and ATI/AMD are far more likely to be providing the horsepower than Intel.
But really, even if this wasn't a vested-interest case (and it may not be; no harm exploring it, after all), the fact remains that we don't actually need this for games. Graphics hardware has gone down an entirely different route whereby you write little shader programs which create surface visual effects on top of the bread-and-butter polygons and textures. This is a well-established system by now and has a naturally compressive effect. It's like making all your visual effects procedural in nature rather than giving objects simple real-world textures and then doing a load of crazy maths to simulate reality. It works very well. Remember, a lot of the time you want things to look fantastical and not ultra-realistic, so lighting is only part of the challenge.
Games aren't having a problem looking great. They're having a problem looking great and doing it fast enough, and game developers are having a problem creating the content to fill these luscious realistic-looking worlds. That's actually what's more useful, really: ways to help game developers create content in parallel, rather than throwing out the current rendering strategy adopted worldwide by the games industry.
Re: (Score:3, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2)
"I think the problem with the current system is that it scales horribly."
On the contrary, it actually scales very well. You can simply use more pixel pipelines to do things in parallel with fewer passes (which is what you see in most graphics hardware, including the current consoles), or you can render alternate lines, or chunks of the display, etc., across multiple entire pieces of hardware such as SLI/Crossfire on the PC.
The problem you describe is essentially that any complicated visual processing in very
Re: (Score:2)
We MIGHT have the technical capability to encode 4K video in 4:4:4 by 2015 in realtime with upper-pro-level gear - it's a stretch. We won't see cameras like the Red One standardize in movie studios until well after this decade, much less television studios. 33 megapixels for a broadcast standard is ludicrous - and will be impossible even for the highest end cinema to implement in 7 years.
I'd settle for a solid, end-to-end 1080p60 in VC-1 as a broadc
Re: (Score:2)
Re: (Score:2)
And why do you think that is any different with raytracing? Model geometry is very similar, and I doubt that you will understand the math behind the BRDF (Bidirectional Reflectance Distribution Function, a way to describe surface characteristics) or scattering in participating media without some time to study it. In fact they are so similar that OpenRT [openrt.de] is a raytracer with an interface quite similar to OpenGL. The shader system is completely different, though, as it wouldn't make sense to limit it to GPU-shaders without hardwa
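For what it's worth, the simplest BRDF is not scary at all. A hedged sketch of the Lambertian (ideally diffuse) case, with invented helper names:

```cpp
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// The simplest BRDF there is: a Lambertian (ideally diffuse) surface
// scatters incoming light equally in all directions, so f_r is just
// the constant albedo/pi, independent of the in/out directions.
static Vec3 lambertian_brdf(Vec3 albedo) {
    const float inv_pi = 1.0f / 3.14159265f;
    return {albedo.x*inv_pi, albedo.y*inv_pi, albedo.z*inv_pi};
}

// Direct lighting from one light: f_r * incoming radiance * cos(theta).
static Vec3 shade(Vec3 albedo, Vec3 n, Vec3 toLight, Vec3 light) {
    float cosTheta = std::max(0.0f, dot(n, toLight)); // toLight must be unit
    Vec3 f = lambertian_brdf(albedo);
    return {f.x*light.x*cosTheta, f.y*light.y*cosTheta, f.z*light.z*cosTheta};
}

int main() {
    Vec3 c = shade({0.8f, 0.2f, 0.2f}, {0, 1, 0}, {0, 1, 0}, {1, 1, 1});
    printf("reflected radiance: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}
```

Real materials need fancier BRDFs (and the scattering in participating media mentioned above), but the interface stays the same: directions in, reflectance out.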
Re:This isn't what we need in games (Score:4, Insightful)
If the parallelization trend continues like it's progressing now, manycore CPUs will probably arrive before 2010. Also, both AMD and Intel appear to be taking steps in the direction of enthusiast-grade multi-socket systems, increasing the average number of cores once again. Assuming raytracing can be parallelized as well as TFA makes it sound, rendering could just return to the CPUs. I'm no expert, but it does sound kinda nice.
Re: (Score:2, Informative)
Yes, and standard rasterisation methods are embarrassingly parallel, too. As the other reply points out, we already have parallel processors in the form of GPUs. So I don't see that either multicore CPUs, or the fact that raytracing is easily parallelised, is going to make raytracing suddenly catch up.
What we might see perhaps is that one day processors are fast enough that peo
Re: (Score:2)
2010 sounds realistic for a top-shelf equipment for the few chosen. 2020 looks more like consumer-grade electronics.
Not the GPU. (Score:2)
Not that much, because GPUs achieve ultra-high parallelism using SIMD (single instruction, multiple data) mechanisms.
And those aren't very efficient with highly divergent code paths.
i.e.:
- in traditional triangle rasterisation a lot of pixels are calculated at the same time for the same triangle. Thus a lot of threads will be running the exact same shader code on the GPU. SIMD is a nice increase in performance.
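A rough illustration in scalar C++ of why that matters (the loops stand in for what SIMD lanes do in lockstep; the enum and shading functions are made up):

```cpp
enum MaterialKind { DIFFUSE, REFLECTIVE, REFRACTIVE };
struct Hit { MaterialKind kind; };                 // plus hit point, normal...

static float shade_diffuse(const Hit&)    { return 0.5f; } // placeholder
static float shade_reflective(const Hit&) { return 0.8f; } // shading paths
static float shade_refractive(const Hit&) { return 0.9f; }

// Rasterization-like batch: every element belongs to the same triangle
// and material, so all SIMD lanes would run the same instructions.
void shade_coherent(const Hit* hits, float* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = shade_diffuse(hits[i]);           // one path, full SIMD use
}

// Ray-tracing-like batch: neighbouring rays can hit different materials,
// so a SIMD unit must execute each branch in turn with the other lanes
// masked off, effectively serializing this switch.
void shade_divergent(const Hit* hits, float* out, int n) {
    for (int i = 0; i < n; ++i) {
        switch (hits[i].kind) {                    // per-lane divergence
            case DIFFUSE:    out[i] = shade_diffuse(hits[i]);    break;
            case REFLECTIVE: out[i] = shade_reflective(hits[i]); break;
            case REFRACTIVE: out[i] = shade_refractive(hits[i]); break;
        }
    }
}
```

On a GPU, the second loop's switch runs branch by branch with lanes masked off, so the more materials your rays scatter across, the less of the SIMD width does useful work.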
Re: (Score:2)
Re: (Score:2)
He also hinted that ray tracing could make collision detection much better, so that you don't get hands/guns sticking through thin walls/doors, which would also be good.
But hey, I'm not rooting for one or the other; game devs will use whatever is best and
Re: (Score:2)
... a way which was pioneered by the Reyes renderer (if I am not i
Hardware product dependence not good (Score:3, Insightful)
Your post is heavily dependent on the availability of suitable hardware. Software can be ported and recompiled for new platforms, but hardware-dependent software has a short lifespan precisely because the hardware it relies on doesn't stay available for long. There are a lot of otherwise enjoyable games which are unplayable now because they depended on the early Voodoo cards or other unrepeated graphics hardware. Now with CPU power ramping back up (rel
Raytracing scales up far better... (Score:2)
Re:Raytracing scales up far better... (Score:4, Insightful)
People keep saying this, that raytracing scales up better than rasterization. It's simply not true. Both of them have aspects that scale linearly and logarithmically. They do scale differently, but in a related sort of way.
Raytracing is O(resolution), and O(ln(triangles)), assuming you already have your acceleration structures built. But guess what? It takes significant time to build your acceleration structures in the first place. And they change from frame to frame.
Rasterization is O(ln(resolution)), and O(triangles). Basically, in a rasterizer, we only draw places where we have triangles. Places that don't have triangles have no work done. But the thing is, we've highly pipelined our ability to handle triangles. When people talk about impacting the framerate, I want to be clear what we're talking about here: adding hundreds, thousands, or even a million triangles is not going to tank the processing power of a modern GPU. The 8800 Ultra can process in the neighborhood of 300M triangles per second. At 100 FPS, that'd be (not surprisingly) 3M triangles per frame.
Modern scenes typically run in the 100-500K triangles per frame, so we've still got some headroom in this regard.
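For reference, here is roughly where the "O(ln(triangles)) per ray" comes from: a hedged sketch of a binary BVH traversal (node layout invented, leaf triangle test stubbed out, and the slab test assumes nonzero direction components for brevity). Each ray prunes whole subtrees whose bounding boxes it misses:

```cpp
#include <algorithm>
#include <utility>

struct Ray  { float o[3], d[3]; };           // origin, direction
struct AABB { float lo[3], hi[3]; };         // box corners

struct BVHNode {
    AABB bounds;
    BVHNode *left = nullptr, *right = nullptr;  // internal node: two children
    int firstTri = -1, triCount = 0;            // leaf node: a few triangles
};

// Standard slab test: does the ray hit the bounding box at all?
static bool intersect_box(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.d[i];
        float t0 = (b.lo[i] - r.o[i]) * inv, t1 = (b.hi[i] - r.o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Placeholder for the exact triangle tests done at a leaf.
static bool intersect_tris(const Ray&, int /*first*/, int /*count*/) { return true; }

// Each ray walks down the tree; a missed box prunes that whole subtree,
// which is where the logarithmic-per-ray behaviour comes from.
bool traverse(const Ray& ray, const BVHNode* node) {
    if (!node || !intersect_box(ray, node->bounds)) return false;
    if (node->triCount > 0)
        return intersect_tris(ray, node->firstTri, node->triCount);
    return traverse(ray, node->left) || traverse(ray, node->right);
}
```

The build cost is the asterisk: a static level can precompute its tree, but animated triangles force per-frame rebuilds or refits, which is the "they change from frame to frame" problem above.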
Cheers.
Re: (Score:2)
Creation requirements may be less (Score:2)
Re: (Score:2)
Also of interest ... (Score:2)
CC.
Re: (Score:2)
Wow. (Score:3, Insightful)
Further Reading (Score:5, Interesting)
http://realtimecollisiondetection.net/blog/?p=38 [realtimeco...ection.net]
Re:Further Reading (Score:4, Insightful)
In short, I don't buy the summary article's viewpoint because at times he can be confusing or ambiguous with respect to his "proof." I like the parent's linked article, because the author of that article at least provides something computationally meaningful to think about.
How far we've come in just 15 years (Score:5, Interesting)
I do remember that someone found some shortcuts for raytracing, and I wonder if that shortcut is applicable to realtime rendering today. From what I recall, the shortcut was to do the raytracing backwards, from the surface to the light sources. The shortcut didn't take into account ALL reflections, but I remember that it worked wonders for transparent surfaces and simple light sources. I know we investigated this for our business, but at the time we were also considering leaving the industry since the competition was starting to heat up. We did leave a few months early, but it was a smart move on our part rather than continuing to invest in ever-faster hardware.
Now, 15 years later, it's finally becoming a reality of sorts, or at least considered.
Many will say that raytracing is NOT important for real time gaming, but I disagree completely. I wrote up a theory on it back in the day on how real time raytracing WOULD add a new layer of intrigue, drama and playability to the gaming world.
First of all, real time raytracing means amazingly complex shadows and reflections. Imagine a gay where you could watch for enemies stealthily by monitoring shadows or reflections -- even shadows and reflections through glass, off of water, or other reflective/transparent materials. It definitely adds some playability and excitement, especially if you find locations that provide a target for those reflections and shadows.
In my opinion, raytracing is not just about visual quality but about adding something that is definitely missing. My biggest problem with gaming has been the lack of peripheral vision (even with wide aspect ratios and funky fisheye effects). If you hunt, you know how important peripheral vision is, combined with truly 3D sound and even atmospheric conditions. Raytracing can definitely aid in rendering atmospheric conditions better (imagine which player would be aided by the sun in the soft fog and who would be harmed by it). It can't overcome the peripheral loss, but by producing truer shadows and reflections, you can overcome some of the gaming negatives by watching for the details.
Of course, I also wrote that we'd likely never see true and complete raytracing in our lives. Maybe I'll be wrong, but "true and complete" raytracing is VERY VERY complicated. Even current non-real time raytracing engines don't account for every reflection, every shadow, every atmospheric condition and every change in movement. Sure, a truly infinite raytracer IS impossible, but I know that with more hardware assistance, it will get better.
My experience over the years was ALWAYS with static images that were raytraced. They looked great, but it wasn't until I experienced raytraced animations (high res, many reflective and transparent layers with multiple light sources and a sun-source) that I really saw the benefit and how it would aid in gaming.
The next step: a truly 3D immersive peripheral video system, maybe a curved paper-thin monitor?
Re:How far we've come in just 15 years (Score:5, Funny)
Re: (Score:3, Interesting)
Re: (Score:3, Funny)
Imagine a gay who could watch for enemies stealthily by monitoring shadows or reflections
There, fixed it for you, it makes a bit more sense now, I guess..
Re: (Score:2)
All this is fine, but I think we will have to wait another 20+ years for computers to be fast and cheap enough before this becomes a reality.
The next step: a truly 3D immersive peripheral video system,
Re: (Score:2)
Re: (Score:2)
I'm not sure I agree, only because we're currently considering what horsepower we would need tomorrow to do it the way we do it today. I've looked at the technology many times over 15 years, including writing a few theoretical thoughts that I sold to private developers back in the day. One thing I looked at was a pre-rendered set of values for each object and face th
Re: (Score:2)
Holy Grail? Maybe not. (Score:4, Informative)
Well DUH!! (Score:2, Funny)
No kidding?? Well if you drive a car with a 16 cylinder, 1500 HP engine, it's a LOT faster than a 4 cylinder compact. More on this story as it develops.
He's talking about scaling (Score:2, Informative)
Re: (Score:2)
Re: (Score:2, Insightful)
If you knew anything about parallel algorithms you'd know that it's fairly common to have things that scale with more like 75% efficiency, and you're still happy. It's all down to how much communication is required, and how often it's required. With raytracing normally (as in, a decade ago when I knew anything about this) you'd parallelise over multiple frames. With real-time rendering you'd need to parallelise within a frame. Depending on your
Re: (Score:2)
I'm sure you thought you were being clever, but you weren't.
Wow, aren't you a ray of sunshine?? I didn't say I knew anything about parallel algorithms, I admit I know nothing about them. I can grasp what you (and some others) have replied, but none of that information was stated in conjunction with the line I quoted from TFA. My point? It just struck me funny to have this odd statement that (at least on its surface) seems to state something OBVIOUS which is, 16 processors are faster than 1. So for crying out loud, lighten up, it was a joke.
in the player's best interests, natch (Score:5, Insightful)
There, fixed that for you.
Raytracing the shiny first-intersection makes a lot of sense, even if it doesn't sell more CPUs. Sure, some day we will all have stunning holistic scene graphs that fit entirely within the pipeline cache of the processor, but it's not yet time for that.
Every change in developing a game engine requires changes in the entire toolset to deal with how to produce assets, how to fit within render time limit budgets, and how to model the scene graph and the logic graphs so that both are easily traversed and managed.
In the meantime, we have a pretty nice raster system right now, with a development strategy that provides for all those needs. You might not think that full-scale raytracing would upset this curve, but I'm not convinced. What do you do when a frame suddenly takes more than 1/30 sec to render because the player is near a crystalline object and the ray depth is too high? How do you degrade the scene gracefully if your whole engine is built on raytracing? We've all played games where things like this were not handled well.
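The crude but common answer is a recursion budget. A sketch with invented names, showing how an engine might clamp ray depth per frame to stay inside 33 ms:

```cpp
struct Color   { float r, g, b; };
struct Surface { bool reflective; Color base; };

static Color add(Color a, Color b)   { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
static Color scale(Color c, float k) { return {c.r * k, c.g * k, c.b * k}; }

// Toy shading: direct lighting plus one attenuated bounce per level,
// cut off hard once the depth budget is spent.
static Color shade(const Surface& s, int depth, int maxDepth) {
    Color local = s.base;                       // direct lighting only
    if (s.reflective && depth < maxDepth) {
        Surface seen = {true, {0.2f, 0.3f, 0.4f}};  // toy "reflected" surface
        local = add(local, scale(shade(seen, depth + 1, maxDepth), 0.5f));
    }
    return local;                               // at maxDepth: stop bouncing
}

// An engine could tune maxDepth per frame: if the last frame blew its
// 33 ms budget, drop the bounce limit; if it had slack, raise it again.
Color shade_with_budget(const Surface& s, float lastFrameMs) {
    int maxDepth = lastFrameMs > 33.3f ? 1 : 4; // crude feedback control
    return shade(s, 0, maxDepth);
}
```

Capping depth trades mirror-in-mirror accuracy for a stable frame time, which is usually the right trade for a game.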
I contend that game AI is sometimes more advanced than academic AI because game developers are results-oriented and cut corners ruthlessly to achieve something that works well enough for a niche application. The same goes for game graphics: 33 milliseconds isn't enough to render complex scene graphs in an academically perfect and general way; it will require the same results-oriented corner-cutting to nudge the graphics beyond what anyone thought possible in 33 ms. If that means using raytracing for a few key elements and ray-casting/z-buffering/fragment-shading the rest of the frame, game developers will do it.
Re:in the player's best interests, natch (Score:4, Interesting)
I contend that game AI is almost always laughably bad (or pretty much non-existent). I realize Mass Effect doesn't exactly win a lot of points for its AI, but the problems very nearly ruined a AAA-developer/large-budget game. I remember one point where, out of battle, I was telling one of my squad to go somewhere. There was pretty much one feature in the room - a wall intersecting the direct path to the target point. Getting around this wall would require one small deviation from the direct path. Instead of walking around the wall, the character just stood there saying "I'm going to need a transporter to get there" or something.
I can't imagine how the "AI" could have been implemented in order for that kind of failure to be possible (and common - I had repeated problems with this through the game). I assume they must have just cheated vigorously on the "follow" logic, as - if they'd used the same system - you'd be losing your squad around every corner.
Really, though, none of the maps were that complicated. The "navigable area" map for the problem location couldn't have had more than 200 or so vertices (it was a very simple map, one of the explorable bunker type areas). That's few enough that you could just Floyd-Warshall the whole graph. But, more generally, a stupidly naive, guessing DFS (that capped at 3 turns or something) would have worked just fine too. I can't think of a solution or algorithm that would fail the way their system did constantly. Mind-boggling.
Stepping back a bit, this shouldn't even be a consideration. There are simple, fast, robust algorithms that could handle this kind of trivial pathing problem without putting any strain on CPU or occupying more than a couple pages of code. That they don't have a better solution says that they (and most of the games industry, in my experience as a player) value AI at very close to 0.
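For scale, here is what a complete, naive solution to that pathing problem looks like: plain BFS over the navigable-area graph (a sketch; the node/edge layout is invented):

```cpp
#include <queue>
#include <vector>

// Navigable-area map as an adjacency list: node = walkable region,
// edge = "you can walk directly between these two regions".
using Graph = std::vector<std::vector<int>>;

// Plain BFS: returns the parent of each node on a shortest-hop path
// from `start`, or -1 if unreached. On a ~200-node map this is
// microseconds, nowhere near straining a game's CPU budget.
std::vector<int> bfs(const Graph& g, int start) {
    std::vector<int> parent(g.size(), -1);
    std::queue<int> q;
    parent[start] = start;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : g[u])
            if (parent[v] == -1) { parent[v] = u; q.push(v); }
    }
    return parent;   // walk parents back from the goal to recover the path
}
```

Floyd-Warshall, as suggested above, precomputes all pairs at once; for a 200-vertex map even that cubic cost is cheap one-time work.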
Now it's a good time for a new Amiga. (Score:3)
When the Amiga was released, it was a quantum leap in graphics, sound, user interface and operating system design. It could run full-screen dual-playfield displays at 60 frames per second with a multitude of sprites; it had 4 hardware channels of sound (and some of the best filters ever put on a sound board); its user interface was intuitive and even allowed mixing different video modes; and its operating system supported preemptive multithreading, per-executable registries (.info files) that made installation of programs a non-issue, and a scripting language that all programs could use to talk to each other.
20 years later, PCs have adopted quite a few trends from the Amiga (the average multimedia PC is now filled with custom chips) and added lots more in terms of hardware (hardware rendering, hardware transformation and lighting). It seems that the problems we had 20 years ago (how to render 2D and 3D graphics quickly) are solved.
But today's computing has some more challenges for us: concurrency (how to increase the performance of a program through parallelism) and, when it comes to 3D graphics, raytracing! Incidentally, raytracing is a computational problem that is naturally parallelizable.
So, what type of computer shall we have that solves the new challenges?
It's simple: a computer with many many cores!
That's right... the era of custom chips has to end here. Amiga started it for personal computers, and a new "Amiga" (be it a new console or a new type of computer) should end it.
A machine with many cores (let's say a few thousand) will open the door for many things not possible today, including raytracing, better A.I., natural language processing and many other difficult-to-solve things.
I just wish there were some new RJ Micals out there thinking of how to bring concurrency to the masses...
No, raytracing is BETTER adapted to custom chips. (Score:2)
A custom chip can have hundreds or thousands of dedicated raytracing processors that run in parallel. Raytracing is embarrassingly parallelizable, so it's far better suited to specialized processors than to general-purpose vectorization.
Saarland University, the people who designed OpenRT in the first place, were getting 60+ frames a second on a hardware raytracing engine in 2005... and their raytracing engine only had 6 million gates and ran at 75 MHz. Today
Re: (Score:2)
I wish people would stop saying this; the real-estate budget for transistors, memory bandwidth and a lot of other things get in the way of "have the CPU do it all". There's a reason custom chips have cornered the market ever since 3Dfx and the other first-generation 3D cards from 10 or so years ago. No matter how fast a CPU is, you can't compete with the degree of specialization (not to mention experience) of a company designing custom chips
Don't get it, (Score:2)
No you don't (Score:2)
That is an optimization that is used today. It is NOT a law. Think of the real world: just because that huge billboard is miles away doesn't mean some guy runs up to it, tears down the paper, and puts a low-res version up on it. The entire world, in RL and in 3D, has the same detail no matter where it is.
As you shoot the ray, it finds the entire world the same size and detail. This is actually one of the problems, for proper raytracing you can't use a lower res model for faraway objects because then the scene mi
General purpose CPUs: a REALLY bad way to do this. (Score:5, Interesting)
Here's a debate between Professor Slusallek and chief scientist David Kirk of nVidia: http://scarydevil.com/~peter/io/raytracing-vs-rasterization.html [scarydevil.com] .
Here's the SIGGRAPH 2005 paper, on a prototype running at 66 MHz: http://www.cs.utah.edu/classes/cs7940-010-rajeev/sum06/papers/siggraph05.pdf [utah.edu]
Here's their hardware page: http://graphics.cs.uni-sb.de/SaarCOR/ [uni-sb.de]
There is some raytracing already... sort of (Score:2)
What you have is ray tracing in texture space, but that texture is brought to the screen via conventional scanline rasterization methods. Sort of. My GLSL parallax shader code sucks though (looks all gelatinous close up), so I'm no expert....
Awesome. (Score:2, Funny)
This Quote Sums it Up (Score:2)
Not ray tracing, radiosity (Score:5, Interesting)
It's amusing to read this. This guy apparently works for Intel's "find ways to use more CPU time" department. Back when I was working on physics engines, I encountered that group.
Actually, the Holy Grail isn't real-time ray tracing; it's real-time radiosity. Ray-tracing works backwards from the viewpoint; radiosity works outward from the light sources. All the high-end 3D packages have radiosity renderers now. Here's a typical radiosity image [icreate3d.com] of a kitchen. Radiosity images are great for interiors, and architects now routinely use them for rendering buildings. Lighting effects work like they do in the real world. In a radiosity renderer, you don't have to add phony light sources to make up for the lack of diffuse lighting.
There's a subtle effect that appears in radiosity images but not ray-traced images. Look at the kitchen image and find an inside corner. Notice the dark band at the inside corner. [mrcad.com] Look at an inside corner in the real world and you'll see that, too. Neither ray-tracing nor traditional rendering produces that effect, and it's a cue the human vision system uses to resolve corners. The dark band appears as the light bounces back and forth between the two surfaces that form the corner, with more light absorbed on each bounce. Radiosity rendering is iterative; you render the image with the starting light sources, then re-render with each illuminated surface as a light source. Each rendering cycle improves the image until, somewhere around 5 to 50 cycles, the bounced light has mostly been absorbed.
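Stated as math, each of those rendering cycles is one more application of the classic radiosity equation (the standard textbook form, not something from TFA):

```latex
% B_i: radiosity of patch i        E_i: its own emission
% rho_i: reflectivity of patch i   F_{ij}: form factor, the geometry/
%                                  visibility coupling between i and j
B_i = E_i + \rho_i \sum_j F_{ij} B_j

% Iterative solution: bounce k feeds bounce k+1.
B_i^{(k+1)} = E_i + \rho_i \sum_j F_{ij} B_j^{(k)}
```

Run the update a handful of times and the mutual bounces between the two faces of a corner converge to exactly the dark band described above.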
There are ways to precompute light maps from radiosity, then render in real time with an ordinary renderer, and those yield better-looking images of diffuse surfaces than ray-tracing would. Some games already do this. There's a demo of true real-time radiosity [dee.cz], but it doesn't have the "dark band in corners" effect, so it's not doing very many light bounces. Geomerics [geomerics.com] has a commercial real-time game rendering system.
Ray-tracing can get you "ooh, shiny thing", but radiosity can get to "is that real?"
Nice introduction, but wrong conclusion (Score:4, Informative)
In fact, his points actually show why a hybrid is perfect: most surfaces are not shiny, refractive, a portal, etc. Most are opaque - and a rasterizer is much better for this (since no ray intersection tests are necessary). He shows pathological scenes where most surfaces are reflective; however, most shots do show a lot of opaque surfaces (since Quake 4 does not feature levels where one explores a glass labyrinth or something).
Yes, if a reflective surface fills the entire screen, it's all pure ray tracing - and guess what, that is exactly what happens in a hybrid. Hybrid does not exclude pure ray tracing for special cases.
Ideally, we'd have a rasterizer with a cast_ray() function in the shaders. The spatial partitioning structure could well reside within the graphics hardware's memory (as an added bonus, it could be used for predicate rendering). This way haze, fog, translucency, refractions, reflections, shadows could be done via ray tracing, and the basic opaque surface + its lighting via rasterization.
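Something like this, where cast_ray() is the wished-for hook from the paragraph above and everything else is an invented stand-in:

```cpp
struct Vec3 { float x, y, z; };
struct Fragment { Vec3 pos, normal, view; bool reflective, refractive; };

// All invented stand-ins: cast_ray() is the hypothetical hardware hook
// that traces through the GPU-resident spatial partitioning structure;
// the rest fake just enough to show the control flow.
static Vec3 mix(Vec3 a, Vec3 b, float t) {
    return {a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t};
}
static Vec3 reflect_dir(const Fragment&)         { return {0.0f,  1.0f, 0.0f}; }
static Vec3 refract_dir(const Fragment&)         { return {0.0f, -1.0f, 0.0f}; }
static Vec3 rasterized_lighting(const Fragment&) { return {0.5f, 0.5f, 0.5f}; }
static Vec3 cast_ray(Vec3 /*origin*/, Vec3 /*dir*/) { return {0.1f, 0.2f, 0.3f}; }

// The hybrid idea: the rasterizer shades every opaque fragment the cheap
// way; only fragments whose material needs it spawn rays.
Vec3 shade_fragment(const Fragment& f) {
    Vec3 color = rasterized_lighting(f);          // normal raster path
    if (f.reflective)
        color = mix(color, cast_ray(f.pos, reflect_dir(f)), 0.4f);
    if (f.refractive)
        color = mix(color, cast_ray(f.pos, refract_dir(f)), 0.6f);
    return color;                                 // most pixels cast zero rays
}
```

Most fragments take the cheap path, which is exactly the hybrid argument: pay for rays only where the material demands them.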
Now, I keep hearing the argument that ray tracing is better because it scales better with geometric complexity. This is true, but largely irrelevant for games. Games do NOT feature 350 million triangles per frame - it just isn't necessary. Unless it's a huge scene, most of these triangles would be used for fine details, and we already have normal/parallax mapping for those. (Note though that relief mapping usually doesn't pay off; either the details are too tiny for relief mapping to make a difference, or they are large, in which case traditional geometry displacement mapping is usually better.) So, coarse features are preserved in the geometry, and fine ridges and bumps reside in the normal map. This way, triangle count rarely exceeds 2 million triangles per frame (special cases where this does not apply include complex grass rendering and very wide and fine terrains). The difference is not visible, and in addition the mipmap chain takes care of any flickering, which would appear if all these details were geometry (and AA is more expensive than mipmaps, especially with ray tracing).
This leaves us with no pros for pure raytracing. Take the best of both worlds, and go hybrid, just like the major CGI studios did.
Re: (Score:2, Interesting)
Re: (Score:2)
(I'm genuinely interested. Got some links for further reading?)
AND...it doesn't produce realistic images! (Score:3, Interesting)
I guess you could use it for shadows... (Score:2)
A hybrid renderer might produce slightly better shadows than we have today but we still need orders of magnitude more power before it happens. Right now we're pushing the limit of graphics cards without ray tracing. Adding ray tracing at each pixel will make your pixel shaders hundreds of times slower.
Re: (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
POV-Ray has for many years included a radiosity engine that works alongside (IIRC, actually as a preliminary step before) the actual "raytracer". This enables it to produce scenes that involve radiosity effects and not just what can be done by raytracing alone (at the cost of taking more time than just raytracing the scene.)
How germane it is discussions of "raytracing" as a method of rende
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)