New Graphics Firm Promises Real-Time Ray Tracing
arcticstoat writes "A new graphics company called Caustic Graphics reckons it's uncovered the secret of real-time ray tracing with a chip that 'enables your CPU/GPU to shade with rasterization-like efficiency.' The new chip basically off-loads ray tracing calculations and then sends the data to your GPU and CPU, enabling your PC to shade a ray-traced scene much more quickly. Caustic's management team isn't afraid to rubbish the efforts of other graphics companies when it comes to ray tracing. 'Some technology vendors claim to have solved the accelerated ray tracing problem by using traditional algorithms along with GPU hardware,' says Caustic. However, the company adds that 'if you've ever seen them demo their solutions you'll notice that while results may be fast — the image quality is underwhelming, far below the quality that ray tracing is known for.' According to Caustic, this is because the advanced shading and lighting effects usually seen in ray-traced scenes, such as caustics and refraction, can't be accelerated on a standard GPU, which can't process incoherent rays in hardware. Conversely, Caustic claims that the CausticOne 'thrives in incoherent ray tracing situations: encouraging the use of multiple secondary rays per pixel.' The company is also introducing its own API, called CausticGL, which is based on OpenGL/GLSL and features Caustic's unique ray tracing extensions."
One step closer (Score:4, Interesting)
While it may be underwhelming to some, I'm more than happy to see people working on this kind of tech. Sure, we've moved on from just "simple" ray tracing to using things like GI, etc., but in time we'll have that in real time as well. Some apps are already using tricks to enable real-time GI and other effects (the key word being tricks, since they're not totally physically accurate). Obviously real time will always lag behind, but I look forward to the future.
ray tracing - not just for chrome spheres anymore (Score:2)
You mention that things have 'moved on' from ray tracing to GI - but keep in mind that most GI methods (and certainly QMC sampling) -are- largely based on ray tracing. When people say 'ray tracing', they're not just talking about chrome spheres or perfect glass... glasses. It's the fundamental concept of 'tracing a ray' through the scene - and that fundamental concept applies not just to direct surface illumination calculations and reflections/refractions, but also to fuzzy reflections/refractions, area-sampled
Re: (Score:3, Insightful)
Re:ray tracing - not just for chrome spheres anymo (Score:4, Informative)
"Global Illumination"
It's a bit of a not-so-well-defined term, really, but in the context of current-generation renderers, global illumination involves calculating not just direct lighting (a spot lighting a wall, say), but also diffuse indirect lighting (that lit wall dimly illuminating the rest of the room) and even specular indirect lighting (such as caustics - the light patterns you see in pools).
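As a rough illustration of the split (a minimal sketch only - direct_lighting(), trace(), and hemisphere() are hypothetical stand-ins for a real renderer's machinery, and a real renderer would weight samples by the BRDF and cosine term):

    def global_illumination(point, normal, samples=64):
        # Direct lighting: the spot lighting the wall.
        color = direct_lighting(point, normal)
        # Diffuse indirect lighting: gather what other surfaces bounce our
        # way by tracing random rays over the hemisphere above the point.
        indirect = 0.0
        for _ in range(samples):
            hit = trace(point, hemisphere(normal))  # the "tracing a ray" part
            if hit is not None:
                indirect += direct_lighting(hit.point, hit.normal)
        return color + indirect / samples

The point being that the indirect term is just more ray tracing: the same primitive operation, aimed at a different question.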
2009 (Score:5, Funny)
2009 is the year of the ray traced desktop.
Re: (Score:3, Insightful)
Can't wait for the ray-traced BSD desktop version of Duke Nukem Invents The Flying Car.
Re: (Score:1)
Seems we've been waiting forever for Duke.
Re: (Score:3, Funny)
Seems we've been waiting forever for Duke.
I think in Soviet Russia, Duke Nukem waits for YOU!
Re: (Score:2)
Re: (Score:1, Funny)
Ray-traced BSD is dying.
Re: (Score:2)
Can't wait for the ray-traced BSD desktop version of Duke Nukem Invents The Flying Car.
Not a BSD desktop... a HURD desktop!
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:3, Interesting)
When will real-time ray tracing happen? Real time will need to be above about 30 frames/second.
Let's say back in the year 2000 it took half an hour to render a complex high-resolution frame.
So we'll apply Moore's law conservatively: let's say speed doubles every 24 months (that makes the math easier, too).
2002 it would take 15 minutes
2004 it would take 7.5 minutes
2006 it would take 3.75 minutes
2008 it would take 1.875 minutes
2010 it would take 56.25 seconds
2012 it would take 28.125 seconds
201
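For what it's worth, the extrapolation is easy to carry to its conclusion in a few lines of Python (assuming, as above, a clean halving every 24 months from a 30-minute render in 2000, and taking ~33 ms per frame as "real time"):

    # Toy extrapolation of the numbers above -- not a prediction.
    seconds = 30 * 60.0              # half an hour per frame in 2000
    year = 2000
    target = 1.0 / 30.0              # ~33 ms per frame for 30 fps
    while seconds > target:
        year += 2                    # one doubling every 24 months
        seconds /= 2.0
    print(year, round(seconds, 3))   # -> 2032 0.027

So by this (very rough) yardstick, the 2000-era frame renders in real time around 2032 with no special hardware at all.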
Shitty summary! (Score:5, Informative)
Stop copying and pasting the article to generate almost the entire summary, especially when you don't do it right. The sentence "However, the company adds that 'if you've ever seen them demo their solutions you'll notice that while results may be fast -- the image quality is underwhelming, far below the quality that ray tracing is known for.'" makes it look like you're talking about the image quality of Caustic's new solution, which is obviously wrong. Here's the real paragraph:
Re: (Score:2)
Thanks. Your clarification actually caused me to rtfa. I figured if the summary was actually accurate, there was no point in the article, as it was likely just a bunch of slop. And while it may still have been a bunch of marketing slop after all, at least it was interesting. :)
Re: (Score:3, Insightful)
Re: (Score:1, Insightful)
Perhaps, but it's mostly ScuttleMonkey's fault for posting such a misleading summary.
Re: (Score:2)
Why's a lightbulb better than a pregnant stripper? You can unscrew a lightbulb!
Re: (Score:2)
Ironically, I failed to get my HTML correct, which suggests that I could use my own reminder about the value of preview.
Re: (Score:2)
Re: (Score:2)
For all the crap that Roland took, at least you were guaranteed that the summary on /. was not just a copy-and-paste of the first paragraph of the article. The copy-and-paste annoys me to no end.
Re: (Score:2)
It seems like a silly, pointless waste of time to reword a summary.
"Caustic"? (Score:2, Funny)
Do they get their chips from Flammable Systems, and their capacitors from Toxic Components Inc?
Re:"Caustic"? (Score:5, Informative)
Re: (Score:3, Informative)
Or the skewed image of a star caused by an imperfect telescope lens.
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Surely they could have chosen a better name. (Score:1, Funny)
Re:Surely they could have chosen a better name. (Score:5, Informative)
Names that require explanation aren't good choices (Score:4, Insightful)
I remember when I first saw a very poorly drawn, shaky image of an animal and read that it was a gnu, and read how clever the name was considered to be since it was, they said, "recursive": GNU's Not Unix.
It didn't bother the enthusiasts that most people in the world can't pronounce the name and have never seen a Gnu.
They found someone with artistic ability to make a better image of a GNU [pasteris.it], but I've seen no evidence that anyone with technical knowledge realizes the depth of the self-defeat in choosing an obscure reference to an obscure animal.
To most people the word "caustic" means only "capable of burning, corroding, or dissolving".
Re: (Score:2)
To most people the word "caustic" means only "capable of burning, corroding, or dissolving".
And most people aren't their target audience.
They're selling to people who do know what caustics are. And in their minds, "caustics" means "too slow to use in an animation."
Non-technical managers often make final decisions. (Score:2)
Re:Names that require explanation aren't good choi (Score:2)
But when a graphics company chooses a graphics-related name, its target market will understand what it means even if no one else does.
Re: (Score:2)
It seems like GNU was not self-defeating, and that the gnu is no longer an obscure animal. Other than that...
Say, what was your point again?
BTW, the meaning of "caustic" to most people doesn't have much importance since most people won't be the direct customers of Caustic Graphics. The name does have a lot of meaning to the company's potential market: caustics are generally the most expensive and often critical part of photorealistic rendering. A company that chooses that word as part of its name has go
Re: (Score:1)
For a foundation that might ever want to reach the general (computer-using) public, feedback about an image that loses anyone but a certified geek might be valuable...
Re: (Score:1)
Their target audience is 3D artists, who you would hope know what the word means - not ignorant geeks on Slashdot.
Re: (Score:1)
RTFGP. I was responding to a post about GNU and the FSF, whose audience should be the general public but who very often miss the mark in their communications.
Re: (Score:1)
Microsoft's non-macho name never hurt it.....unfortunately.
But Microsoft was really popular among men!
I'll believe it when I see it! (Score:5, Insightful)
They've advertised Linux support too, but I haven't heard anything from these guys. Unless they're like nVidia and sit around killing kittens all day, it would be a good idea for them to actually do some research and figure out how GLX and DRI work. Even the ATI closed-source drivers still respect the GLX way of life.
(nVidia replaces the entire DRI stack. DDX, GLX, DRI, DRM, all custom. fglrx doesn't replace GLX. Just in case you were wondering.)
A post I made elsewhere on the subject (Score:5, Insightful)
Like with anything, I call vaporware until they show real silicon. Not because I think they are lying, most companies don't. However there are plenty of overly ambitious companies out there. They think they have figured out some amazing way to leap ahead and get funding to start work... only to realize it's way harder than they believed.
A great example was the Elbrus E2K chip. Dunno if you remember that; it was back in 2000. A Russian group said they were going to make the Next Big Thing(tm) in processors. It'd kick the crap out of Intel. Well, obviously this didn't come to pass. The reason wasn't that they were scammers - in fact, Elbrus is a line of supercomputers made in Russia. The problem was they didn't know what they were doing with regard to this chip.
Their idea was more or less to put their Elbrus 3 supercomputer onto a chip... OK, fine, but the things that you can do at that scale don't always work at the microscale. There are all sorts of new considerations. So while their design was all nice in theory in a simulator, it was impossible to fab.
Intel and AMD aren't amazing because of the chips they design, they are amazing because they can then actually fab those chips economically. You can design something that'll smoke a Core i7 in simulations. However you probably can't make it a real chip.
This smells of the same sort of thing to me. Notice that they have press releases and some shiny demo pictures, but it was clearly done on a software simulator. Ok well shit, I can raytrace pretty pictures. That doesn't prove anything. Their card? Apparently not real yet, the picture of it is, well, just a raytrace.
So who knows? Maybe they really do have some amazing shit in the pipeline. Doesn't matter though, they've gotta make it real before it matters. nVidia releases pretty pictures too. Difference is the pictures of the cards are of actual cards, and the pictures rendered are done on the actual hardware.
I am just never impressed by sites heavy on the press releases and marketing, and light on the technical details, SDKs, engineering hardware pics, and so on.
one 'real silicon', coming up (Score:3, Informative)
http://www.youtube.com/watch?v=B3qtq27J_rQ [youtube.com]
( no, not a realdoll advert - it's a vid of their current test card being twirled around in a human's hands. then again, maybe they raytraced that )
Re: (Score:1)
Re: (Score:2)
I personally know some of the guys involved (Splutterfish). If they say that's the real card, that's the real card.
The people behind this thing are relatively well-known and respected names in the CG industry. They wouldn't be making these claims if it were a scam.
Re: (Score:3, Interesting)
Re: (Score:1)
nVidia's drivers have done their job for me, ATI's.. not so much.
Sure, theoretically, it's better to stick with the given architecture like ATI's drivers do. I get that. But what good does it do for anyone if it hardly works?
I'd rather replace half of X.org with nVidia's code if it means I get to use all my card's features.
Isn't enabling vendors to do that actually one of the things open source advocates keep preaching about?
Re: (Score:2, Troll)
But what good does it do for anyone if it hardly works?
What's broken about it?
nVidia's drivers have done their job for me...
ATi has *NEVER* had good drivers. They fucking suck at writing drivers. They always have, and -if trends continue- always will.
nVidia's rewrite of the majority of X.org's graphics bits fails 'cause it's an ongoing *massive* duplication of effort. When the X.org folks put bugfixes or enhancements into some component that nVidia has duplicated in their driver, everyone who depends on nVidia's software has to wait and see if nVidia will be arsed to fix their code. When everyone but nVidia
Re: (Score:1)
You feel the future of X.org is threatened by nVidia's policy? In what way?
In the first part of your post you seem to be saying it's the nVidia users who are out of luck whenever new X.org bugfixes/features aren't ported, in which case I don't see how they're holding back the project.
Then you go on saying that nVidia never helps out open source and basically accuse them of being damaging to X.org's health. Which I don't get. As I understand it, it's not X.org which depends on binary-only software, it's nVidi
Re: (Score:2)
First off, what's currently broken about the bits that nVidia has reimplemented?
In which case I don't see how they're holding back the project.
Meh. You're right, I was overreaching. The only thing I have that _remotely_ supports my position is a post by Aaron Seigo:
http://aseigo.blogspot.com/2008/09/on-kde4-performance.html [blogspot.com]
The money quote is here:
This isn't the only issue in x.org, but it sort of highlights one of the big ones: x.org has some pretty big issues when it comes to doing graphics. That's why nVidia includes in their driver a rewrite of pretty much every bit of x.org that touches graphics. This in turn causes havoc of a new variety: does nVidia's twinview map nicely to xrandr/xinerama or does it get screwed up? (Answer: often the latter.) Issues that get addressed in x.org need to also be fixed in the nVidia driver if they exist there too, and vice versa. It's just not pretty.
This is one of the primary reasons why I'm very excited about Gallium3D: it's a modern graphics stack done by graphics gurus that is designed for the real world of hardware. I've seen it in action, and it's impressive.
If I understand correctly, nVidia was (and still is?) pouring a lot of effort into rewriting x.org features, then keeping the improvements to themselves. They could be better citizens and distribute their modifications
Re: (Score:1)
First off, what's currently broken about the bits that nVidia has reimplemented?
I don't know; I don't have any insight into X.org and the issues it has concerning graphics. I've read somewhere that its asynchronous nature is to blame, and that makes sense to me. But it's not something nVidia's drivers fix.
The post you quoted OTOH does mention there are issues that nVidia fixes with their approach.
I'm not pushing the "nVidia is evil" POV. I'm pushing the "Relying on closed-source components is foolish" POV.
Often it really is foolish, and one could be tempted to turn it into a principle. The FSF most likely shares your view.
Sticking to that principle is fine.
I'm just not that much of a black and w
Re: (Score:1)
Yep, sounds like every other Linux project to me.
Announce something cool
Ask the community to donate time
Sit back and watch Linux users bitch about no release.
Re: (Score:3, Funny)
He obviously likes kittens.
Something's not right (Score:1)
Also, a 20% increase isn't much, really. With software simulators of a new architecture, something between a 10-20% increase in speed is
20 percent? try 20 times (Score:4, Informative)
You must have misread the article... it reads "20x", not "20%".
I.e. a 1900% increase. Or however one would put that. 20 times faster.. much easier. Still within the margin of error? :)
( also per the article, they're actually pondering 200x faster down the line. )
Re: (Score:2)
So if a quadcore with current gpu hardware runs at something around 16fps or so that would put us at 35fps, right about the absolute minimum of playability.
Re: (Score:2)
Re: (Score:2)
20x 16fps = 320fps, not 35.
20x performance + 16fps - 1 frame == 35 fps?
Maybe?
Re: (Score:2)
I was really tired and read that last line as getting 20 FRAMES extra.
Re: (Score:1)
You must have misread the article... it reads "20x", not "20%".
Yep, you're right :)
Unanswered questions (Score:4, Insightful)
I shall remain skeptical until more information is forthcoming.
Re: (Score:3, Informative)
performance: 20x speed-up ("from what" is unanswered at this time) to 200x speed-up down the line
limits: limited more by your machine than the card
dynamic scenes: it's an accelerator - if the renderer can, then it still can with this card
sorting (acceleration structure building, I think you mean?): wouldn't know, but seeing as it's supposed to accelerate the ray tracing process, I would imagine it's either on the card or handled by their own algorithms in software
photon mapping/MLT/etc.: it's an accelerator. If t
Re: (Score:1, Funny)
I hate to burst your academic bubble, but MLT has approximately zero use to any production-quality renderer.
MLT (Score:2)
I only mentioned it for the sake of completeness. I've never tried implementing it myself for my own projects, and don't plan to. However, I understand that it converges faster than photon mapping for some scenes lit by light sources that are mostly occluded, like light from underneath the crack of a door. In the photon mapping scenario, few of the photons would contribute to the final image.
Movie studios and the like may not care about this, as they can just manually position their lights so this isn'
Re: (Score:2)
I thought that it was, but I can't find any reference to back that up, so maybe you're right.
Re: (Score:3, Informative)
You're asking a couple of incorrect questions.
This isn't a renderer. This is a render accelerator.
The idea is that Brazil, Mental Ray, Vray and FR can use this to accelerate the existing renderers without any sacrifice of quality or features.
Think of it like SSE3. It's a new instruction set you can use to accelerate your software. It's not a hardware renderer; it's a hardware ray tracer. The distinction is subtle but important in this case.
It should also be noted that Splutterfish (the makers of B
my guess based on prior art (Score:2)
Sounds like a plain old accuracy-vs-time trade-off. For a pixel in a given frame they choose some reflected/refracted rays to follow. They add noise or dither to their ray-selection process, so over time a pixel converges to a nearly correct value. Moving items won't get an exact solution right away, but they're moving, so the viewer won't notice that a shadow isn't quite dark enough immediately, or that something in the mirror got a little jaggy for 3 frames.
In most games, the viewer moves more than objec
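A minimal version of that guessed-at scheme (hypothetical sketch only; trace_jittered_ray() is a made-up stand-in for the renderer's entry point): keep a running average per pixel, jitter each new primary ray, and anything that holds still converges over successive frames:

    import random

    def refine_pixel(accum, count, x, y):
        # Jitter the sample inside the pixel so successive frames probe
        # slightly different rays instead of re-tracing the same one.
        sample = trace_jittered_ray(x + random.random(), y + random.random())
        accum[(x, y)] = accum.get((x, y), 0.0) + sample
        count[(x, y)] = count.get((x, y), 0) + 1
        # Static pixels converge toward the true value; moving content just
        # shows the rough early estimate, which the eye tends to forgive.
        return accum[(x, y)] / count[(x, y)]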
Is it an artificial distinction? (Score:3, Informative)
Re: (Score:1)
I don't see how - surely ray tracing involves shooting out rays for each pixel to see what objects they hit, and then tracing additional rays in turn from that point. Current methods involve drawing the objects directly and seeing which pixels they fill. I don't see how doing the latter with polygons smaller than one pixel makes it like ray tracing, any more than any other per-pixel method such as texture mapping does.
A better example for merging of algorithms would be displacement mapping [wikipedia.org], which ca
Re: (Score:2)
The basic difference, as runnable pseudo-Python (hits(), ray_from() and draw() are stand-ins):

    # Rasterization: outer loop over polygons, inner loop over screen positions.
    for polygon in polygons:
        for pixel in raster_positions:
            if hits(ray_from(pixel), polygon):
                draw(pixel)

    # Ray tracing: outer loop over screen positions, inner loop over polygons.
    for pixel in raster_positions:
        for polygon in polygons:
            if hits(ray_from(pixel), polygon):
                draw(pixel)
Of course this is a huge simplification. Both rasterizers and ray-tracers optimize their inner loops, the f
No video, no pictures. It smells like hoax. (Score:4, Insightful)
For something as ambitious as this, it's very strange that their web site has no demos of their products - absolutely nothing. No pictures, no videos, nothing.
Re: (Score:2, Funny)
http://en.wikipedia.org/wiki/BitBoys_Oy [wikipedia.org]
Zero product, some IP, waiting to get wads of money and run away with it.
So DNF is released soon? (Score:2)
Finally, 3DRealms can release DNF...it will only work with Caustic graphics cards, but it will have the absolutely bestest graphics this side of a Phantom console.
Re: (Score:2)
Running HURD, of course.
Re: (Score:2)
Patent (Score:1)
Re: (Score:1)
"Most patent applications filed on or after November 29, 2000, will be published 18 months after the filing date of the application.... Otherwise, all patent applications are maintained in the strictest confidence until the patent is issued or the application is published."
This means the application is not available to anyone during that period (unless the patent issues earlier, at which point it becomes public).
I try and point this out every time (Score:1)
Now, obviously there are instances where raytracing helps: reflections and refractions can be generated on a per-pixel basis rather than rendering the reflection/refraction as a separate image and stretching/squishing said images in order to produce a similar effect. But having said this, if you render these separate im
Re: (Score:3, Interesting)
To raytrace a soft shadow you have to send out at least 16 rays per shadow calculation, for each light, and even then you're going to suffer from nasty artefacts. Compare that to the raster solution, which involves rendering the z-buffer of any given light source and merely doing some blurring: same quality, much reduced cost.
It seems to me that the algorithmic complexity grows just as fast for both rendering techniques in the case of many light sources: both are linear in the number of lights.
It's all well and good that rasterization is "fast" for what we use it for today. But its growth is linear in the number of primitives, while there are other methods that are sublinear. For a large enough number of primitives the sublinear algorithm must be superior in performance (see the sketch below).
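The crossover is easy to see numerically. A toy model (arbitrary constants, not measurements): let rasterization cost grow linearly with primitive count and a hierarchy-based ray tracer grow with its logarithm, but give the ray tracer a much larger constant factor:

    import math

    C_RASTER, C_RAY = 1.0, 1000.0   # arbitrary per-unit costs

    for n in (10**3, 10**4, 10**5, 10**6, 10**7):
        raster = C_RASTER * n             # linear in primitives
        ray = C_RAY * math.log2(n)        # sublinear via a spatial hierarchy
        winner = "ray tracing" if ray < raster else "rasterization"
        print(f"{n:>10} primitives: {winner} wins")

With these made-up constants the lines cross between 10^4 and 10^5 primitives; where the real crossover sits is an empirical question, but the shape of the argument is just this.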
Re: (Score:2)
But its growth is linear in the number of primitives
Is that much of a problem when the hardware gets faster at exponential rates?
The whole problem I have with all this raytracing buzz is that so far it hasn't produced even a single game or realistic tech demo (no, Quake 4 with shiny spheres added doesn't cut it).
Today's games haven't been plain rasterizers for a long time, thanks to shaders and all the post-processing they allow, yet when raytracing and rasterization are compared, it's always the most basic forms of the algorithms that are compared, and not what is used in
Re: (Score:2)
All other things being equal, doubling the computing power of a raster card will net you double the number of primitives.
All other things being equal, doubling the computing power of a raytracer card will net you the square of the number of primitives.
As soon as these two technologies are on par with each other, rasterization dies on the following hardware generation.
If a raytracer can handle 1 million objects in realtime, then it only takes a 5% computational performance improvem
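The squaring falls straight out of the logarithm: if time per frame is c*log2(n), then doubling the budget solves c*log2(n') = 2*c*log2(n), giving n' = n^2. A quick check with toy numbers:

    import math

    n = 10**6
    budget = math.log2(n)                  # work the card can do per frame
    n_doubled = 2 ** (2 * budget)          # primitives reachable at 2x budget
    print(math.isclose(n_doubled, n**2))   # True: 10^6 objects -> 10^12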
Re: (Score:1)
Re: (Score:1)
Redundant? (Score:2)
Doesn't OpenGL 3 support real time raytracing already if you feed it enough hardware? Or did that not materialize in the final spec?
Re: (Score:3, Funny)
Didn't OpenGL always support real-time raytracing if you threw enough hardware at it? Unfortunately "enough hardware" to render complex scenes in real time has not existed yet.
Can you imagine what kind of 3D modeling rig God has? Somehow I don't think it's based on an ATI or nVidia chipset. ;)
Join the... (Score:2)
...massive list of failed graphics companies trying to do something novel in the last 10 years...
Seriously, can anyone name a single company that has made inroads into the nVidia/ATI duopoly? I can probably name a half dozen who have tried...
Never trust a company that ... (Score:1)
Never trust a company that puts its name into just about every one of its products. That is just lame, and there is no reason the product should not turn out to be just as lame. With an attitude like that, there seems to be a lot of immature pride in that startup. They have probably hit gold in some calculation/algorithm and rushed to announce that it will change the world. The truth is probably much more modest - they do have some technology or IP to offer, but it will require a lot of effort and hard work to make a d
Not fair (Score:2)
I promised this 10 years ago, and where is my press? Pfft.
Science by press conference (Score:1)
I like how there are no demos, screenshots, pictures, etc. Just words.
Haven't we seen this before? Like, we totally discovered cold fusion in 1989. It was announced as true, so it must be!
So this year we'll have fully-raytraced high-def images at 30-60fps. Obviously it'll happen. They told me so.
What once was old is new again (Score:2)
Re:Big deal. (Score:5, Informative)
Juggler was very impressive for the time, but it was "only" real time high-color-depth animation playback (although even the compression method used was probably impressive back then). It was not real-time raytracing. Yes, Amigas were famously one of the first computers that made raytracing possible for home (or even pro movie/TV) users back then, but I remember that rendering a simple raytraced scene (a couple of primitives) in apps like Imagine 3D would have to run for a few hours, if not overnight. That might have been on an Amiga 1200, rather than my older 500, too.
Re: (Score:1)
Re: (Score:2)
Yeah, I hear some people still collect pocket watches ;)
Re: (Score:2)
Yes, the A1200 and A500 systems didn't come with floating-point hardware by default, so they were pretty slow at rendering with things like Imagine and Lightwave.
The A3000 and some A4000s came with FPU hardware; these higher-end Amigas were basically targeted at people wanting to do 3D rendering, and the Amiga had quite a niche in this market for a while.
The speed difference between an A1200 with a 14MHz 68020 and an A4000 with a 50MHz 68060 is pretty massive.
Re: (Score:3, Insightful)
Even my MSX computer did real-time raytracing like a champ, provided that all the pixels were produced from a 2D non-reflective surface at a 90-degree angle. Of course you had a limited color space, but otherwise everything ran just smoothly.
Kidding aside, I suppose it's a question of how far you want to take it. The Amiga and MSX are not interesting anymore for about 90% of the things they did. The one exception is probably playing retro games.
Re: (Score:3, Informative)
Yeah. _Not_ in real time. I admit the article is confusing, but that Amiga anim was not done in real time.
The rendered images were encoded in the Amiga's HAM display mode and then assembled into a single data file using a lossless delta compression scheme similar to the method that would later be adopted as the standard in the Amiga's ANIM file format.
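The delta idea itself is simple to sketch (illustrative Python only - the real ANIM format works per bitplane with its own opcodes, which this ignores): store just the bytes that changed since the previous frame, and patch them back in on playback:

    def delta_encode(prev, cur):
        # Record (offset, new_byte) for every byte that differs; a mostly
        # static background compresses to almost nothing.
        return [(i, b) for i, (a, b) in enumerate(zip(prev, cur)) if a != b]

    def delta_apply(prev, delta):
        # Patch the previous frame to reconstruct the next one.
        buf = bytearray(prev)
        for i, b in delta:
            buf[i] = b
        return bytes(buf)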
Re:cant wait. (Score:5, Funny)
I know! This is totally going to solve the problem of the utter lack of glass spheres and infinite checkerboards in today's games!
Re:I know their secret! (Score:5, Informative)
That's how just about all ray-tracers work. The problem is when you want to avoid aliasing effects. The easiest solution is multi-sampling, but a nice square grid of primary rays per pixel still creates some aliasing; randomizing the directions of these rays using a statistical distribution is one way of improving things. But then, at every reflection and refraction the secondary rays converge and diverge even further, so they will not all hit the same triangle/object/texture, which causes all sorts of texture-caching problems.
This company seems to have found a solution with their "incoherent ray" approach.
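The standard fix alluded to above is stratified ("jittered") supersampling - a sketch, with trace_primary_ray() standing in for the renderer's real entry point: keep the regular grid, but jitter each sample within its own cell so the aliasing pattern loses its structure:

    import random

    def render_pixel(x, y, grid=2):
        total = 0.0
        for i in range(grid):
            for j in range(grid):
                # One jittered sample per grid cell inside the pixel.
                sx = x + (i + random.random()) / grid
                sy = y + (j + random.random()) / grid
                total += trace_primary_ray(sx, sy)
        return total / (grid * grid)

Note that this helps with the aliasing but is exactly what makes the secondary rays incoherent - which is the memory-access-pattern problem the card claims to attack.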
Re: (Score:2, Informative)
Nice try, though.
Re: (Score:1)