The Future of Real-Time Graphics
Bender writes "This article lays out nicely the current state of real-time computer graphics. It explains how movie-class CG will soon be possible on relatively inexpensive consumer graphics cards, considers the new 'datatype rich' graphics chips (R300 and NV30), and provides a context for the debate shaping up over high-level shading languages in OpenGL 2.0 and DirectX 9. Worth reading if you want to know where real-time graphics is heading."
I just want a realtime graphic overlay (Score:1)
So we'll all have to wear portable, gargoyle-style computer rigs...
FP?
Puff
Yesterday (Score:4, Insightful)
Re:Yesterday (Score:5, Funny)
Step 2: Fully computer generated and scripted soap-opera. This may have already happened, and the public would be none the wiser.
Step 3: Fully computer generated, scripted, and supported news broadcast with physical evidence fabricated by "unnamed agencies" as needed. This may have already happened, and the public would be none the wiser.
Step 4: Greetings citizen. Please place your tongue on the screen for clearance level verification.
Just thinking about it makes me ill. (Score:2, Interesting)
It's already at the point where the graphics department in a game house can be larger than the development team.
Re:Just thinking about it makes me ill. (Score:4, Interesting)
I can only speak for consoles, but there have been some interesting developments over the last five years or so...
1. The knowledge prerequisite for engine development has gone up, not down, as was previously thought (hoped?). Some people expected that with the latest generation of h/w (XBox, PS2, GameCube) more programmers would be able to work on the graphics end (XBox because of DirectX; PS2, well, because they didn't know any better). But just as on the previous generation of h/w, although we don't have to do some of the lower-level tasks anymore (s/w rendering, perspective correction, blah blah), more complex tasks are required for the latest games. I think everyone's hoping (again) that this will change in the -next- generation (e.g. send your 3DMax/Maya file straight to the hardware! yeah, right). Maybe. We'll see...
2. The ever-increasing (and always lamented) trend of h/w-shy programmers has (maybe?) kept the graphics engine teams small. It is still very common to have one- or two-man teams building the engine. For example, we have two engine programmers (working on different engines on different platforms) and about 25 on titles. Based on other companies I've been at or seen, this isn't really unusual. If you meant artists by "graphics department", then yes, there is clearly a trend toward having more artists than any other role.
3. Game teams are not oblivious to the severe lack of quality gameplay. Publishers aren't either (really!)
4. Unfortunately, the idea of "game designer" as a profession (outside of a few notable individuals) has historically been ridiculed. It's been ranked with "tester" and "your mom" as far as development teams were/are concerned. However, even though only a small percentage of development houses (still!) recognize "game designer" as a legitimate role, one of the most promising trends has been (perhaps out of necessity?) the steady increase in them. -That- is good news. Basically, more places have someone in charge of "fun."
It'll get better. Probably.
Re:Just thinking about it makes me ill. (Score:2)
How else do you ensure original, innovative gameplay?
You need a team of game designers formulating ideas for how the game should play, and then executing those ideas.
Programmers and artists are just the vehicles by which the designer achieves his vision.
Re:Just thinking about it makes me ill. (Score:3, Interesting)
Re:Just thinking about it makes me ill. (Score:2)
just hope that PS3 has OpenGL 2.0 clean (Score:2)
it does not have to have all the legacy stuff that a full OpenGL 2.0 implementation has to have, just the cleaned-up interfaces
now that would be cool
rendering stuff on my PlayStation!
regards
John Jones
You want HDTV (Score:2)
Until we get higher-res renderings, I'm sorry, but 480 lines of detail is still too little.
Except 480 lines of detail is all an NTSC television can do. (NTSC scans 525 lines 30 times a second, and roughly 45 of those lines are the vblank synchronization signals, closed captions, etc.) If you want more, you're going to have to buy a television with an HDTV tuner. The TV makers are fighting the FCC's mandate to bundle HDTV tuners. Or you can use a computer display with your console, but by then, if you've somehow plugged a VGA monitor into an XBox console, you might as well play PC games. You're just going to have to grin and bear the fact that you're not going to get Miyamoto and HDTV on the same console in the immediate future.
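(For the arithmetic: 525 total scan lines minus roughly 45 lines of vblank leaves about 480 visible lines, hence the usual figure.)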
Besides, what's wrong with 240x160 pixels [nintendo.com] again?
They forgot to mention... (Score:2)
That the comparison to doing Pixar-style rendering in real time might be inaccurate... especially when considering the 1.2 million computer hours. How does the equation change when you consider the resolution used for each animated frame (of Toy Story) versus the resolutions that are common in the gaming arena?
Re:They forgot to mention... (Score:3, Informative)
Not a lot.
Toy Story was rendered at approx 1536 x 922, only 8% more pixels than 1280 x 1024.
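(Checking the numbers: 1536 x 922 is about 1.42 million pixels, while 1280 x 1024 is about 1.31 million, a ratio of roughly 1.08.)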
Re:They forgot to mention... (Score:2)
Re:They forgot to mention... (Score:2)
Wow...I had no idea that the resolution was so low.
I'd like to thank everyone for the awesome responses.
Re:They forgot to mention... (Score:2)
It was 117 Sun SPARCstation 20s and a SPARCserver 1000. You can read more in Sun's press release:
Disney's "Toy Story" Uses More Than 100 Sun Workstations to Render Images for First All-Computer-Based Movie [sun.com]
Besides, people in CG know that Tom Duff is an authority and knows what he is talking about. There are many reasons why realtime CG won't do Toy Story-class rendering in the near future. You have to consider the hype from the graphics card manufacturers. The demos and presentations at SIGGRAPH are neat and impressive, but film CG has a lot of requirements. The same thing was said when the PS2 and GSCube were announced, and we still haven't seen Toy Story-type rendering in real time.
Here are a few more threads from the RenderMan newsgroup regarding the matter:
What does GSCube do? [google.com]
Playstation 2 and "Toy Story" [google.com]
Real-Time RenderMan? [google.com]
PS2 (Score:3, Interesting)
Silly people.
Re:PS2 (Score:2)
Re:PS2 (Score:2)
Sony's graphics are definitely comparable to Hollywood's:
Compared to movies, PS2 graphics are crap.
Erik
Re:PS2 (Score:2)
And until everything my computer generates realtime looks like true organic material there will be this gap between realtime and rendered.
It's all just a crude hack (Score:3, Interesting)
tessellation / LOD (Score:2)
The number of vertices in the object is relative to 'your' distance from the object, in a kind of half-way house between object-based and polygon-based rendering, a bit like mip-mapping with textures.
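A minimal sketch of what such distance-based LOD selection could look like; the halving rule and the numbers are made up for illustration, not taken from any particular engine:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dist(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Halve the vertex budget each time the distance doubles, the same
    // way mip-mapping halves texture resolution per level.
    int lodLevel(const Vec3& camera, const Vec3& object,
                 float baseDist, int maxLevel) {
        float d = dist(camera, object);
        int level = 0;
        while (d > baseDist && level < maxLevel) {
            d *= 0.5f;
            ++level;
        }
        return level; // 0 = full mesh, maxLevel = coarsest hull
    }

    int main() {
        Vec3 cam{0, 0, 0}, obj{0, 0, 40};
        std::printf("LOD level: %d\n", lodLevel(cam, obj, 10.0f, 4)); // prints 2
    }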
OpenGL 2.0 versus DirectX9 (Score:5, Insightful)
DirectX9 is really about games: render fewer polygons on the fly, and it only works with one operating system (any guesses which one?).
As games and movies start to approach each other in graphical abilities, I wonder if OpenGL will become more important as the Unix graphics programmers start getting pulled from their Toy Story 3 seats to help with the guys making Toy Story 3: The Game.
Right now, the #1 reason why OpenGL is still in a good number of Windows machines is John Carmack. Will things change? Maybe, maybe not. But I still wonder about the future.
how about OPEN GL Interface? (Score:2)
Dont you think we need openGL interfaces by now?
Re:how about OPEN GL Interface? (Score:2)
Re:OpenGL 2.0 versus DirectX9 (Score:4, Informative)
Actually, SGI is the standard and has been for a while. Linux is becoming one, and OSX probably never will be. And yes, these machines do crash a lot. Speaking from experience as one of the sysadmins of a render farm at an effects house: our 8- and 16-processor Origin 2000s went down a lot. Not because the OS was messed up or we were bad admins, but because we pushed them really, really hard. I used to replace disks in arrays and on systems at least once a week. The workstations used to puke all the time. Why? NFS, SAN, memory, that crazy Maya script someone wrote, etc. etc. Any number of reasons.
Unix is great and it gets the job done better overall, but keep in mind how hard these guys/gals are pushing this equipment. Linux on a pumped x86 is the next workstation of the movie industry as they look to cut costs. There are a lot of reasons that film studios and FX houses are looking at Linux, cost probably being the most important. Keeping down the per-shot/frame cost of matte painting and animation is really the key for the accountants and producers at these places.
Remember, most CG work is stuff you don't notice in the movies: color correction, background things like snow or rain, lighting effects, etc. Stuff like LotR, though more common these days, isn't the bread and butter of FX houses. It's the utility work that gets these guys through the times between the big projects.
Side note: I don't dislike OSX, but to say it's going to be the "film" standard is probably not correct. OSX with Maya will find its way into ad agencies and boutique design houses that are very Mac-centric, but the hardware cost is high overall and the software selection is poor. Many might argue with me, but the big FX houses are heavily invested in SGI hardware right now and are starting to put in Linux-based hardware to take advantage of the costs.
-s
Re:OpenGL 2.0 versus DirectX9 (Score:2, Insightful)
No one wants to be the company that is mentioned in a review like this:
"Well, we tried to test this machine against Quake3. Sadly, it only supports DirectX 9 with the default setup. I suppose you'll have to buy this rig plus a graphics card if you're a gamer..."
It's marketing suicide. And we all know that marketing must have the features they want, even if rival specifications have more technical merit.
Really? (Score:2)
Tell me one feature in DirectX 8.1 that isn't supported in the nVidia OpenGL 1.3 drivers.
None? That's because OpenGL was designed to be stable but extensible. It's sort of like the x86 instruction set, which still contains BCD instructions from, what, the 4004?
They haven't had to go to a version 2.0 because it's easy to retain backward compatibility yet keep up with the newest cards via the extension mechanism. Microsoft started with such a brain-dead design that they have needed several complete rewrites. Remember, OpenGL went through all those early years as IRIS GL, a closed beasty like DirectX.
Version 2.0 depends on hardware that does not yet exist, requiring if-then-elses to be allowed in fragment programs. It includes a compiler for the vertex and fragment programs, which have equally powerful instruction sets. DirectX exists because Microsoft marketed heavily to game writers at the right time and simplified other things like joystick and sound control. So unless you have your own in-house libraries for those, you pretty much have to use DirectX for a Windows game, whether you use it for graphics or not.
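For reference, the extension mechanism mentioned above boils down to a string query: the driver advertises a space-separated list of extension names, and the application checks for one before using the feature. A rough sketch (assuming a current GL context; GL_ARB_multitexture is just an example name):

    #include <GL/gl.h>
    #include <cstring>

    // True if the driver's extension string contains `name` as a whole word.
    bool hasExtension(const char* name) {
        const char* ext =
            reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        if (!ext || !name || !*name) return false;
        const std::size_t len = std::strlen(name);
        for (const char* p = ext; (p = std::strstr(p, name)); p += len) {
            bool startsWord = (p == ext) || (p[-1] == ' ');
            bool endsWord = (p[len] == ' ') || (p[len] == '\0');
            if (startsWord && endsWord) return true;
        }
        return false;
    }

    // Usage: if (hasExtension("GL_ARB_multitexture")) { /* use it */ }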
Before you all get excited.... (Score:4, Insightful)
But this is all going to be great fun for gaming, VR, simulation, and so on.
Bruce
Re:Before you all get excited.... (Score:2)
I usually think about a related area, music.
There's a big tie between music and mathematics.
And yet the free music scene still seems empty.
Hrm...
Perhaps there just needs to be some reason to want to create.
This said by a guy who left violin after playing 18 years, to go do Computer Science. You could argue that I saw I didn't have what it took to make it, but I wonder how many others there are like me.
Re:Before you all get excited.... (Score:2)
It is sad as well to think that most of the music community is "half illiterate". They can read, but can't write. How many instrument performance people actively work on honing their composition skills?
At least these skills could help them adjust to automated music some.
Re:Before you all get excited.... (Score:2)
Re:Before you all get excited.... (Score:2)
You took the words out of my mouth. No amount of increase in rendering speed (and it's only a matter of time before a home PC can do high-quality rendering in a sensible timeframe) will make you a movie. To do that you need both sufficient modelling skills to make your generated images and characters look good, and the ability to write a decent script. Neither of those is much affected by Moore's law (although tools to assist with modelling may make some progress here).
the other side of the coin (Score:4, Interesting)
Or play a realtime version of Final Fantasy: The Spirits Within, but walk around the "set" in realtime with the characters or just keep the camera focused on Aki's bizznoobies.
Re:the other side of the coin (Score:2)
I think that's never going to happen. Even when realtime graphics cards are capable of producing trillions of polygons per frame and are able to accurately render Aki and all the other digital characters in realtime, it takes a lot more than one render to create every frame in a movie like Final Fantasy. It takes painstaking compositing and color correcting of hundreds of layers to create the final image that you see in the theatre, and I think it will be way too complicated and time consuming to make an image look good not only in one frame or sequence but in a full 3D environment that the viewer can rotate and "walk" around in. The only way I see that we will be able to walk around in photorealistic digital sets with unrestrained motion is when the graphics processor can handle real-time raytracing as well as global illumination, subsurface light scattering, and on top of that act exactly like a real camera lens, or a real eye, which is what compositors try to "fake" and imitate on every digital frame. I don't see that for a really long time, and I doubt it will even be worthwhile, since I don't pay to see the movie that I can dream up, but to see the director's vision of the story. It's the difference between a Hemingway novel and a choose-your-own-adventure.
Re:Before you all get excited.... (Score:2)
(Or, they spent so much on equipment that they have no promotional budget left.)
What Hollywood Really Fears ... (Score:5, Interesting)
Suddenly we don't need studios, we don't need actors, and we don't need tens or hundreds of millions to produce a blockbuster movie. And with the internet to distribute the material on, we don't need their distribution network of cinemas either.
The most important talent they rely on is not skill in computer imagery, but skill in telling a compelling story using all of the tools of the visual idiom. This is what most people don't have, and it is an essential element to producing good film.
Like musicians using home studios to record music: without talent this will go largely unused, or, more likely, there will be a lot of less-than-good material out there.
Musicians really don't need million dollar studios anymore to produce an album, and while this means a lot of junk is pressed onto CD, it also means a lot of musicians are able to produce and market their music outside of the RIAA's cartel, through mp3.com and elsewhere. Hollywood doesn't fear the napstersization of their medium nearly as much as they fear the mp3.com-ization of it, and competition with a few thousand talented people not on their payroll.
This, I think, is why we are experiencing such an onslaught of attempts, through legislation and back-door regulation via the FCC [fcc.gov] and a little-known "standards" body called the BPDG, to take both the internet and general computers out of the hands of private citizens.
It isn't about 'piracy,' it is about competition, and they don't fear competition from 'everybody' so much as they fear general access to the tools, which means those talented persons not a part of the cartels would be able to compete for viewership and marketshare on a level playing field with the big studios.
And that is something they simply cannot abide.
correction (Score:2)
Even if only 1 in 100,000 has the story telling talent to put together a good film, that would amount to 2,800 potent competitors to the media cartels.
should have read:
Even if only 1 in 100,000 has the story telling talent to put together a good film, that would amount to 2,800 potent competitors to the media cartels in the U.S. alone.
Re:What Hollywood Really Fears ... (Score:2)
Unfortunately, MP3.com hasn't produced any great victories over the RIAA. Like it or not, people are generally too cheap to pay for anything that they can get for free. If you don't believe me, look back on all of the "hell no I won't pay..." posts that were posted right here on Slashdot.
The sad fact is that most people are too self-centered and short-sighted to pay money unless forced to do so.
Re:What Hollywood Really Fears ... (Score:2)
And yet, I own several CDs of artists I discovered, and purchased, through mp3.com.
Even more interesting, the Free Blender Campaign just passed the 80,000 mark, or 80% of the amount needed to purchase and GPL the source code from the Blender holding company.
Clearly people are willing to pay when they see a benefit, indeed even when they can get things for free. With Mandrake, many didn't see a benefit (though even there, many others did).
The sad fact is that most people are too self-centered and short-sighted to pay money unless forced to do so.
There are other solutions. The fundamental design of the internet, for example, shares the cost of propagating information between the sender and the receiver. In other words, the basic design of TCP/IP is P2P in nature. Unfortunately, the HTTP protocol was designed in a client-server manner, placing the bulk of the cost on the provider and making the cost not scale gracefully as demand rises. Ditto for FTP.
Contrast that with SMTP and even NNTP, as well as FreeNet. The "Free Speech" aspect of the internet depends in no small part on the "Free Beer" aspect of the internet, or, more correctly, on the balance of cost shared between sender and receiver (i.e. if "free speech" is expensive, only the wealthy will have freedom of speech, and the value of that freedom will become negligible to the common person).
Once FreeNet, or another P2P application level infrastructure is in place, with solid search capabilities and HTML-like facilities, we may be able to return to a state of affairs where costs are shared naturally, and popular sites like slashdot are no longer incredibly expensive to run because bandwidth costs are shared and distributed across the entire network, among all those who read the content equally.
This is why P2P is so important, and must be preserved from the depredations of the Copyright Cartels. Not for inane, juvenile file trading, but to fix the bottlenecks of the internet and to keep the medium free and accessible for all to use, regardless of wealth.
Re:What Hollywood Really Fears ... (Score:2)
And you are the exception. I and all of my friends own none.
"Even more interesting, the Free Blender Campaign just passed the 80,000 mark, or 80% of the amount needed to purchase and GPL the source code from the Blender holding company."
Interesting that you mention Free Blender. Blender is the 3D rendering program that was owned by a company that went bankrupt due to the very attitudes I talked about. I hope that Blender does go GPL so that we don't lose this fine product, but it's not been a great success proving that voluntary contributions work. Conversely, it is a good example showing that what I have said is true.
"Clearly people are willing to pay, when they see a benefit, indeed, even when they can get things for free. With Mandrake, many didn't see a benefit (though even there, many others did)."
No, it's not "clear" at all. What is clear is that most are unwilling to pay. If you read the posts that I was talking about, you'll see that "seeing the benefit" had nothing to do with not paying. People just didn't care to pay. More than one poster said something to the effect of "I use Mandrake but I won't pay for it as long as I can get it for free."
Obviously they found Mandrake useful; the fact that they use it proves that. But they had the mentality of bloodsucking leeches.
"There are other solutions. The fundamental design of the internet, for example, shares the cost of propogating information between the sender and the receiver. In other words, the basic design of TCP/IP is P2P in nature. Unfortunately, the HTTP protocol was designed in a client-server manner, placing the bulk of the cost on the providor and making the cost not scale gracefully as demand rises."
I work in a small shop where I am a programming/network specialist so I understand all about TCP/IP, P2P and HTTP. When I was in college they told us that there were two ways to get through school. 1. Dazzle them with data or 2. Buffalo them with bullshit.
No offense, but your statement didn't dazzle me. It's rather a non-statement. Yes, HTTP is used by web servers (Apache, IIS, etc.) to communicate with web clients (Mozilla, Netscape, etc.), but it is the program itself that determines if it is a P2P application, not the protocol that it uses. You could, for example, create a P2P program that used HTTP to communicate. Not very efficient, but it would still be P2P.
As far as cost sharing goes, I don't think that your statement makes any sense. Sure, it costs companies money to do business on the internet: the cost of the physical servers, the software (if they are using Microsoft, anyway), and the cost of system administrators and employees to provide content. But what is the alternative, and why would it be better? The fact that companies spend a significant amount of money to do business on the internet does not affect me unless they pass the expense on to me. However, I have found that the internet makes shopping around for the best buy very easy, thus driving down prices. Would you suggest that we go exclusively P2P? That may be fine for MP3s (if you ignore the obvious problems of having the artists paid for their efforts), but what about products that simply cannot be transmitted over the internet? Amazon.com, for example, sells books etc. Web servers/clients meet my needs quite nicely in these cases, thank you very much.
What do you mean by your statement that costs don't "scale gracefully as demand rises?" The statement does not make sense. Yes, if a company needs greater bandwidth it will cost them more. So what? Again, P2P is not going to get me a new VCR from the electronics store of my choice. Web server/client technology is well suited for the task.
"Contrast that with SMTP and even NNTP, as well as FreeNet. The "Free Speech" aspect of the internet depends in no small part on the "Free Beer" aspect of the internet, or, more correctly, on the balance of cost shared between sender and receiver (i.e. if "free speech" is expensive, only the wealthy will have freedom of speech, and the value of that freedom will become negligable to the common person)."
You're going way off base with these statements. What does a mail protocol have to do with P2P? It is squarely in the realm of client-server technology, as are news servers. Free speech is not expensive, and I don't think it would be any cheaper if we adopted your suggestions. Although I'm unclear on exactly what it is you are suggesting.
"Once FreeNet, or another P2P application level infrastructure is in place, with solid search capabilities and HTML-like facilities, we may be able to return to a state of affairs where costs are shared naturally, and popular sites like slashdot are no longer incredibly expensive to run because bandwidth costs are shared and distributed across the entire network, among all those who read the content equally."
Popular sites like Slashdot wouldn't exist in a totally P2P world, and the cost to the individual would rise.
Remember, the costs don't disappear, they get redistributed. If companies like Slashdot and Amazon.com weren't paying the share that they now pay, the lost revenues would have to be made up by you and me.
"This is why P2P is so important, and must be preserved from the depradations of the Copyright Cartels. Not for inane, juvinile file trading, but to fix the bottlenecks of the internet and to keep the medium free and accessible for all to use, regardless of wealth."
P2P has its place, as do the client/server technologies, and no doubt the copyright holders don't like having their IP stolen. But you haven't convinced me of, or even communicated to me, a better system than we have now.
Re:Before you all get excited.... (Score:2)
Bruce Perens wisely said:
Yes, talent is required. But consider this: there are lots of talented people out there in the performing arts -- writers, actors, dancers, directors, etc. Dropping the cost of means of production will allow people with such talents to create productions without going through the current production model. Making it easier to do things with computers will have a similar effect. That's where the real impact will be. Not with geeks with swelled heads.
Re:Before you all get excited.... (Score:2)
That doesn't mean everybody will have the talent to make excellent movies, but it may expand the pool of people able to do it by several orders of magnitude.
Besides, many commercial movies are apparently made by people who don't have a clue how to tell a good story.
Re:Before you all get excited.... (Score:2)
And I realize that what I and other people can do is at most make "Phantom Edits" of current movies. Making a complete movie is quite another issue. But the state of the art in storytelling these days is not really what I'd classify as impressive.
I would think that the hardest part would be modelling and animation. There are quite a lot of good stories made by amateurs. (That is, short stories on paper.)
Re:Before you all get excited.... (Score:2)
The Error in Bruce's Assumption (Score:2)
The error in Bruce's assumption is the notion that everyone who has talent, or even those most talented, are already working for the studios. In other words, that our society is already benefiting from all of the talent out there through the existing media cartels, and that this tool therefore isn't going to add anything of significance to our culture, at least in the area of film making.
This simply isn't true. For every artist who claws their way into the cartel through talent, dumb luck, or, most often, connections with those already on the inside, there are hundreds, perhaps thousands, of equally talented people who never make it and are never heard or seen.
Making these tools generally available won't mean everyone is suddenly a Gilliam or a Spielberg, but it will mean that many of those thousands of talented people whose work we never see (indeed, whose work most often is never created) will be seen, will be available, and will be able to compete against the offerings of the studios themselves, a significant portion of which, I might point out, suck as badly as any amateur material I've seen.
Who cares if this means a million people produce crap I'd never want to see? If it means 10 people (or, more likely, a couple of thousand) produce good, interesting, innovative material, then our society and our culture have experienced a windfall in artistic work.
There is also commercial opportunity there (even if all the artists were to release their stuff under Free Licenses of one sort or another, something which I suspect some would do but many would not), for someone to review such works and help those interested find the wheat among the chaff.
This assumes, of course, that the common person is allowed to have an unfettered, general purpose computer or a bidirectional internet connection, something which those very same cartels are actively trying to prevent.
no you are (Score:2)
I am a dyslexic [google.com]
but since I cant spell I put how I would phonetically spell dyslexic and that would be deltic sorry if you might have to think for that one
yes, he was talking about script and so on, but the story wasn't about that; it was about rendering power, and Pixar saying that all the UltraSPARCs they have won't be overtaken by a graphics card
on top of this, it's never wise to say never
it's like saying that monkeys will never reproduce the works of Shakespeare (-;
regards
john jones
This makes amateur 3d movies more possible (Score:3, Interesting)
With the internet and a DVD Burner, a low budget film could be distributed DIRECTLY on DVD. Now the films just need to get good enough that people will want them. This would be a good direction for both music and movies.
Cool eye picture. How the heck do you make a model for that?
Brian
To paraphrase Jim Blinn (Score:3, Insightful)
I mean, come on people, it's apples and oranges here. Two similar tools for two VERY different purposes. Rendering 80 FPS at 1600x1200 makes good games, but I doubt there will ever be a day when film frames are rendered in real-time. There's just no reason to!
That's not to say that yesterday's movies couldn't be rendered on today's technology. Absolutely! But tomorrow's movies are a different story.
Re:To paraphrase Jim Blinn (Score:2)
Reason 1: To play games with film-quality graphics (or near enough as makes no difference).
Reason 2: Great looking real-time architectural walkthroughs, CAD renderings and so on.
Reason 3: To render 100 or more times as many frames per unit time in the film render farm.
All of those reasons seem pretty compelling to me!
I suspect you'll be changing your tune when you see what the R300 and NV30 can do... :-)
Re:To paraphrase Jim Blinn (Score:2)
The animators may render at reduced resolution in real time for the reasons you mentioned, but if it renders in real time, they'll add more effects/higher resolution/sub-pixel tessellation/whatever, just because they can, for that tiny little bump in quality.
Right now the biggest barrier is faces: CG faces (plus the hair) are a heck of a lot better than they used to be, but they're years away from photographic quality when animated.
Bryan
what then? (Score:2)
I see Pixar's point that by the time PC graphics catch up to now, cinematic CG will be further down the road. But the gap is getting smaller. I would be really impressed if a desktop could make use of something like MASSIVE [sgi.com] in real time during a game. Good looks will look better when there is more intelligent behaviour of bots in games. Someday it will be possible, but by that time movies will be even prettier.
Or maybe I'll finally be able to game at full speed without worrying about ruining the CDs I'm burning.
Compression algorithm at its best (Score:2, Informative)
The problem with this is that a company will never believe that such a miniaturization rate is possible.
They prefer to think of that as "silly"
("silly" == "you are trying to rip them off")
The proper way to do this is what most companies that make RAID servers and other computers with large hardware arrays are doing:
1: Build large motherboard with low circuit density.
2: Build case roughly the size of a small wardrobe.
3: Attach motherboard inside case, and fit parts to motherboard.
4: Put lots of flashy lights on case; contemplate adding a machine which goes 'bing'.
5: Use lots of external connections, so that there is lots of assembly required and there are lots of bells, whistles, cords, and dongles hanging from the case after assembly.
6: Next year, make the case about 2" smaller in each direction and raise the price about $250 per unit.
You think I'm kidding - our school's grade server has less than half the volume of the one that was replaced, and they're not that different in performance.
Remember - size may not matter if you know how to use it, but it may stop you from getting the chance to use it in the first place.
Finally, my computer will make "graphics" (Score:2, Funny)
TRON.
I can't wait.
Sure, but what are we losing (Score:2, Interesting)
And then we'll all drive spaceships to work (Score:2, Insightful)
Re:And then we'll all drive spaceships to work (Score:2, Insightful)
Now, I don't say you'll get to that point in the next few years, but I think eventually this point might be reached.
Realtime raytracing is the future (Score:5, Informative)
Basically, the explanation is that rasterization takes time proportional to the number of polygons to render a frame, while raytracing takes time proportional to the log of the number of polygons. That might make you think raytracing should always be faster, which it clearly isn't; the reason is that the constant factor in each is very different. So you have a*N vs b*log(N), where b is much bigger than a. As N gets bigger (apparently in the neighborhood of 10 million polygons), the difference between a and b becomes less important than the difference between N and log(N).
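A toy calculation of that crossover; the constants here are invented purely for illustration (chosen to land near the 10-million-polygon neighborhood mentioned above):

    #include <cmath>
    #include <cstdio>

    int main() {
        double a = 1.0;      // hypothetical per-polygon rasterization cost
        double b = 500000.0; // hypothetical raytracing constant, much larger
        double n = 1.0;
        // Step N up geometrically until a*N exceeds b*log(N).
        while (a * n <= b * std::log2(n + 1.0)) n *= 1.01;
        std::printf("raytracing wins beyond roughly %.0f polygons\n", n);
    }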
The main benefits of raytracing over rasterizing are that it is very easy to get things like reflections, shadows, refraction, and other important effects with raytracing, while with rasterizing you need a lot of complicated and CPU-intensive hacks to get the same effects. Another benefit is that raytracing is parallelizable while rasterization generally isn't. That means that if you have a raytracing accelerator card in your PC, you can nearly double the frame rate or resolution by adding a second raytracing card.
Of course, it's all a chicken-and-egg sort of thing: nobody's going to buy a raytracing card until it's a cheap way to do the rendering they want, and it won't be available unless there is a market. Fortunately, there is research [uiuc.edu] into using the next generation of rasterizing graphics cards to greatly speed up raytracing. This will help bridge the gap, by making raytraced games possible using soon-to-be-existing hardware.
Re:Realtime raytracing is the future (Score:2)
I don't know if that has any application to realtime raytracing for games, but it's neat anyhow.
Re:Realtime raytracing is the future (Score:5, Insightful)
Not true... in fact, the opposite is the truth. Rasterization - which I assume is used here to refer to zbuffering - is very highly parallelizable, while raytracing is only moderately so. The fundamental reason for this is that zbuffering needs only serial access to the database of polygons, while raytracing needs unpredictable access to the polygons depending on where rays are reflected and where lights cast shadows. This means that raytracing is subject to a high degree of unpredictable memory needs and disk accesses, whereas zbuffering can be nicely predicted, pipelined, and parallelized. This is why hardware implementations of zbuffering are a dime a dozen, while the only hardware implementations I've seen of raytracing parallelize only one very tiny portion of the rendering pipeline and have all kinds of problems that as yet are not addressed sufficiently for practical needs.
Re:Realtime raytracing is the future (Score:2)
Except for that little problem with alpha-blending...
Re:Realtime raytracing is the future (Score:2, Insightful)
Raytracing is quite parallelizable in a macroscopic sense (shared source data): each pixel carries no dependencies on any other. But one problem is the need for a large cache of intermediate data while processing a single pixel, which penalizes multiple rendering paths; each would require as much memory. I imagine a raytracing chip will not evaluate the pixels in the image but rather speed up common operations (color mixing, vector operations, ray intersections, texture calculations, etc.).
Finally, raytracing opens up new avenues for modelling and realizing objects as polygon counts skyrocket: it allows one to render non-triangular objects whose intersections and normals are easily formulated, for example quadrics, planes, spheres, and simple NURBS.
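To illustrate how cheaply such primitives intersect, here is the standard textbook ray-sphere test (a generic formulation, not from any particular renderer); the hit normal then falls out as (hit point - center) / radius:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Nearest positive ray parameter t, or -1 on a miss.
    // Assumes the ray direction d is normalized.
    double raySphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
        Vec3 oc = sub(o, center);
        double b = 2.0 * dot(oc, d);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return -1.0;             // ray misses entirely
        double t = (-b - std::sqrt(disc)) / 2.0; // nearer root first
        if (t > 0.0) return t;
        t = (-b + std::sqrt(disc)) / 2.0;        // ray starts inside?
        return t > 0.0 ? t : -1.0;
    }

    int main() {
        // Unit sphere at z = 5, ray down the z axis: hits at t = 4.
        std::printf("t = %g\n", raySphere({0,0,0}, {0,0,1}, {0,0,5}, 1.0));
    }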
Re:Realtime raytracing is the future (Score:3, Informative)
The gentleman went on to describe Pixar's work on a new renderer which (if I heard correctly) goes to the extreme of keeping no scene data permanently in RAM, and just streams primitives in and splats them to the screen.
So, while most rasterizers are indeed O(n) in the number of geometric primitives, a raytracer or other retained-mode renderer could be "O(n + log(n))", if you count the I/O required to read the scene and build the in-memory representation!
Of course I've conveniently ignored the fact that RenderMan and its siblings cannot handle any sort of non-local lighting computations (true reflections, shadows, ambient illumination, etc.). But all of the high-end 3D studios seem to prefer "faking" these effects (with shadow/reflection maps or separate "ambient occlusion" passes) rather than taking the extreme speed penalty of a retained-mode renderer.
(incidentally the same sort of issue came up at Digital Domain's presentation on their voxel renderer - although they started out using volume ray-marching, the render times got so long they switched to voxel splatting, and used various tricks to make up for the lower image quality)
Re:Realtime raytracing is the future (Score:2)
Question about real time PovRAY (Score:2)
What is most annoying is having to specify an exact position and camera angle, wait a few hours, and see that it wasn't quite what I wanted; edit, repeat. A complete generation of all collected scenes takes a couple of months on a dual Athlon.
What I would like is to "move" in real time through the house plans. Obviously games can do something like that. Freshmeat shows a ton of projects, too many to tell me which are useful and good.
What most seem to lack is a language like POV-Ray has, where I can move a wall and the stairs code recalculates how many steps come before and after the landing for a stairwell. Point and click, drag and drop, neither interests me, because moving a wall this way would be too tedious and prone to error, when the steps adjustment requires moving a door, appliances on the other side, etc.
Any recommendations? Conversion from, or using directly, POV-Ray sources would be best. It has to have some kind of specification language, even if I have to write my own POV-Ray converter or front end. Fine detail is not necessary. As long as I can move the sun around, and it shows walls and windows and doors, that's good enough. Even one frame a minute would be acceptable, but several frames a second would be wonderful.
Re:Question about real time PovRAY (Score:2)
But this is something to keep in mind. Even if not radiosity, it does seem that it ought to be possible to calculate a lot of the data for fixed lighting position and level, and move around with little recalculation. A good thing to bear in mind, and I thank you.
What realtime can do, brute force can do better. (Score:2, Interesting)
Don't expect to see nVidia- or ATI-rendered imagery on the big screen anytime soon. Yes, graphics cards are advancing in ability and speed. However, this is largely irrelevant to the world of CGI. No matter how good it looks on a monitor, it's not good enough for film.
Yes, I'm sure a Toy Story 2 quality film could be rendered on one of these cards. However, by the time these cards come out, TS2 will be nearly 3 years old.
The vision of the artists themselves is what drives CGI, and no matter how good the real-time solution is, the brute-force computational solution will be better - simply because it doesn't have to be real-time.
Fast graphics cards definitely make animators' lives easier. We (a medium-sized visual effects studio) recently switched from SGI Octanes to IntelliStations with ATI FireGL 8800s running Linux. Being able to tumble fully-shaded medium-res characters in real time is sweet. But it's not good enough, even for TV. And by the time a card exists that IS good enough, the bar will be much higher.
This is what Pixar's Photorealistic RenderMan does RIGHT NOW. link [pixar.com]
Now show me that in real-time and I'll marry your ugly Aunt Hilda.
ugh (Score:5, Insightful)
This is a bunch of crap. By the time your PlayStation 6 or GeForce 7 or whatever the hell it's going to be gets to a point where it can run enough cycles to achieve Toy Story-quality pictures in real time (which is still years off), the bar will be raised again for CGI.
Just as Moore's law doubles computing power, the technology of rendered CGI doubles too. Think back to when Cray supercomputers rendered frames and took about an hour a frame for untextured geometry with little of the properties that are available today. Today, the images still take an hour a frame, even though the technology is billions of times faster. Why is that? Because CGI artists will continually pump in as much as they can per frame. If it took 20 minutes last year, it's going to take 20 minutes this year, because studio X is going to add some new thing that improves quality but still retains their time margin.
Do you honestly think that GPUs are going to be able to achieve real-time radiosity in the next couple of years? Real-time raytracing like renderers have now? Hundreds of thousands of blades of grass with no tricks? Individual hairs? Do you think that will happen anytime soon? Perhaps sometime - but when it does, pre-rendered images will feature something new that real time can't match. Face it: real-time graphics will never replace the quality of pre-rendered.
Short answer: yes (Score:5, Informative)
I used to think as you do. That was before I got a large amount of education while attending Siggraph this year.
At Siggraph, I saw a live demonstration of a real-time raytracer that was also computing a diffuse interreflection solution (radiosity-like, for those who don't understand) on the fly. I also saw a real-time recursive raytracer written by Stanford researchers that was implemented in a GPU's pixel shader. There is currently research on displacement map "textures" that could conceivably let you render thousands of blades of grass or individual hairs without having to send all of that geometry down the AGP bus.
All of these things blow the doors off what people think a graphics chip can do. Your post would have been accurate last year. Not now.
I will agree with one point: software-based rendering will always be able to compute certain effects that will be difficult or cumbersome to do in a GPU. But I'll also claim that the gap is dramatically shrinking.
I'll also say that the two techniques are not really in conflict. You can use them both in conjunction with each other. You can use a hardware-accelerated Z-buffer to help a raytracer determine first-level occlusion. You can use a raytracer to generate textures and maps for a GPU. In the future, we will see both techniques used to complement each other.
Re:Short answer: yes (Score:2)
No. That should be able to be done with normal environment mapping (if I'm understanding you correctly).
I meant a lot of the demos that I saw in the "When Will Ray-Tracing Replace Rasterization" panel on Tuesday. The first presenter (Philipp Slusallek from the Universitat des Saarlandes) was showing an app that raytraced a conference room in real time. It would progressively recompute the diffuse interreflection solution whenever you changed the position of the chairs and such. It was also running interactively on the show floor at RackSavers.
The GPU-based raytracer was described in the "Graphics Hardware" talks on Friday. The authors were Tim Purcell and Ian Buck from Stanford.
I do agree with you that we're at the "baby step" stage. These are very small things - very specific to show something special.
But it's been shown that a GPU can compute a general RenderMan shader, though it may take hundreds of passes. The advent of the vertex and pixel processors makes the gap a lot smaller than most people think. And it's getting smaller every day.
Re:ugh (Score:2)
I.e., it used to be 1 hour per frame. In the future it might be several seconds per frame. But it won't be 25 per second; not until we can do physical modelling of the human body at just a bit above the cell level in real time will Hollywood agree that real time is good enough.
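(For scale: an hour per frame versus 1/25 of a second per frame is a factor of 3600 / 0.04 = 90,000.)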
Bryan
As a CG person... (Score:5, Insightful)
While it isn't addressed in the article, there are a LOT of things that need to be handled in the hardware that just aren't there.
A perfect rendering system is nearly infinitely recursive, requires infinitely detailed models, and takes a long time to render. We can't do the infinite perfect system, but we can tell our artists to let it run for about an hour per frame. That means 'no realtime top-of-the-line movies', no matter what.
frob.
These problems are fixable (Score:5, Informative)
You only need all that detail for nearby objects, which is what subdivision surfaces and level of detail processing are for. With procedural shaders and bump mapping, you don't need that much for most surfaces. The detail may be there in the model, but only a small fraction of it needs to go through the graphics pipeline for any given viewpoint. Given that pixel size is finite and human vision has finite resolution, at some point you max out.
For fixed lighting, you can do radiosity in realtime. (Check out Lightscape, now called 3D Viz.) The radiosity map only has to be recomputed when the lights move. Mostly this is used for architectural fly-throughs. Of course, Myst and Riven are basically architectural fly-throughs. (They're rendered with multiple hand-placed lights in Softimage, though; when they were made, the radiosity renderers couldn't handle a scene that big.)
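The caching pattern being described is straightforward; here is a hypothetical sketch (the types are stand-ins, not Lightscape's or anyone else's API):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Light { float x, y, z, intensity; };
    struct RadiosityMap { /* per-surface incident light would live here */ };

    struct Scene {
        std::vector<Light> lights;
        RadiosityMap cached;
        std::vector<Light> cachedFor;

        // Recompute only when a light moved or changed brightness; for a
        // fixed-lighting walkthrough this runs once, then every frame is free.
        const RadiosityMap& radiosity() {
            if (lightsChanged()) {
                cached = solve(); // the expensive global solution
                cachedFor = lights;
            }
            return cached;
        }

        bool lightsChanged() const {
            if (lights.size() != cachedFor.size()) return true;
            for (std::size_t i = 0; i < lights.size(); ++i) {
                const Light &a = lights[i], &b = cachedFor[i];
                if (a.x != b.x || a.y != b.y || a.z != b.z ||
                    a.intensity != b.intensity) return true;
            }
            return false;
        }

        RadiosityMap solve() { std::puts("recomputing radiosity"); return {}; }
    };

    int main() {
        Scene s;
        s.lights.push_back({0.0f, 10.0f, 0.0f, 1.0f});
        s.radiosity();        // computes once
        s.radiosity();        // reuses the cache
        s.lights[0].x = 5.0f; // a light moved...
        s.radiosity();        // ...so it recomputes
    }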
I tend to agree, but at some level of detail, you can render geometry into a texture (maybe with a bump map) and use that until you get really close. Microsoft prototyped a system for doing this automatically a few years back, called Talisman. Talisman was a flop as a graphics card architecture, but as a component of a level of detail system, it has potential.
Moving around in a big virtual world is going to require massive data movement. But we're getting there. This may be the driver that makes 64-bit machines mainstream. Games may need more than 4GB of RAM.
Rendering isn't the problem, anyway. Physics, motion generation, and AI are the next frontiers in realism.
Re:John, shut up (Score:2)
Re:As a CG person... (Score:2)
What scene requires 1000 lights? Rendering Spider-Man swinging from a Manhattan building with every light through the windows accurately rendered? Please.
Movies aren't perfect either; see the foam columns in the Matrix lobby. Rendering movie quality in real time isn't about rendering a scene perfectly, it's about making our eyes think it's perfect. So in ten years, assuming Texas Instruments doesn't have a gargantuan breakthrough allowing for 10,000 by 18,500 pixel projection, realtime top-of-the-line could be possible.
Re:As a CG person... (Score:2)
Nobody in the entertainment industry wants to render reality. That's because there is nothing at all real about movies.
No scene has 1000 lights, by the way, unless you're generating lights automatically from a HDR light probe image. We're talking on the scale of a hundred or so here, compared with the ten or so which modern video cards can handle.
BTW, a hundred or so lights is not uncommon on a big traditional movie set, either.
Actually, Monsters, Inc. ran at somewhat over one polygon per pixel. PRMan dices curved primitives down to one polygon per pixel as part of the rendering process. The figure is configurable on a per-object basis, and it's automatically less for highly blurred objects (e.g. objects which are out of focus or moving), but this is made up for by the culling around silhouette edges, which renders more layers.
Of course, more geometry than this actually goes into the pipeline. About 2GB gzipped is a typical size, according to Tom Duff; more if there's a lot of LOD (e.g. the room full of doors).
...as a Real Time CG person. (Score:2)
Oh, and just to have a dig: a good real-time CG artist can generally achieve the same effect that a normal CG artist can with an order of magnitude fewer resources (polys, texels, lights), precisely because they have to. It's good to have to work within constraints. Stops you getting lazy... ;)
Re:As a CG person... (Score:2)
God no. Not in a high-end renderer. That's what proper filtering is for.
This might just... (Score:2)
Y'all are missing the point here (Score:5, Interesting)
I'm going to simplify a great deal here to try and boil this down to the essence. John Carmack please feel free to correct any mistakes I make.
Up to this point, the imagery coming out of the gaming graphics cards has been limited by the hardware design of the cards. The feature set implemented by the cards limits how complicated you can get with the details in the final image.
Note that we're not just talking about simple things like pure polygon counts. Film Industry CGI isn't of higher quality just because they throw more polygons at the problem; they have all kinds of highly complex shaders that can generate special textures without changing the number of polygons in the model - if you saw the "special features" on the Shrek DVD, you can see this at work with Donkey's fur.
Rendering all these extra shaders is CPU expensive, which is why the big animation houses have big render farms.
But two things have happened that stand to change that.
The first (and the most ingenious) is that it has been discovered that you can compile any shader into a series of OpenGL language commands. The tradeoff is that implementing a given shader may require a large number of passes through the pipeline - but even so, any shader currently in use by a Hollywood Mouse House can, in theory, be compiled into OpenGL and executed on any OpenGL card.
And here's the really cool part - rendering in OpenGL is many times faster than doing it in software on a general-purpose CPU. Many, many times faster.
Secondly, the biggest problem with trying to crank Shrek through your GF2MX400 (assuming you've compiled all the shaders into OpenGL) is that each shader may require 200 passes, but the data structures inside the card lack precision - either floats without enough bits, or perhaps not even floats at all, but integers.
That means the data is being savaged by rounding errors and lack of precision during each render pass. It's like photocopying photocopies.
BUT, the latest generation of graphics chips have the necessary precision to do 200-pass rendering without falling victim to rounding errors.
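A toy simulation of that photocopy effect, with a made-up per-pass operation: run 200 passes, storing the intermediate value in 8 bits each time (as a fixed-point framebuffer would) versus keeping it in floating point throughout.

    #include <cmath>
    #include <cstdio>

    int main() {
        double exact = 0.5;    // kept in floating point the whole way
        double eightBit = 0.5; // written to an 8-bit buffer every pass
        for (int pass = 0; pass < 200; ++pass) {
            exact *= 1.003;    // some small per-pass contribution
            eightBit *= 1.003;
            eightBit = std::round(eightBit * 255.0) / 255.0; // 8-bit store
        }
        // Prints about 0.9102 for the float and 0.501961 for the 8-bit
        // path: the 8-bit copy stalls almost immediately, because each
        // pass's change is smaller than one quantization step.
        std::printf("float: %f  8-bit: %f\n", exact, eightBit);
    }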
Combine these two things together, and you can quite literally take a frame from Shrek, with all the crazy shaders, compile it to OpenGL, and render the frame on your GF6-whatever **faster than the native render platform**.
A very good deal faster than the native render platform.
Is this "Shrek in real-time"? No, not by a long shot. But it may well be "Pixar's renderfarm in a box".
Now, as Bruce pointed out, having Pixar's renderfarm in a box doesn't make you Pixar. There is still a requirement for artistic talent. But all that cheap extra horsepower may well mean that the quality of CGI is going to explode for those talented enough to make use of it.
How will this affect games? It makes a bunch of shader techniques that were previously available only to the movie industry possible within the framework of a game. And it divorces, somewhat, the game visuals from the card's hardware, because these shaders are executed as general-purpose OpenGL instructions, not as dedicated hardware on the card. If you, as a game designer, can write a "fur shader" that runs in few enough passes to meet real-time output timings, then you get fur on your model, even if the card doesn't have a built-in "fur shader" or "fur engine".
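(Back of the envelope: at 60 frames per second you have about 16.7 ms per frame, so if a pass cost, say, 0.1 ms - a made-up figure - a shader could afford roughly 166 passes; the trick is fitting every shader in the scene into that budget.)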
THIS is why this is all such a big deal. The amount of quality per mSec of render time is about to explode.
Cool stuff!
DG
Re:Y'all are missing the point here (Score:2)
NVidia's Cg Compiler allows realtime rendering (Score:2, Informative)
See http://developer.nvidia.com/view.asp?PAGE=cg_main [nvidia.com] and www.cgshaders.org [cgshaders.org].
Sack the sigs.
high-level shading languages (Score:3, Informative)
I'VE GOT AN IDEA!!!! (Score:2, Funny)
PPP
Things will change (Score:2)
Irony (Score:2)
So be careful what you root for.
--LP, who imagines the truth is somewhere in between
Real-time ray-tracing (Score:2, Informative)
This 3D package has it:
http://www.equinox3d.com
The first two scenes on the following page run at around 4-30 frames per second, fully raytraced (multiple reflections, refractions, etc.) at low resolutions (~160x120) on an Athlon 2000XP.
http://www.equinox3d.com/News.html
The renderer will be released in a couple of weeks.
The program runs on Linux and it's free (shareware).
The author.
PROBLEM (Score:2, Insightful)
conflict: user posts a message pointing out that the Editors are asleep at the wheel.
resolution: WHAP! WHAP! Score: 0, Offtopic.
The quality of this blog gets lower and lower. Does anyone disagree?
Mod this up (Score:2)
Re:An intereting switch (Score:2)
They already have been, for several years now (otherwise why even bother?)
But we're only talking about certain types of operations, which obviously run much faster on specialized processors than on general-purpose CPUs. But if you use the number of transistors as a basis of comparison, well, the R300 has twice as many transistors as a Pentium 4 chip.
When will we have alpha channel effects in linux? (Score:2)
Graphics cards are fine, but when will Linux actually use the power of these cards? I want to see this on my interface. I want the KDE team to do it, and I'm willing to pay money.
Re:When will we have alpha channel effects in linu (Score:2)
Re:shaders are cool and everything.. (Score:2)
I'd still be impressed by a 'movie' done with such a render.
Re:shaders are cool and everything.. (Score:2)
>>light. Which is oftenly used in movies etc. for
>>über precise ligting effects.
Nonsense. 95% of movies and movie effects are rendered with PRMan, which doesn't do raytracing at all.
Re:The future is all around you... (Score:2)
Re:New Releases Coming Soon to a Computer Near You (Score:2)
Go read Neal Stephenson's The Diamond Age. Then go read Snow Crash. But for this, The Diamond Age.
Actors get nanotechnology thingies called nanosites implanted into their faces and bodies, which act like vertices, turning them into giant wireframes. The appropriate appearance is then mapped to the 'sites, and the actors then appear, to people watching through their mediatrons, to be whoever they're supposed to look like. The concentration of 'sites in an area allows for more fine-tuned mapping; tons around the eyes and lips, a few around the arms and legs.
Re:New Releases Coming Soon to a Computer Near You (Score:2)
Go to SIGGRAPH sometime; Stephenson isn't as far out there as you think.
Re:Movie-class CG? Yeah, right (Score:2)
>>their machine clusters take on the order of
>>minutes per frame to render.
Wrong and wrong. Most movies are NOT raytraced. Most movies and movie effects are rendered with PRMan (probably >90%), and PRMan most certainly does not raytrace.
'Minutes per frame'? The guys running the renderfarms WISH. Toy Story 2 had frames taking up to 80 HOURS each. The only people doing minutes per frame are doing low-quality, low-res stuff for weekly TV.
Re:Movie-class CG? Yeah, right (Score:2)
>>give or take, depending how you alter my
>>assumptions. But 80 HOURS per frame is so absurd!
Life is absurd
I didn't say 80 hours for ALL frames, but they did peak at 80 hours on some frames. Get the Toy Story 2 DVD and listen to the director's commentary track. They state it quite clearly.
Re:Movie-class CG? Yeah, right (Score:2)
And? That's an artistic decision by Pixar. The same software that renders the 'cartoons' like Toy Story and Monsters, Inc. is also doing the goblins in Lord of the Rings, the dragons in Reign of Fire, Jurassic Park I/II/III, Pearl Harbor, Men in Black, etc. etc. etc. And probably every other effects movie you've seen in the last 15 years. I can go on all day. Pixar is deliberately going for a non-realistic look.
Re:Movie-class CG? Yeah, right (Score:2)
>>you're not going to actually take advantage of >>it by making it more realistic?!
Why? What's the point in trying to produce an exact copy of reality? If that's what they wanted, they could just go out and shoot some REAL plates with a real camera, and it would be cheaper and faster than all this CGI nonsense...
I guess you think Monet and Van Gogh were terrible painters because they weren't 'realistic'?