Revolution in Graphics? 164
wilton writes "A technology genius has Silicon Valley drooling - by doing things the natural way, writes Douglas Rushkoff in today's Gaurdian.
This project has been going on for a couple of years now; they have demos for Windows and Be.
The idea is not to use rendering and polygons to create scenes, but instead to build them up from a molecular level, with apparently amazing results. "
What a shitty article (Score:3)
Secondly, and more annoyingly, it is still entirely digital... stupid reporters
Try the real thing (Score:2)
Try the original nervana site [nervana.com]; you can even get the software there, for Windows, Mac and Be. The screenshots don't look bad, but I have my doubts as to what this is all about. Then again, I'm a simple biologist. It looks to me like a hoax, or at least like someone trying to get into the news - you don't find any source code or details. Could some good soul, versed in computer science, explain to me what's so interesting about this project?
Demo of this (Score:1)
http://www.nervana.com/psi/
(Windows, Be and Mac)
It looks quite good, but very basic. Hard to tell really if it's going to be revolutionary or not.
This is new? (Score:1)
I've seen better looking graphics in 5 year old euro-demos running on a 386 DOS box.
My god! (Score:1)
No, I'm not affiliated with this guy/company; I'm just really impressed.
Hype? Or something more? (Score:2)
Typo (Score:1)
Though this may be intentional - the Guardian used to be notorious for its typographical errors (it's often referred to as "The Grauniad")
To get back on topic, their knowledge of tech matters is less than stellar (though not as bad as, say, the Sunday Times) so they may have just fallen for some marketing hype. I can scarcely imagine any Nintendo execs being stunned at that little demo, impressive though it is for 74k. (Though I've seen better on 4K Amiga intros)
What it really is: (Score:1)
It's just a landscape engine, nothing more. 3D games contain a lot more stuff than just hills and lakes. It's still digital (doh). Low CPU requirements? Please, the demo (400x240 I believe) drains 90% of my CPU resources - and it still looks like *hit. My CPU is not a Z80 (which the article claims would be sufficient), but a K6-2 running at 400MHz. I remember seeing cooler landscapes in demos, and with a lot lower CPU requirements.
I'm not saying that this technology isn't worth anything (how could I, when I first heard of it 15 minutes ago). It still looks like it needs a lot of development, though.
Interesting indeed- but too early to tell (Score:2)
Revolutionary like a fox (Score:2)
Apparently the people who wrote this article aren't up with modern times, or something. If you checked out some of the 40kB demos for the Amiga, you'd see more impressive stuff, and those don't require a 100MHz machine (the stated requirement for the Win32 demo on the nervana.com site). As for using mathematical equations to approximate the real world "better" than polygons, this has been done before. An example would be the procedural textures in Lightwave 3D, which use a bunch of algorithms to simulate different real-world textures, using only a fraction of the memory of their bitmapped counterparts.
Of course, I could be completely wrong about these demos, but IMHO this is hardly revolutionary, or even the slightest bit impressive. I would expect that if this were truly something more than hype, there would be more substantial information at their website, or at least a demo which is actually impressive.
What would be nice to see is 3D accelerators with support for some procedural textures on the card, and then having those features actually used in games. You can achieve some very impressive effects with such things, although I suppose most people see it as easier to just add more memory to the card nowadays. I remember reading that one card due to be released a few months ago was supposed to have such features (Permedia 3?), but I'm not sure.
Uses iterated functions on graphics card? (Score:1)
Use of IFS in computer graphics can hardly be regarded as news, but I guess putting it in hardware, making it possible to draw directly into video RAM, would make the technology usable for games as well. That way you don't need many polygons to draw an intricate structure.
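For anyone who hasn't played with IFS, here's a toy sketch in C of the classic Barnsley fern. The affine coefficients are the standard published ones, quoted from memory, so double-check them before relying on the output. The point is how little data it takes: four small affine maps describe the whole plant.

    /* Minimal IFS sketch: the Barnsley fern, plotted as ASCII. */
    #include <stdio.h>
    #include <stdlib.h>

    #define W 60
    #define H 30

    int main(void)
    {
        static char grid[H][W];   /* static => zero-initialized */
        double x = 0.0, y = 0.0, nx, ny, r;
        int px, py, i, j;
        long n;

        for (n = 0; n < 100000; n++) {
            r = (double)rand() / RAND_MAX;
            /* pick one of the four affine maps, weighted by area */
            if (r < 0.01)      { nx = 0.0;              ny = 0.16 * y; }
            else if (r < 0.86) { nx = 0.85*x + 0.04*y;  ny = -0.04*x + 0.85*y + 1.6; }
            else if (r < 0.93) { nx = 0.20*x - 0.26*y;  ny = 0.23*x + 0.22*y + 1.6; }
            else               { nx = -0.15*x + 0.28*y; ny = 0.26*x + 0.24*y + 0.44; }
            x = nx; y = ny;
            px = (int)((x + 2.2) / 5.0 * (W - 1));  /* fern spans roughly x in [-2.2, 2.7] */
            py = (int)(y / 10.0 * (H - 1));         /* ...and y in [0, 10] */
            if (px >= 0 && px < W && py >= 0 && py < H)
                grid[H - 1 - py][px] = '*';
        }
        for (i = 0; i < H; i++) {
            for (j = 0; j < W; j++)
                putchar(grid[i][j] ? '*' : ' ');
            putchar('\n');
        }
        return 0;
    }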
Lars
--
What's so special about that (Score:1)
If you still think it's good you should check out this [eyeone.no], a 50-something-K voxel engine in Java (warning: Explorer 4 only).
"The future is already here,
it's just not evenly distributed yet"
Where are the creatures? (Score:1)
Re:Interesting indeed- but too early to tell (Score:1)
It would have been nice if they had provided some documentation, but it seems that where on the screen you click controls your movement, no dragging involved (the closer to the left/right edge, the faster you turn; the closer to the top/bottom, the faster you move).
> And I imagine that really beautiful, fractal and super-complex at any distance of viewing images that this tech could create wouldn't stay consistent as you move around
Yeah, up close the boundaries between sections look really good, but if you move about 10 seconds away from the island, the water starts forming some really wacky moiré and the island gets all blocky.
If it weren't for that and the way the water "laps" this would be fun to play with. That is, if it actually did something.
Try 4k intros (Score:1)
I suggest you check out some of the 4k intros on
ftp.scene.org. They've got 3d AND sound too. In 4096 bytes.
Constructive Solid Geometry (Score:5)
Constructive Solid Geometry (as used in POV etc.) is also an alternative to polygon-based rendering.
For those that don't know about it, with CSG the scene is built up from primitive blocks (e.g. cones, spheres, cubes, rods, etc). More complex objects are made by using boolean operations (AND, OR and DIFF) on the primitives. For example, a ring can be made by subtracting (DIFF) a rod from the centre of a sphere. Solid textures can be applied to the resulting objects, and raytracing can be used to produce shadows, reflections, transparency, etc.
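To make the boolean-operations part concrete, here's a minimal sketch in C of how a raytracer might evaluate the ring example: each primitive reports the interval of the ray that lies inside it, and DIFF/AND just combine intervals. (This is simplified to a single span per object; a real implementation keeps a list of spans, since a DIFF can split one interval into two.)

    #include <stdio.h>

    typedef struct { double tin, tout; int hit; } Span;

    /* A DIFF B: parts of A not inside B (one span only, for brevity) */
    Span csg_diff(Span a, Span b)
    {
        Span r = a;
        if (!a.hit || !b.hit) return r;
        if (b.tin <= a.tin && b.tout >= a.tout) r.hit = 0;        /* A swallowed */
        else if (b.tin > a.tin && b.tin < a.tout) r.tout = b.tin; /* clip far side */
        else if (b.tout > a.tin && b.tout < a.tout) r.tin = b.tout; /* clip near side */
        return r;
    }

    /* A AND B: overlap of the two spans */
    Span csg_and(Span a, Span b)
    {
        Span r;
        r.tin  = a.tin  > b.tin  ? a.tin  : b.tin;
        r.tout = a.tout < b.tout ? a.tout : b.tout;
        r.hit  = a.hit && b.hit && r.tin < r.tout;
        return r;
    }

    int main(void)
    {
        Span sphere = { 1.0, 5.0, 1 };  /* ray enters sphere at t=1, leaves at t=5 */
        Span rod    = { 2.0, 3.0, 1 };  /* rod drilled through the middle */
        Span ring   = csg_diff(sphere, rod);  /* sphere minus rod */
        Span core   = csg_and(sphere, rod);   /* the drilled-out core */
        printf("ring: enters at t=%.1f (hit=%d)\n", ring.tin, ring.hit);
        printf("core: t=%.1f..%.1f (hit=%d)\n", core.tin, core.tout, core.hit);
        return 0;
    }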
Unfortunately, CSG and raytracing seem to have been overlooked by the graphics card manufacturers. The new effects proposed by 3dfx (motion blur, soft shadows, etc.) can be achieved very simply using stochastic raytracing. Raytracing has a reputation for being very processing-intensive, but I am convinced that it could be done efficiently in hardware, and the quality of the graphics would be far greater than polygon rendering.
In relation to the article - the Psi technique looks interesting, but seems to have very limited scope for application. IMHO, graphics card manufacturers should look at raytracing and CSG instead.
\end{rant}
Re:My god! (Score:2)
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:What it really is: (Score:1)
Looks and Sounds like the old 'fractal' engines.. (Score:1)
the journalist got it wrong (Score:1)
Why this is cool (Score:3)
Of course polygons look prettier... look at the current difference between painted pictures and polygon graphics. With a painting, the artist is simply putting colors on a flat surface in such a way that it simulates reality. Relatively easy to do, since you just need to put colors there in a suggestive way (I can't do it myself for beans, but you get the point).
Now with a graphics program, you create a 3D object out of polygons, then place texture images over them. This is more difficult because you have to create the actual 3D object, like sculpting. You can't just suggest 3D with shadow; you have to Make 3D and let the light create the shadow naturally. The textures aren't really roughness or shininess, just images that Look rough or shiny and make any light sources react the way they probably would. This saves memory by making the shape Look more complex than it really is. A smooth cylinder might look just like a tree trunk because it has a rough-appearing texture. But it's not really a tree. If you get too close you get flattening of the texture, especially in realtime engines for games, because they can't raytrace fast enough on modern computers and so use simple rendering. It can look really, really good, but it can also look REALLY bad.
Now, I may have misunderstood the article and webpage for this technology, but what I got out of it is that it uses something like a fractal generation system, using a formula and a number of iterations, to generate real objects. Not just a mesh of points, some of which have polygons drawn between them, but something closer to a physical reality. Like a fractal, it would look fine up close or far away, and like a fractal, because it's based on iterating a simple algorithm over and over, it would just be a matter of doing math rather than crunching z-buffer coordinates into 2D images like we do in polygon rendering engines.
What's really important here is the opportunity for data transfer. All those cyberpunk novels make use of ubiquitous virtual worlds where people and environments are rendered seamlessly, usually on small computers, in realtime, with wireless modem links. So far this has been no more than a dream, because no personal computer could hope to handle that kind of load, no computer can raytrace a complex scene in realtime, and there'd be no way to send that much data with anything like current modems. This technology doesn't make it all come true in a flash, but it does improve the chances immensely. You can simply transfer location data and a formula rather than mesh coordinates and transforms - much, much less data. You don't need to do the kind of heavy number crunching raytracing requires, because of the way the objects are generated, and you don't have to worry about things like textures because you can just make the actual object bumpy, smooth, jagged, whatever.
Now, the biggest complaint is obviously that it doesn't compare to modern polygon graphics. There's a simple reason for this: it's not a highly funded, industrially motivated, relatively old technology. It's fairly new and being developed by a few guys. You can't expect miracles overnight, but what he's got looks pretty good considering how new it is. You all talk about how wonderful demos look with current tech. Sure they do - that's what they're for. This demo is to demonstrate that his technology Does work. If you had a time machine that could send a penny 5 minutes into the future, would you complain because it didn't look cool?
Anyway, it's obviously no sure thing, but it does have a good deal of promise, and polygons can't last forever. Personally I think realtime-rendered 3D games look like crap. Raytraced scenes can look very nice, but all too often suffer from virtual unreality (that plasticky look everything tends to take on, obvious fractalism in complex objects, etc). This or something like it, which builds up from basic principles into a complex object, will eventually be needed. Just think about human interaction in a virtual environment: you can't very well create polygon meshes for every possibility. What if you broke a chair - how does it generate the broken ends and interior wood grain? If you bite into a cookie, how would you go about creating realistic crumbs in realtime?
Dreamweaver
Re:My god! (Score:1)
Not only that, notice the waves lapping on the shore... they may not be mathematically perfect but they look "right" (albeit very fast and crude) to me.
If I'm wrong (I have been before)
Best Regards,
Greg
or try 64k intros from 1995 (Score:1)
Re:Why this is not so cool (Score:2)
Remember MARS.EXE? (Score:1)
Read the original usenet posting here [ed.ac.uk].
Digital, analogue and simulating the real world. (Score:1)
Probably comes under the "nice idea that I don't understand" heading.
CDROM Game (Score:1)
__________________________________________
Re:Constructive Solid Geometry (Score:3)
The most common alternative to polygon-based graphics is ray-tracing. This is a purer form of sample-based graphics (i.e., I ask what the sample value is at a particular point rather than pushing pixels towards the end of the pipeline). My distinctions are slightly muddy here because I prefer to draw the demarcation lines based on illumination rather than model types.
It is trivial to do CSG in a ray-tracer (and many related techniques). It is extremely difficult in a polygon-based system, but not impossible.
And he has even stolen psi's handle ;) (Score:1)
Constructive Solid Geometry On Current Cards (Score:1)
The problem I see with CSG is features like racetracks and landscapes. They don't really suit a CSG representation. Another thing to remember is all the other doohickeys that have to deal with your geometry representation (e.g. physics and AI).
The move to polygons has seen the death of many kewl little tricks that existed when people could just plot pixels. More stuff may disappear if on-card geometry acceleration takes off. CSG would admittedly be interesting for this sort of thing, but a card whose hardware acceleration was based on CSG would make many operations that are simple today a bit of a bitch.
Am I missing something here? (Score:1)
So why does their demo require a minimum of 100MHz?
It's not hard to see how underwhelming this project is.
The reporter who wrote the article about this appears to have swallowed someone's marketing hype hook, line and sinker. He hasn't even done the usual journalistic work of going to an analyst and getting some sound bites (text bites?).
The best-case scenario for Nervana is that they have been misrepresented by someone, maybe the writer of the article.
They might have a good model for terrain representation but that hardly constitutes a revolution in graphics. You still have to do everything else.
Using this as a base for graphics would be like the old days on the Amiga, when someone would come up with a neat video trick and try to make a game based around it. Inevitably the result was a contrived, usually crap (and mostly never released) game.
This stuff could possibly work as an OpenGL extension, but only if it can be implemented in hardware.
Re:My god! (Score:2)
Seems like the demoscene just invaded Slashdot ;)
I will just give you links you should follow if you want to see impressive, quick, small 3D rendering code:
I have more links on my page, see the DemosSelection link.
terragen (Score:1)
Signal/Noise=0 (Score:5)
I don't know exactly what is being referred to here, but many alternatives to polygon rendering have been around for ages. Simulation of light reflection/refraction at the molecular level has been an ongoing area of research in the graphics community. The problem is that as you get closer to real life, exponentially more processing power is required. We can only hope for better and better approximation methods. Further, the fundamental laws of physics governing light at the quantum level are not fully understood.
I'm highly skeptical that a 22-year-old is doing any work in this area. This work has very little application in the real-time graphics community, so why should Nintendo be interested?
Perhaps they are referring to voxel rendering, which can be done in realtime and is a more likely project for a 22-year-old to undertake (who hasn't?). A large problem with voxels is the amount of memory required, so the shapes must either be generated on the fly procedurally or be compressed using curves/wavelets, or a combination of both. The article mentions "parabolas and ellipses," so this might be what is being talked about? Voxels are in no way a representation of something "on a molecular level."
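For reference, here's roughly what a Comanche-style voxel landscape renderer boils down to, as a toy C sketch with ASCII output. The terrain formula and all the constants are made up for illustration: per screen column, march outward across a heightmap and draw whatever pokes above everything drawn so far.

    #include <stdio.h>
    #include <math.h>

    #define W 78
    #define H 24
    #define CAM 10.0   /* camera height above the ground plane */

    static double height_at(double x, double z)
    {
        /* stand-in terrain: a couple of summed sine waves */
        return 6.0 + 3.0 * sin(x * 0.35) * cos(z * 0.2);
    }

    int main(void)
    {
        char screen[H][W + 1];
        int col, row, ytop, ybottom;
        double z, h, dx;

        for (row = 0; row < H; row++) {
            for (col = 0; col < W; col++) screen[row][col] = ' ';
            screen[row][W] = '\0';
        }
        for (col = 0; col < W; col++) {
            dx = (col - W / 2) / (double)(W / 2);  /* crude per-column ray direction */
            ybottom = H;                           /* lowest row not yet drawn */
            for (z = 2.0; z < 80.0; z += 0.5) {    /* march the ray outward */
                h = height_at(dx * z, z);
                ytop = (int)(H / 2 + (CAM - h) * 12.0 / z);  /* project terrain top */
                if (ytop < 0) ytop = 0;
                for (row = ytop; row < ybottom; row++)
                    screen[row][col] = z < 25.0 ? '#' : '.';
                if (ytop < ybottom) ybottom = ytop;
            }
        }
        for (row = 0; row < H; row++) puts(screen[row]);
        return 0;
    }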
I'm impressed the reporter managed to write such a long article without saying anything.
What about NURBS? (Score:3)
Definition:
NURBS, Non-Uniform Rational B-Splines, are mathematical representations of 3-D geometry that can accurately describe any shape from a simple 2-D line, circle, arc, or curve to the most complex 3-D organic free-form surface or solid. Because of their flexibility and accuracy, NURBS models can be used in any process from illustration and animation to manufacturing.
NURBS are really easy and flexible to use, as they are simply splines which can be adjusted via control points and different weightings. They have largely replaced polygons for character modelling in the past 2 years.
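To illustrate the control-points-plus-weights idea without dragging in knot vectors and de Boor's algorithm, here's a toy C sketch of a rational Bezier curve - the simplest relative of a NURBS curve. The control points are arbitrary; crank the weight on the second one and watch the curve get pulled toward it.

    #include <stdio.h>

    typedef struct { double x, y, w; } CtrlPt;

    /* Evaluate a cubic rational Bezier at parameter t in [0,1]. */
    static void rbezier(const CtrlPt p[4], double t, double *x, double *y)
    {
        double s = 1.0 - t;
        double b[4];                       /* Bernstein basis functions */
        double num_x = 0.0, num_y = 0.0, den = 0.0;
        int i;
        b[0] = s*s*s; b[1] = 3*s*s*t; b[2] = 3*s*t*t; b[3] = t*t*t;
        for (i = 0; i < 4; i++) {
            num_x += b[i] * p[i].w * p[i].x;
            num_y += b[i] * p[i].w * p[i].y;
            den   += b[i] * p[i].w;        /* weights make it "rational" */
        }
        *x = num_x / den;
        *y = num_y / den;
    }

    int main(void)
    {
        /* weight 4 on the second point pulls the curve toward it */
        CtrlPt pts[4] = { {0,0,1}, {1,2,4}, {3,2,1}, {4,0,1} };
        double t, x, y;
        for (t = 0.0; t <= 1.001; t += 0.1) {
            rbezier(pts, t, &x, &y);
            printf("t=%.1f  (%.3f, %.3f)\n", t, x, y);
        }
        return 0;
    }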
I remember speculation about hardware which could render/raytrace NURBS and other spline-based models directly, without conversion to polys. However, I've yet to see it materialize.
Some of the better NURBS modellers available are:
Maya [sgi.com] A Linux port of this is supposedly floating around SGI and some of the larger software houses.
Rhino3D [rhino3d.com] Shame it's Windows-only, though there are reports of it running successfully in Wine.
Enjoy, Oblisk
------------------------------------
If the article is correct... (Score:1)
From the article:
"He just passed through Silicon Valley last week demonstrating his homemade graphics engine, and everyone from the designers at Nintendo to programmers at Apple has been left in shock."
They sound pretty impressed to me. If this is a real breakthrough - then NVIDIA, 3dfx etc. may be in big trouble...
The question is - if this is so great - when is it going to be available/useful: later this year or in ten years?
Insufficient Information (Score:1)
Re:Constructive Solid Geometry (Score:2)
It has already been made for professionals:
http://www.art-render.com/products/rdrive.html
Unimpressed. (Score:2)
I feel almost the same when my three-year-old daughter brings me a picture she drew. "Oh, sweetheart, that's beautiful! I love you! Now go stick that to the refrigerator with the other ones."
Did anyone else find it suspicious that the most recent "interview" was from September of last year? About a remarkably stupid sounding game? And the "demo" was billed as a way to view the island of nervana, even though it only vaguely resembled an island? The guy is 22 and designed the game, the graphics engine, and wrote the music for the game?
The best I can hope for here is that the reporter was a friend of his trying to help him out, or maybe just really, really, hard up for a story.
Fractals.... (Score:1)
Doesn't sound particularly revolutionary....
Re:Constructive Solid Geometry (Score:1)
However, CSG is IMO the easiest model for raytracing, as polygons are for rendering.
Of course it's a hoax (Score:1)
Re:Constructive Solid Geometry (Score:2)
A snip at $20,000!
Incidentally, 3DNow and SSE are entirely suitable for raytracing. Since they were released I've had the dream of writing a realtime raytracer.
\begin{shameless plug}
If anyone is interested, my first (suboptimal) attempt at producing a 3dnow raytracer is available at:
ftp://ftp.dcs.ed.ac.uk/pub/cdw/ray/3dray.tar.gz
Unfortunately I can't do any more until I get an AMD or P3 box. Feel free to copy/modify this code. Just let me know of anything you do.
\end{shameless plug}
Silicon Valley Drooling? No, Laughing? Maybe... (Score:1)
"...the engine. This has been produced in part by the mathematics required to describe the landscape in real time, meaning that the colouring remains relatively fundamental." So it HAS to look this crappy? Pretty sad. I have a name for it though; he could call it Retarded-PreSchooler-Vision... Pretty catchy, isn't it?
Re: (Score:1)
works on ie 5 also (Score:1)
-----
Re:What a shitty article (Score:1)
"Another idiot certainly claiming to be an 'IT professional' writes a pointless article full of cliches, maybe trying to reproduce the style of Wired magazine, father of all the hype in this world" writes Stephan Tual.
Oh boy I'm not in a good mood today
3dnow Source Code (raytracer) (Score:1)
Regarding the 3dnow source code: it was written for a K6-2 300MHz machine that I no longer have. I have only compiled it with gcc (the 3dnow part requires a recent version of binutils) under Linux. It doesn't currently do CSG either, only a fixed scene containing spheres. Unfortunately the 3dnow part doesn't actually give any speed improvement at the moment! I suspect that the FEMMS overhead is too big - suggestions welcomed. I'd be interested to see how this runs on an Athlon.
I agree (Score:3)
"But this could be promising!..."
True. I'll believe it when I see promising artwork. This reporter obviously got carried away; I'm in computer games and I'm just not impressed.
"But Bryce isn't realtime!"
True enough as well. Bryce is a raytracer; it takes a long time to render. Oooooooooooh.....I wish I could talk to you about this
I do disagree with you on one point: no reason a 22-year-old can't do this. Everyone in the basement is 19 or younger.
Re:What about NURBS? (Score:1)
Remember the adage "Triangles are the pixels of 3D". We probably won't see hardware with bezier surfaces and NURBS as primitives for another 5 to 10 years.
The problem with NURBS is that they are slow, in contrast to tossing a few more textured tris at the hardware, since that's what the hardware is optimized for.
remove this article (Score:1)
The idiots are getting way too much attention again.
Um. Critical thinking suggested. (Score:4)
He decided to use the Nintendo GameBoy as a standard for how much computing power a machine should have...and developed a series of simple equations that can be used to generate waves, textures, and shapes.
Does anyone here know why polys, especially triangles, are the basis of most modern graphics systems? No? I'll tell you: it's because they're *EASY TO DRAW*. The equations are as simple as you can get; almost everything becomes linear interpolation and therefore only needs a single addition per pixel line. Waves are likely to need some sort of transcendental function (such as sine or cosine) to function properly -- something that requires either a massive hardcoded table, or a LOT of CPU time. Not to mention the need to toss either fixed-point or floating-point numbers around. GameBoys are 8-bit, aren't they? That doesn't give you much precision.
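For the record, here's what the "massive hardcoded table" approach looks like in C. On an 8-bit machine you'd bake the table into ROM instead of computing it at startup, and the 8.8 fixed-point format here is just one plausible choice:

    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979
    #endif

    static short sintab[256];          /* sin(angle) in 8.8 fixed point */

    static void init_sintab(void)
    {
        int a;
        /* angles run 0..255 for a full circle -- no degrees, no radians */
        for (a = 0; a < 256; a++)
            sintab[a] = (short)(sin(a * 2.0 * M_PI / 256.0) * 256.0);
    }

    /* fixed-point multiply: (8.8 * 8.8) >> 8 = 8.8 */
    static short fmul(short a, short b) { return (short)(((long)a * b) >> 8); }

    int main(void)
    {
        short amp = 3 << 8;            /* wave amplitude 3.0 in 8.8 */
        int a;
        init_sintab();
        for (a = 0; a < 256; a += 32)
            printf("angle %3d -> wave height %.3f\n",
                   a, fmul(amp, sintab[a]) / 256.0);
        return 0;
    }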
Remember how you used to draw parabolas and ellipses in maths class?
Um. There are three possibilities for drawing these:
-> Use the equation directly. This involves a square root. Square roots are slow.
-> For the ellipse, you can generate it using sines and cosines with a parameterized equation. The resolution of the parameter will determine how choppy the outside looks; even a resolution of 1 degree took a while on my TI-85 back in high school.
-> Iterate over the ENTIRE DISPLAY, applying the generic conic equation to each point; use this to find boundaries. Incredibly tricky, requires a square or two for each pixel, and is generally going to be a pain. (for the ellipses this is a little simpler, since you can bound it by the major and minor axes)
Each element of such a display will require much more computation than a polygon; you could save a few polys this way, but I don't see it being the sort of revolutionary jump they describe.
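To be fair, there is a Bresenham-style incremental way to rasterize an ellipse with no square roots or trig in the inner loop: the midpoint algorithm. Here's a sketch adapted from the textbook (Hearn & Baker) version; note it's still noticeably more bookkeeping than a line or a triangle edge:

    #include <stdio.h>

    #define RX 30
    #define RY 10
    #define W (2*RX+1)
    #define H (2*RY+1)

    static char grid[H][W];

    static void plot4(int x, int y)   /* exploit 4-way symmetry */
    {
        grid[RY - y][RX + x] = '*';
        grid[RY - y][RX - x] = '*';
        grid[RY + y][RX + x] = '*';
        grid[RY + y][RX - x] = '*';
    }

    int main(void)
    {
        long rx2 = (long)RX * RX, ry2 = (long)RY * RY;
        long x = 0, y = RY;
        long px = 0, py = 2 * rx2 * y;
        double p = ry2 - rx2 * RY + 0.25 * rx2;  /* region 1 decision term */
        int i, j;

        plot4(x, y);
        while (px < py) {                        /* region 1: |slope| < 1 */
            x++; px += 2 * ry2;
            if (p < 0) p += ry2 + px;
            else { y--; py -= 2 * rx2; p += ry2 + px - py; }
            plot4(x, y);
        }
        p = ry2 * (x + 0.5) * (x + 0.5) + rx2 * (y - 1.0) * (y - 1.0) - rx2 * ry2;
        while (y > 0) {                          /* region 2: |slope| > 1 */
            y--; py -= 2 * rx2;
            if (p > 0) p += rx2 - py;
            else { x++; px += 2 * ry2; p += rx2 - py + px; }
            plot4(x, y);
        }
        for (i = 0; i < H; i++) {
            for (j = 0; j < W; j++) putchar(grid[i][j] ? '*' : ' ');
            putchar('\n');
        }
        return 0;
    }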
The article then goes on to state some fluff about plants and carbon atoms, claiming that quantum equations are 'simple' (I wish!) and suggesting that "Barbalet"'s stuff is built "from the ground up, just like nature does it." This isn't true, even if what they said is true, and has nothing to do with molecules and plants; he would be building his images up from shapes -- different shapes than are standard now, perhaps, but still just shapes. No image built "from the ground up, like nature does it," requiring the transmission of every molecule, is going to even be manageable by modern computers, let alone result in stuff that can be transmitted over modems and wires more easily than graphics images.
The more charitable explanation is that this is a highly confused journalist who has run into ellipsoid 3D graphics or something similar and thinks it's cool.
Daniel
Re:Constructive Solid Geometry On Current Cards (Score:1)
If you have a 'plane' primitive, you can use that as a base and just build on top of it using AND operations.
In my original post, I should have distinguished between CSG and raytracing. It is perfectly possible to raytrace a polygon model if CSG is too cumbersome.
Re:Am I missing something here? (Score:1)
> So why does their demo require a minimum of 100MHz?
Hmm, I have a 200MHz, 64MB RAM machine here at work. I'm running Outlook, AIM, IE 5, WinZip, this guy's graphicky thing, the resource meter, McAfee Virus Shield, GetRight, and I'm streaming a radio station from England, and I still have plenty of processor left over... I dunno why you guys are complaining; this thing takes up almost nothing.
Kintanon
Re:In response to stuff like this... (Score:1)
Yes, I have to agree with you there. But until I realised that, I was encouraged by the following bit in the Guardian piece...
Silly me, I thought this sounded a little like collaborative development. Sort of like real free software. Geez, I can be naive sometimes!
Sometimes ray tracing is the fastest algorithm (Score:3)
Not just in hardware. Ray tracing was used in John Carmack's Wolfenstein - a classic example of how ray tracing can outperform traditional polygon rendering. In Wolfenstein the simplifying assumption is that just one ray needs to be traced per column of pixels in the viewport. It obviously works, for the special-case scenes that Wolfenstein used. The ideas were generalized somewhat in Doom, to allow for ceilings and floors. Raytracing was abandoned in Quake, in favor of traditional polygon rendering coupled with a kick-ass culling algorithm. But don't think that raytracing is out of the picture yet - hehe, pardon the pun.
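For anyone who hasn't seen it, the "one ray per column" trick is tiny. Here's a toy C sketch with an ASCII framebuffer; note it cheats with a fixed-step march where Wolfenstein uses a proper grid DDA, and the map, FOV, and constants are all made up:

    #include <stdio.h>
    #include <math.h>

    #define W 78
    #define H 24

    static const char *map[] = {   /* '1' = wall, '0' = empty */
        "1111111111",
        "1000000001",
        "1001100001",
        "1000000101",
        "1000000001",
        "1111111111",
    };

    int main(void)
    {
        double px = 2.5, py = 2.5, dir = 0.6;  /* player position and heading */
        char screen[H][W + 1];
        int col, row, h;
        double a, rx, ry, d;

        for (row = 0; row < H; row++) {
            for (col = 0; col < W; col++) screen[row][col] = ' ';
            screen[row][W] = '\0';
        }
        for (col = 0; col < W; col++) {
            a = dir + (col - W / 2) * 0.02;    /* spread rays across the FOV */
            rx = px; ry = py; d = 0.0;
            while (d < 20.0 && map[(int)ry][(int)rx] == '0') {
                rx += cos(a) * 0.02;           /* crude fixed-step march; the */
                ry += sin(a) * 0.02;           /* real thing uses a grid DDA  */
                d += 0.02;
            }
            d *= cos(a - dir);                 /* undo fisheye distortion */
            h = (int)(H / (d + 0.1));          /* wall slice height ~ 1/distance */
            if (h > H) h = H;
            for (row = H / 2 - h / 2; row < H / 2 + h / 2; row++)
                screen[row][col] = d < 4.0 ? '#' : '|';
        }
        for (row = 0; row < H; row++) puts(screen[row]);
        return 0;
    }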
Re:I agree (Score:2)
I'm still not there myself, but Glassner's 2 volume series "Principles of Digital Image Synthesis" has a good intro to the subject of light and energy transport.
I used to run the procedural texture mailing list a few years ago and we had Ken and Ebert on there. Ebert is still working on this stuff, he usually has something at SIGGRAPH, but Ken has moved on to other areas of interest. I just had a sweet offer to work on a real-time procedural texturing system for an upcoming game console, so I'm thinking about getting back into that.
Re:Sometimes ray tracing is the fastest algorithm (Score:1)
What ray-tracing does is take a light source and see where the light reflects (giving color).
Ray-casting on the other hand takes your field of view, and sees what it hits, i.e. what you would see. It limits itself by not allowing for reflections, etc.
Re:Signal/Noise=0 (Score:1)
Re:remove this article (Score:1)
Re:Um. Critical thinking suggested. (Score:1)
Now, this guy is doing voxels, which, while it can be faster (since it's so limited), can't be done on a GameBoy... hell, the SNES couldn't do a voxel routine well without the help of the trusty SuperFX chip.
Re:Signal/Noise=0 (Score:1)
Put them all together and what do you have? (Score:1)
Just a thought.
Anyone know a reason why it can't be done...?
No Hoax, sorry! (Score:2)
This Kind of thing makes me mad... (Score:1)
I was SO excited! Just think, super-light motors! Think about the structural applications!
Damn thing was a f@ckin' CREDIT CARD....
Re:Sometimes ray tracing is the fastest algorithm (Score:3)
Now there is such a thing as tracing rays from the lights, but nowadays that is typically referred to as "backwards raytracing" which is confusing because physically speaking that's forwards. So some confusion is understandable.
But techniques that use this backwards raytracing typically just do a pass with backwards tracing to deposit light in the scene, and then actually do the rendering with a more conventional raytracing pass (from the eye). Arvo was the first to use this technique I believe, in his "shadow maps" [Arvo, James: "Backward Ray Tracing" ACM Siggraph Course Notes 1986]. Jensen's photon maps are a more refined version of similar technology [paper here [mit.edu]].
Didn't Micro$oft.... (Score:1)
~afniv
"Man könnte froh sein, wenn die Luft so rein wäre wie das Bier"
Re:Sometimes ray tracing is the fastest algorithm (Score:2)
Not sure if this is the same as what you're talking about, but what I used to do back in my "graphics days" was: I'd trace rays outward from the eye. As they hit reflective or refractive surfaces, I'd recursively trace from there. But if they hit an opaque surface, I would trace rays from that point to each light source. If the ray was unobstructed, then the point was illuminated; if it was blocked, then it was in shadow.
Heh, I also faked the diffractive fuzzy edges of shadows by implementing each light source as a cluster of small light sources that were very close together, but at distinctly different points. That worked great. :-) Geez, really getting off-topic here, sorry.
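Since I'm already off-topic: here's a stripped-down C sketch of that light-cluster trick, with a single sphere as the occluder (all the positions and sizes are invented). Shadow-test each jittered sub-light and shade by the fraction that gets through; points near the shadow's edge come out partially lit, which is exactly the penumbra:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec;

    /* does the segment from p to light l hit the sphere (c, r)? */
    static int shadowed(Vec p, Vec l, Vec c, double r)
    {
        Vec d, m;
        double dd, b, cc, disc, t;
        d.x = l.x - p.x; d.y = l.y - p.y; d.z = l.z - p.z;
        m.x = p.x - c.x; m.y = p.y - c.y; m.z = p.z - c.z;
        dd = d.x*d.x + d.y*d.y + d.z*d.z;
        b  = (m.x*d.x + m.y*d.y + m.z*d.z) / dd;
        cc = (m.x*m.x + m.y*m.y + m.z*m.z - r*r) / dd;
        disc = b*b - cc;
        if (disc < 0.0) return 0;         /* ray misses the sphere */
        t = -b - sqrt(disc);
        return t > 0.0 && t < 1.0;        /* hit between point and light */
    }

    int main(void)
    {
        Vec light = { 0.0, 10.0, 0.0 };   /* centre of the light cluster */
        Vec ball  = { 0.0,  5.0, 0.0 };   /* occluding sphere, radius 1 */
        double x;
        int i, lit, n = 64;

        srand(1);
        for (x = -3.0; x <= 3.01; x += 0.5) {
            Vec p = { 0.0, 0.0, 0.0 };    /* point on the floor */
            p.x = x;
            lit = 0;
            for (i = 0; i < n; i++) {     /* jitter sub-lights in a small square */
                Vec l = light;
                l.x += ((double)rand() / RAND_MAX - 0.5) * 2.0;
                l.z += ((double)rand() / RAND_MAX - 0.5) * 2.0;
                if (!shadowed(p, l, ball, 1.0)) lit++;
            }
            printf("x=%5.1f  fraction lit: %.2f\n", x, (double)lit / n);
        }
        return 0;
    }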
---
Re:Try the real thing (Score:2)
An analogue synth has oscillators which produce simple tones such as sine, triangle and square waves. By combining them using simple circuits such as ring modulators, envelope generators and resonant filters, it produces complex sounds. You only need a tiny amount of information to describe all the settings on an analogue synth (a "patch").
Early digital synths used wavetables (or samples) to produce complex sounds. This makes it easier to reproduce the sound of a real instrument (you just sample it and then play it back), but the patches are much larger.
(To complicate matters, many digital synths now emulate analogue synths using software models.)
Polygon-based graphics are similar to wavetable synthesis - you use a table of points to reconstruct a surface by drawing straight lines (or curves, if you have the processing power to spare) between each point and the next. 3D worlds created in this way require a lot of memory to store, or a lot of bandwidth to transmit.
Speculative part:
Psi seems to use combinations of simple waveforms to generate 3D worlds. I imagine this would generate random rolling terrain very nicely, but it would be hard to design a landscape "to order". I suppose you would design it using conventional 3D software, and then use Fourier analysis to extract the fundamental waveforms from the complex surfaces. Then you just send (or store) those waveforms, and the rendering engine has the much easier job of just recombining them.
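Here's what I mean, as a toy C sketch (the wave table is invented, of course - the point is that those four lines of coefficients *are* the entire landscape, which is all you'd ever need to store or transmit):

    #include <stdio.h>
    #include <math.h>

    struct wave { double fx, fz, amp, phase; };

    static const struct wave terrain[] = {   /* the whole world, 4 lines */
        { 0.08, 0.05, 4.0, 0.0 },
        { 0.20, 0.13, 1.5, 1.0 },
        { 0.55, 0.40, 0.5, 2.3 },
        { 1.30, 1.10, 0.2, 0.7 },
    };

    static double height(double x, double z)
    {
        double h = 0.0;
        int i;
        for (i = 0; i < 4; i++)   /* recombine the fundamental waveforms */
            h += terrain[i].amp *
                 sin(terrain[i].fx * x + terrain[i].fz * z + terrain[i].phase);
        return h;
    }

    int main(void)
    {
        const char *shade = " .:-=+*#";   /* 8 height bands */
        int x, z, band;
        for (z = 0; z < 20; z++) {
            for (x = 0; x < 70; x++) {
                /* heights span roughly [-6.2, 6.2]; map into 0..7 */
                band = (int)((height(x, z * 2) + 6.2) / 12.4 * 8);
                if (band < 0) band = 0;
                if (band > 7) band = 7;
                putchar(shade[band]);
            }
            putchar('\n');
        }
        return 0;
    }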
I wonder if this technology could be used to create a new generation of samplers which would sample a sound, take it apart using Fourier analysis or whatever, and work out how to reconstruct it using simple waveforms? That would be very useful for increasing the capacity of samplers (and audio CDs, portable digital music players etc.).
Mars Terrain Demo Explained (Score:1)
I'd like to see a version of this under Linux.
Re:My god! (Score:1)
For linux demos, another site to check out is:
http://linux.scene.org [scene.org]
There's some 4K demos there, some quite nice.
Whatever... (Score:1)
this is new? (Score:1)
Laugh (Score:1)
Re:Um. Critical thinking suggested. (Score:1)
Roger
ever heard of the demo scene? (Score:3)
Re:Try the real thing (Score:1)
Yawn... (Score:1)
And what is with this analog crap? Extremely uninformed mediot, methinks...
Re:Whats so special about that (Score:1)
It's evil Bill's personal Java that's supposed to work only in IE...
I won't obey though, and won't even try to run it, ignoring Bill's feeble attempts to make me run IE...
Re:Signal/Noise=0 (Score:1)
Re:Signal/Noise=0 (Score:1)
Old: IFS and Procedural shaders (Score:1)
Also, Pixar's Renderman uses procedural shaders, where the appearance is calculated dynamically, instead of using pre-generated textures (though that is also an option).
Instead of scanning in a wooden surface and tiling that image, Renderman can use a shader which generates a wooden surface; for wood that looks different, adjust the shader's code or pass in different parameters.
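The spirit of it, as a toy C sketch rather than actual RenderMan shader language: rings are just distance from the tree's axis, wobbled by a little noise and wrapped into a repeating phase. The "noise" below is a cheap sine hash standing in for proper Perlin noise, and all the parameters are knobs you tweak instead of textures you scan:

    #include <stdio.h>
    #include <math.h>

    static double cheap_noise(double x, double y)
    {
        /* sine-hash stand-in for real gradient noise */
        double s = sin(x * 12.9898 + y * 78.233) * 43758.5453;
        return s - floor(s);              /* pseudo-random in [0,1) */
    }

    static double wood(double x, double y, double ring_freq, double grain)
    {
        double r = sqrt(x * x + y * y);   /* distance from the tree's axis */
        r += grain * cheap_noise(x, y);   /* wobble the rings */
        return r * ring_freq - floor(r * ring_freq);  /* ring phase 0..1 */
    }

    int main(void)
    {
        const char *shade = " .:=#";
        double v;
        int i, j;
        for (i = 0; i < 22; i++) {
            for (j = 0; j < 70; j++) {
                v = wood(j / 8.0 - 4.3, i / 4.0 - 2.7, 2.0, 0.15);
                putchar(shade[(int)(v * 4.99)]);
            }
            putchar('\n');
        }
        return 0;
    }

For different wood, adjust ring_freq and grain (or the hash) and pass in different parameters - no scanned image anywhere.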
Doesn't current 3D use many simple equations? (Score:1)
URL for hidden Nervana docs (Score:1)
I haven't looked at them yet but I also tracked down an older demo at Info-Mac that includes algorithm info for an earlier wireframe version. (Skip the useless 2Mb mov promo.)
The Mac demo of the new version downloads as 14k but unstuffs to 4Mb (!), and it includes an interactive terrain generator, which impresses me.
His writings explain that he started this as (what I call) a 'philosophy lab' where modelling 'mind' based on Bertrand Russell's ideas (!?????) was his main goal.
The Info-Mac demo is wireframe but includes a bunch of little monkey-dots whose eyes you can look thru, zipping around too fast and too tiny-ly for me to have fun with (yet).
Re:ever heard of the demo scene? (Score:1)
(Score:1)
And it was all written by 18-19 year olds.
Re:No Hoax, sorry! (Score:1)
(The rest of this is just spinning.)
Or maybe the object could describe itself when you told it what angle/distance you were looking from, and what the ambient illumination was. (This would involve some kind of complex illumination object, but postulate that.) This might save a lot of work with ray-tracing and though it frequently wouldn't be as good, it would also quite frequently be "good enough". Sometimes ray tracing requires more than can be done, so you need to chop it off short anyway. The trick with this one would be how to pass the "ambient illumination" object around.
Re:Voxel rendering (Score:1)
Fractal compression (Score:1)
Re:or try 64k intros from 1995 (Score:1)
Also, I believe www.hornet.org can hook you up with some demos.
Re:Signal/Noise=0 (Score:1)
Does this matter to computer graphics? Not really... because computation power, not lack of an accurate model, is the limiting factor for all but the simplest simulations.
fractals (Score:1)
10-year-old news, (Score:2)
If there is any innovation in this article at all, it's moving this technology down to the level of a video game from the non-real-time applications where it's been used for decades.
Bruce
Fractal graphics aren't a new thing. (Score:1)
As for saying that the polygon-based stuff is always ugly... what about Final Fantasy 8? That runs on a mere PlayStation, and it's one long celebration of eye candy!
I don't dispute that fractals can make some very pretty graphics (especially of various plants), but they just aren't as simple to process as polygons. I really don't see how you can come out ahead in terms of CPU use. Drawing a line requires only a compare and either one or two increments per pixel (using Bresenham's algorithm). There just isn't a way to generate fractals that cheaply. Maybe that's why, after all of the fractal hype in the early 90s, game designers went to texture-mapped polygons anyway.
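For comparison, here's the loop fractals have to beat - Bresenham's line algorithm (the all-quadrant integer version), with one error term, a compare, and one or two adds per pixel:

    #include <stdio.h>
    #include <stdlib.h>

    #define W 40
    #define H 16
    static char grid[H][W];

    static void line(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy, e2;                   /* the single error term */
        for (;;) {
            grid[y0][x0] = '*';
            if (x0 == x1 && y0 == y1) break;
            e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
            if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
        }
    }

    int main(void)
    {
        int i, j;
        line(0, 0, 39, 15);
        line(0, 15, 39, 3);
        for (i = 0; i < H; i++) {
            for (j = 0; j < W; j++) putchar(grid[i][j] ? '*' : ' ');
            putchar('\n');
        }
        return 0;
    }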
Re:Signal/Noise=0 (Score:3)
You may mean the classical theory of gravity. No one has a clue yet as to how to get a decent (renormalizable) quantum gravity theory, short of using strings. And we're still a looong way from getting numbers out of strings, even though it looks very promising.
OTOH, QED has been tested to insane experimental accuracy, is known to be renormalizable (to arbitrary order) and since perturbation theory converges fast (alpha=1/137), we actually can compute pretty much anything that can be experimentally tested.
So I think it's safe to say that as far as the fundamental physics of light-matter interaction is concerned, we have a very good grasp of what's going on. Which doesn't mean we have a theory of everything: strong interactions are still poorly understood (perturbation theory doesn't work well in general, and non-perturbative calculations are crazily hard), quantum gravity is still on the run, and strings as yet contain many unresolved problems.
> Does this matter to computer graphics? Not really... because computation power, not lack of an accurate model, is the limiting factor for all but the simplest simulations.
Actually, I can't think of any day-to-day computer graphics application where quantum effects matter anyway: classical electromagnetism gives you everything you need to compute light transmission/reflection/diffraction effects for any rendering you want. So good or bad understanding of light/matter interactions at the quantum level is just irrelevant as far as CG goes.
Re:Signal/Noise=0 (Score:2)
Well, as a proof of concept, it is definitely worth showing that this can be done on even a relatively slow processor.
I would argue that instead of demonstrating how efficient this is by how fast it can display simple scenes, it would be more convincing to show a scene that's more elaborate than current algorithms support on modern hardware.
However, bear in mind that it's not exactly fair to compare Algorithm X on normal hardware to Polygons, since most people these days have EXTREMELY accelerated hardware for drawing polygons.
However, for something as simple as the demo on this page, there's no reason to think he didn't cheat a whole lot, and just use a simplified voxel approach (we can't even change the camera's pitch or roll!)
Re:What it really is: (Score:2)
Re:If the article is correct... (Score:2)
"He just passed through Silicon Valley last week demonstrating his homemade graphics engine, and everyone from the designers at Nintendo to programmers at Apple has been left in shock."
Yeah, and after viewing the demo I could easily see it continue as:
"...left in shock. They were heard to whisper, 'Is this guy serious?'"
Re:Constructive Solid Geometry (Score:2)
Re:Sometimes ray tracing is the fastest algorithm (Score:2)
I believe you are referring to "ray-casting", not raytracing. They are very similar in that they both send out "rays" from the viewer's FOV to the object being viewed. But ray-casting does so in groups, resulting in blocky images. Raytracing casts rays precisely, on a per-ray basis, which requires a lot more calculation and results in a much more accurate representation of the object being viewed.
Not a very good definition, but I hope it clarifies the differences, and why raytracing is much more math-intensive than raycasting, and why D00M and wolfy did NOT utilize this technique.
One of the problems with raytracing is that it doesn't scale very well. Increasing the resolution of the scene being rendered requires more than just increasing the polygon count. Polygon manipulation can be compensated for mathematically; unfortunately, for raytracing, increased size just results in increased complexity. There is no easy way around it. Of course, I am no expert on the matter. Nonetheless, that is why it is important to develop methods of computing rays at the hardware level.