New Graphics Firm Promises Real-Time Ray Tracing

arcticstoat writes "A new graphics company called Caustic Graphics reckons it's uncovered the secret of real-time ray tracing with a chip that 'enables your CPU/GPU to shade with rasterization-like efficiency.' The new chip basically off-loads ray tracing calculations and then sends the data to your GPU and CPU, enabling your PC to shade a ray-traced scene much more quickly. Caustic's management team isn't afraid to rubbish the efforts of other graphics companies when it comes to ray tracing. 'Some technology vendors claim to have solved the accelerated ray tracing problem by using traditional algorithms along with GPU hardware,' says Caustic. However, the company adds that 'if you've ever seen them demo their solutions you'll notice that while results may be fast — the image quality is underwhelming, far below the quality that ray tracing is known for.' According to Caustic, this is because the advanced shading and lighting effects usually seen in ray-traced scenes, such as caustics and refraction, can't be accelerated on a standard GPU, which can't process incoherent rays in hardware. Conversely, Caustic claims that the CausticOne 'thrives in incoherent ray tracing situations: encouraging the use of multiple secondary rays per pixel.' The company is also introducing its own API, called CausticGL, which is based on OpenGL/GLSL and will feature Caustic's unique ray tracing extensions."
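
What "incoherent rays" means in practice, as a rough sketch (generic ray tracing geometry in Python; this assumes nothing about Caustic's actual hardware or the CausticGL API): primary rays fired from neighboring pixels are nearly parallel and tend to visit the same parts of the scene, which is what GPUs handle well, while secondary rays after a diffuse bounce scatter over a hemisphere and lose that locality.

    # Illustrative sketch only: coherent primary rays vs. incoherent
    # secondary rays. Nothing here is Caustic's algorithm or API.
    import math, random

    def primary_ray(x, y, width=640, height=480, fov=60.0):
        # Neighboring pixels give nearly identical directions -> coherent.
        half = math.tan(math.radians(fov) / 2)
        px = (2 * (x + 0.5) / width - 1) * half * (width / height)
        py = (1 - 2 * (y + 0.5) / height) * half
        n = math.sqrt(px * px + py * py + 1)
        return (px / n, py / n, -1 / n)

    def diffuse_bounce(normal):
        # Secondary ray after a diffuse hit: uniform over the hemisphere
        # around the surface normal -> adjacent pixels' secondary rays
        # can point in wildly different directions (incoherent).
        while True:
            d = tuple(random.uniform(-1, 1) for _ in range(3))
            s = sum(c * c for c in d)
            if 0 < s <= 1:
                break
        n = math.sqrt(s)
        d = tuple(c / n for c in d)
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = tuple(-c for c in d)
        return d

    # Two adjacent pixels: primary directions differ by a fraction of a
    # degree, but their diffuse bounces are essentially unrelated.
    r1, r2 = primary_ray(100, 100), primary_ray(101, 100)
    b1, b2 = diffuse_bounce((0, 1, 0)), diffuse_bounce((0, 1, 0))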
  • One step closer (Score:4, Interesting)

    by flewp ( 458359 ) on Friday March 13, 2009 @07:20PM (#27188007)
    Being a 3D artist (mostly just a modeler and texture artist, but sometimes a generalist), I'm happy to see work like this being done. It seems like only yesterday I was waiting hours or all night for simple ray traced scenes.

    While it may be underwhelming to some, I'm more than happy to see people working on this kind of tech. Sure, we've moved on from just "simple" ray tracing to using things like GI, etc, but in time we'll have that in real time as well. Some apps are already using tricks to enable real-time GI and similar effects (the key word being tricks, since they're not totally physically accurate). Obviously real time will always lag behind, but I look forward to the future.
    • You mention that things have 'moved on' from ray tracing to GI - but keep in mind that most GI methods (and certainly QMC sampling) -are- largely based on ray tracing. When people say 'ray tracing', we're not just talking about chrome spheres or perfect glass..glasses. It's the fundamental concept of 'tracing a ray' in the scene - and that fundamental concept applies not just to direct surface (illumination) calculations and reflections/refractions, but also to fuzzy reflections/refractions, area-sampled

      • Re: (Score:3, Insightful)

        by flewp ( 458359 )
        I'm fully aware of that. Notice I didn't say we've moved on from ray tracing to GI. I said we've moved on from "SIMPLE" ray tracing - the operative word being "simple". Perhaps I should have been more clear and said "we've moved on from just basic raytracing to more advanced and accurate methods of ray tracing", but I figured my point was clear enough.
  • 2009 (Score:5, Funny)

    by stonedcat ( 80201 ) <hikaricore [at] gmail.com> on Friday March 13, 2009 @07:21PM (#27188021) Homepage

    2009 is the year of the ray traced desktop.

    • Re: (Score:3, Insightful)

      by Tablizer ( 95088 )

      2009 is the year of the ray traced desktop.

      Can't wait for the ray-traced BSD desktop version of Duke Nukem Invents The Flying Car.

    • Re: (Score:2, Funny)

      by Nemyst ( 1383049 )
      Ya, my bluescreen never looked so pretty!
    • finally!
    • Re: (Score:3, Interesting)

      by jellomizer ( 103300 )

      When will real-time ray tracing happen? Real time will need to be above about 30 frames/second.

      Let's say back in the year 2000 it took half an hour to render a high-resolution, complex frame.
      So we'll apply Moore's law with a conservative approach: let's say speed doubles every 24 months (makes the math easier too).

      2002 it would take 15 minutes
      2004 it would take 7.5 minutes
      2006 it would take 3.75 minutes
      2008 it would take 1.875 minutes
      2010 it would take 56.25 seconds
      2012 it would take 28.125 seconds
      201
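
      In a few lines of Python (just a sketch of the same back-of-the-envelope reasoning, taking "real time" as 1/30 of a second per frame and keeping the parent's numbers):

          # Sketch of the parent's arithmetic: a 30-minute render in 2000,
          # speed doubling every 24 months, "real time" taken as 1/30 s.
          seconds = 30 * 60.0   # render time in 2000
          year = 2000
          while seconds > 1.0 / 30.0:
              year += 2
              seconds /= 2
          print(year, seconds)  # -> 2032, ~0.027 s per frame (about 36 fps)

      So on these assumptions, real-time ray tracing of that scene arrives around 2032.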

  • Shitty summary! (Score:5, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday March 13, 2009 @07:21PM (#27188029) Homepage Journal

    Stop copying and pasting the article to generate almost the entire summary, especially when you don't do it right. The line "However, the company adds that 'if you've ever seen them demo their solutions you'll notice that while results may be fast -- the image quality is underwhelming, far below the quality that ray tracing is known for.'" makes it look like you're talking about the image quality of Caustic's new solution, which is obviously wrong. Here's the real paragraph:

    "Some technology vendors claim to have solved the accelerated ray tracing problem by using traditional algorithms along with GPU hardware," says Caustic, referring to companies such as Nvidia which recently demonstrated real-time ray tracing using CUDA . However, the company adds that "if you've ever seen them demo their solutions you'll notice that while results may be fast--the image quality is underwhelming, far below the quality that ray tracing is known for."

    In other words, it was someone at Caustic talking about everyone else's solutions, the opposite of the implication of the summary!

    • Thanks. Your clarification actually caused me to rtfa. I figured if the summary was actually accurate, there was no point in the article, as it was likely just a bunch of slop. And while it may still have been a bunch of marketing slop after all, at least it was interesting. :)

    • Re: (Score:3, Insightful)

      by arcticstoat ( 993717 )
      That's kind of my fault for submitting such a long summary in the first place - the original I submitted to the Firehose makes it clear what the quote is referring to, but that submission was obviously too long for a general summary. I take your point about copying and pasting, though - I'll be less lazy next time :)
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Perhaps, but it's mostly ScuttleMonkey's fault for posting such a misleading summary.

      • Ironically, I failed to get my HTML correct, which suggests I should also remind you of the value of preview.

    • Not surprisingly, that text from the article is actually copied and pasted straight from Caustic's web site [caustic.com]. Granted, the text appears in the article in the context of a quote, but the way it is presented makes it sound like they actually spoke with a representative of the company.
    • For all the crap that Roland took, at least you were guaranteed that the summary on /. was not just a copy and paste of the first paragraph of the article. The copy and paste annoys me to no end.

  • "Caustic"? (Score:2, Funny)

    by Tablizer ( 95088 )

    Do they get their chips from Flammable Systems, and their capacitors from Toxic Components Inc?

    • Re:"Caustic"? (Score:5, Informative)

      by flewp ( 458359 ) on Friday March 13, 2009 @07:25PM (#27188069)
      I assume you're kidding, but for the uninitiated: Caustics also refers to light reflected and refracted by a curved object. Think the pattern of light cast by a glass on your desk, or thrown off by a ring sitting on a surface.
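
      In code form, for the same uninitiated readers: a toy ASCII demo of the cup-of-coffee caustic, where parallel rays reflecting off a curved wall pile up along a bright curve. (My own sketch in plain Python; it has nothing to do with Caustic's hardware.)

          # Parallel rays reflect off the inside of a circular "cup";
          # the reflected rays concentrate along a curve (the caustic).
          import math

          W = H = 41
          grid = [[0] * W for _ in range(H)]

          def plot(x, y):
              # Map the unit disk onto the character grid and bump a counter.
              i = int((1 - y) / 2 * (H - 1))
              j = int((x + 1) / 2 * (W - 1))
              if 0 <= i < H and 0 <= j < W:
                  grid[i][j] += 1

          for k in range(400):                        # incoming rays, travelling +x
              theta = -math.pi / 2 + math.pi * (k + 0.5) / 400
              px, py = math.cos(theta), math.sin(theta)    # hit point on the wall
              rx, ry = 1 - 2 * px * px, -2 * px * py       # reflect (1,0) about -p
              for t in range(60):                     # march along the reflected ray
                  s = t * 2.0 / 60
                  x, y = px + s * rx, py + s * ry
                  if x * x + y * y <= 1:              # stay inside the cup
                      plot(x, y)

          for row in grid:
              print("".join(" .:*#"[min(c, 12) // 3] for c in row))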
      • Re: (Score:3, Informative)

        by Tablizer ( 95088 )

        I assume you're kidding, but for the uninitiated: Caustics also refers to light reflected and refracted by a curved object. Think the pattern of light cast by a glass on your desk, or thrown off by a ring sitting on a surface.

        Or the skewed image of a star caused by an imperfect telescope lens.

        • by flewp ( 458359 )
          Wouldn't that fall under "light reflected and refracted by a curved object"? Isn't a telescope lens just that? Or is there something else going on I'm not aware of?
          • Re: (Score:3, Insightful)

            by flewp ( 458359 )
            Er, I'm an idiot. I wasn't thinking, and realized just after hitting 'submit' you were just providing another example, apologies good sir.
      • by treeves ( 963993 )
        If they ever put images that require ray tracing like this into the Acid3 test (I guess it'll be AcidX where X>3), Caustic Graphics solutions will be neutralized. Yuk yuk.
  • "Caustic" Graphics? Would you want to do business with a video company that names itself after a chemical that damages your eyes?
    • by DMUTPeregrine ( 612791 ) on Friday March 13, 2009 @07:34PM (#27188181) Journal
      Caustics are light reflected and/or refracted by curved surfaces. The pattern of light lines on the bottom of a pool is one of the more common types of caustic. The company chose a graphics term. The graphics field, in turn, chose a term that already has another, more widely understood meaning.
      • by Futurepower(R) ( 558542 ) on Friday March 13, 2009 @10:27PM (#27189599) Homepage
        Thanks for the explanation. However, names that require explanation are not good choices for the names of companies.

        I remember when I first saw a very poorly drawn, shaky image of an animal and read that it was a gnu, and read how clever the name was considered to be since it was, they said, "recursive": GNU's Not Unix.

        It didn't bother the enthusiasts that most people in the world can't pronounce the name and have never seen a Gnu.

        They found someone with artistic ability to make a better image of a GNU [pasteris.it], but I've seen no evidence that anyone with technical knowledge realizes the depth of the self-defeat in choosing an obscure reference to an obscure animal.

        To most people the word "caustic" means only "capable of burning, corroding, or dissolving".
        • To most people the word "caustic" means only "capable of burning, corroding, or dissolving".

          And most people aren't their target audience.

          They're selling to people who do know what caustics are. And in their minds caustics means "Too slow to use in an animation."

        • But when a graphics company chooses a graphics-related name, its target market will understand what it means even if no one else does.

        • It seems like GNU was not self-defeating, and that the gnu is no longer an obscure animal. Other than that...

          Say, what was your point again?

          BTW, the meaning of "caustic" to most people doesn't have much importance since most people won't be the direct customers of Caustic Graphics. The name does have a lot of meaning to the company's potential market: caustics are generally the most expensive and often critical part of photorealistic rendering. A company that chooses that word as part of its name has go

  • by MostAwesomeDude ( 980382 ) on Friday March 13, 2009 @07:27PM (#27188095) Homepage

    They've advertised Linux support too, but I haven't heard anything from these guys. Unless they're like nVidia and sit around killing kittens all day, it would be a good idea for them to actually do some research and figure out how GLX and DRI work. Even the ATI closed-source drivers still respect the GLX way of life.

    (nVidia replaces the entire DRI stack. DDX, GLX, DRI, DRM, all custom. fglrx doesn't replace GLX. Just in case you were wondering.)

    • by Sycraft-fu ( 314770 ) on Friday March 13, 2009 @07:50PM (#27188361)

      As with anything, I call vaporware until they show real silicon. Not because I think they're lying; most companies don't. However, there are plenty of overly ambitious companies out there. They think they have figured out some amazing way to leap ahead and get funding to start work... only to realize it's way harder than they believed.

      A great example was the Elbrus E2K chip. Dunno if you remember that; it was back in 2000. A Russian group said they were going to make the Next Big Thing(tm) in processors. It'd kick the crap out of Intel. Well, obviously this didn't come to pass. The reason wasn't that they were scammers; in fact, Elbrus is a line of supercomputers made in Russia. The problem was they didn't know what they were doing with regards to this chip.

      Their idea was more or less to put their Elbrus 3 supercomputer onto a chip... OK, fine, but the things that you can do at that scale don't always work at the micro scale. There are all sorts of new considerations. So while their thing was all nice in theory on a simulator, it was impossible to fab.

      Intel and AMD aren't amazing because of the chips they design, they are amazing because they can then actually fab those chips economically. You can design something that'll smoke a Core i7 in simulations. However you probably can't make it a real chip.

      This smells of the same sort of thing to me. Notice that they have press releases and some shiny demo pictures, but it was clearly done on a software simulator. Ok well shit, I can raytrace pretty pictures. That doesn't prove anything. Their card? Apparently not real yet, the picture of it is, well, just a raytrace.

      So who knows? Maybe they really do have some amazing shit in the pipeline. Doesn't matter though, they've gotta make it real before it matters. nVidia releases pretty pictures too. Difference is the pictures of the cards are of actual cards, and the pictures rendered are done on the actual hardware.

      I am just never impressed by sites heavy on the press releases and marketing, and light on the technical details, SDKs, engineering hardware pics, and so on.

      • by Anonymous Coward

        http://www.youtube.com/watch?v=B3qtq27J_rQ [youtube.com]

        ( no, not a realdoll advert - it's a vid of their current test card being twirled around in a human's hands. then again, maybe they raytraced that )

        • Big deal. 1) Take an nVidia card (or any other you want - obscure brand preferred), scratch any logo off it and "patch" it with your own logo 2) upload to YouTube 3) ???? 4) profit! I'm with the parent post - I was about to post it when I saw it already written: no SDK, no deal.
          • I personally know some of the guys involved (splutterfish). If they say that's the real card. That's the real card.

            The people behind this thing are relatively well known and respected names in the cg industry. They wouldn't be making these claims if it was a scam.

    • Re: (Score:3, Interesting)

      Your sig was quite funny after your post. :)
    • by neumayr ( 819083 )
      Yes well, and which approach is the one that actually works?
      nVidia's drivers have done their job for me, ATI's.. not so much.

      Sure, theoretically, it's better to stick with the given architecture like ATI's drivers do. I get that. But what good does it do for anyone if it hardly works?
      I'd rather replace half of X.org with nVidia's code if it means I get to use all my card's features.
      Isn't enabling vendors to do that actually one of the things open source advocates keep preaching about?
      • Re: (Score:2, Troll)

        But what good does it do for anyone if it hardly works?

        What's broken about it?

        nVidia's drivers have done their job for me...

        ATi has *NEVER* had good drivers. They fucking suck at writing drivers. They always have, and -if trends continue- always will.
        nVidia's rewrite of the majority of X.org graphics bits fails 'cause it's an ongoing *massive* duplication of effort. When the x.org folks put bugfixes or enhancements in to some component that nVidia has duplicated in their driver, everyone who depends on nVidia's software has to wait and see if nVidia will be arsed to fix their code. When everyone but nVidia

        • by neumayr ( 819083 )

          You feel the future of X.org is threatened by nVidia's policy? In what way?

          In the first part of your post you seem to be saying it's the nVidia users who're out of luck whenever new X.org bugfixes/features aren't ported. In which case I don't see how they're holding back the project.

          Then you go on saying that nVidia never helps out open source and basically accuse them of being damaging to X.org's health. Which I don't get. As I understand it, it's not X.org which depends on binary-only software, it's nVidi

          • First off, what's currently broken about the bits that nVidia has reimplemented?

            In which case I don't see how they're holding back the project.

            Meh. You're right, I was overreaching. The only thing I have that _remotely_ supports my position is a post by Aaron Seigo:
            http://aseigo.blogspot.com/2008/09/on-kde4-performance.html [blogspot.com]

            The money quote is here:

            This isn't the only issue in x.org, but it sort of highlights one of the big ones: x.org has some pretty big issues when it comes to doing graphics. That's why nVidia includes in their driver a rewrite of pretty much every bit of x.org that touches graphics. This in turn causes havoc of a new variety: does nVidia's twinview map nicely to xrandr/xinerama or does it get screwed up? (Answer: often the latter.) Issues that get addressed in x.org need to also be fixed in the nVidia driver if they exist there too, and vice versa. It's just not pretty.

            This is one of the primary reasons why I'm very excited about Gallium3D: it's a modern graphic stack done by graphics gurus that is designed for the real world of hardware. I've seen it action, and it's impressive.

            If I understand correctly, nVidia was (and still is?) pouring a lot of effort into rewriting x.org features, then keeping the improvements to themselves. They could be better citizens and distribute their modifications

            • by neumayr ( 819083 )

              First off, what's currently broken about the bits that nVidia has reimplemented?

              I don't know, I don't have any insight into X.org, and the issues it has concerning graphics. I've read somewhere its asynchronous nature is to blame, and that makes sense to me. But it's not something nVidia's drivers fix.
              The post you quoted OTOH does mention there are issues that nVidia fixes with their approach.

              I'm not pushing the "nVidia is evil" POV. I'm pushing the "Relying on closed-source components is foolish" POV.

              Often it really is foolish, and one could be tempted to turn it into a principle. The FSF most likely shares your view.
              Sticking to that principle is fine.
              I'm just not that much of a black and w

    • by fat_mike ( 71855 )

      Yep, sounds like every other Linux project to me.

      Announce something cool
      Ask the community to donate time
      Sit back and watch Linux users bitch about no release.

  • Larrabee has a dedicated z-buffer module; there was no place for bringing it up in the article... Plus, much research has already been done in this area which the article didn't cover. Here's an example: Toward a Multicore Architecture for Real-time Ray-tracing [wisc.edu] -- this architecture benefits from secondary rays by equipping each tile with a shared L2 cache and exploiting locality.
    Also, a 20% increase isn't much... really. With software simulators of new architecture, something between 10-20% increase in speed is
  • by j1m+5n0w ( 749199 ) on Friday March 13, 2009 @07:51PM (#27188369) Homepage Journal
    The article sounds like a press release, and leaves a lot of questions unanswered.
    • What sort of performance should we expect?
    • What are the limits on scene complexity?
    • Can their product handle dynamic scenes?
    • Is the process of sorting a collection of triangles into an acceleration structure done in software, or in hardware?
    • Do they support photon mapping, metropolis light transport, radiosity, path tracing, or any other global illumination algorithm?
    • How does the performance compare with high-performance software renderers like Arauna (open source) or OpenRT (closed source)?
    • How does the image quality compare with high-quality renderers like pbrt (open source)?
    • What geometric primitives are supported?
    • What sort of textures are supported?
    • What algorithms do they use? MLRTA? Packet tracing?
    • Do they use a Kd-tree, a BIH tree, a BVH (a minimal sketch follows below), or something else entirely?

    I shall remain skeptical until more information is forthcoming.
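
    For readers wondering what those acceleration-structure questions are about: below is a minimal sketch of one such structure, a BVH built by median split on the widest axis. (Purely illustrative Python; it says nothing about which structure Caustic uses or whether they build it in hardware.)

        # Minimal bounding volume hierarchy (BVH) build: median split on
        # the widest axis. A triangle here is just a list of three
        # (x, y, z) tuples.
        from dataclasses import dataclass
        from typing import Optional, Tuple

        Point = Tuple[float, float, float]

        @dataclass
        class Node:
            lo: Point                      # bounding box min corner
            hi: Point                      # bounding box max corner
            left: Optional["Node"] = None
            right: Optional["Node"] = None
            tris: Optional[list] = None    # leaf payload

        def centroid(tri):
            return tuple(sum(v[a] for v in tri) / 3.0 for a in range(3))

        def build(tris, leaf_size=4):
            lo = tuple(min(v[a] for t in tris for v in t) for a in range(3))
            hi = tuple(max(v[a] for t in tris for v in t) for a in range(3))
            if len(tris) <= leaf_size:
                return Node(lo, hi, tris=tris)
            axis = max(range(3), key=lambda a: hi[a] - lo[a])  # widest axis
            tris = sorted(tris, key=lambda t: centroid(t)[axis])
            mid = len(tris) // 2
            return Node(lo, hi, build(tris[:mid], leaf_size),
                        build(tris[mid:], leaf_size))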

    • Re: (Score:3, Informative)

      by Anonymous Coward

      performance: 20x speed-up ("from what" is unanswered at this time) to 200x speed-up down the line
      limits: limited more by your machine than the card
      dynamic scenes: it's an accelerator - if the renderer can, then it still can with this card
      sorting (acceleration structure building, I think you mean?): wouldn't know, but seeing as it's supposed to accelerate the ray tracing process, I would imagine it's either on the card or via their own algorithms in software
      photon mapping/MLT/etc.: it's an accelerator. If t

    • Re: (Score:1, Funny)

      by Anonymous Coward

      I hate to burst your academic bubble, but MLT has approximately zero use to any production-quality renderer.

      • I only mentioned it for the sake of completeness. I've never tried implementing it myself for my own projects, and don't plan to. However, I understand that it converges faster than photon mapping for some scenes lit by light sources that are mostly occluded, like light from underneath the crack of a door. In the photon mapping scenario, few of the photons would contribute to the final image.

        Movie studios and the like may not care about this, as they can just manually position their lights so this isn'

    • Re: (Score:3, Informative)

      You're asking a couple of incorrect questions.

      This isn't a renderer. This is a render accelerator.

      The idea is that Brazil, Mental Ray, V-Ray and FR can use this to accelerate the existing renderers without any sacrifice of quality or features.

      Think of it like SSE3. It's a new instruction set you can use to accelerate your software. It's not a hardware renderer. It's a hardware ray tracer. The distinction is subtle but important in this case.

      It should also be noted that Splutterfish (the makers of B

  • Sounds like a plain old accuracy vs. time trade-off. For a pixel in a given frame they choose some reflected/refracted rays to follow. They add noise or dither to their ray selection process so that over time a pixel will converge to a nearly correct value. Moving items won't get an exact solution right away, but they're moving, so the viewer won't notice that the shadow isn't quite dark enough immediately or that something in the mirror got a little jaggy for 3 frames.

    In most games, the viewer moves more than objec
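
    A sketch of the accumulation idea being described (my reading of the parent's guess, not anything Caustic has published): jitter the sample each frame and keep a per-pixel running average, so a static pixel converges over a few frames while a moving one just looks slightly noisy.

        # Progressive refinement: one jittered sample per frame, averaged.
        import random

        def one_sample():
            # Stand-in for tracing one jittered secondary ray; returns a
            # noisy estimate of the pixel's true value (0.5 here).
            return 0.5 + random.uniform(-0.2, 0.2)

        accum, count = 0.0, 0
        for frame in range(60):
            accum += one_sample()
            count += 1
            value = accum / count   # converges toward 0.5 as frames pass

        # When the camera or an object moves, this history goes stale; a
        # real renderer would reset or down-weight it, which is why moving
        # content shows the brief artifacts the parent describes.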

  • by Toonol ( 1057698 ) on Friday March 13, 2009 @08:55PM (#27188961)
    At some point (not too far away), the average size of a polygon in a scene will drop to one pixel or smaller. It seems like the different rendering techniques will merge together... a bit like the strong, weak, and electromagnetic forces merged.
    • by mdwh2 ( 535323 )

      I don't see how - surely ray tracing involves shooting out rays for each pixel to see what objects they hit, and then tracing out additional rays in turn from that point. Current methods involve drawing the objects directly and seeing which pixels they fill. I don't see how doing the latter with polygons smaller than one pixel makes it like ray tracing, any more than any other per-pixel method such as texture mapping does.

      A better example for merging of algorithms would be displacement mapping [wikipedia.org], which ca

    • The basic difference in pseudo code:

      • Rasterizer:
        for (i in polygons):
            for (j in raster positions):
                if (ray from j hits i):
                    draw pixel at j
      • Ray-tracer:
        for (j in raster positions):
            for (i in polygons):
                if (ray from j hits i):
                    draw pixel at j

      Of course this is a huge simplification. Both rasterizers and ray-tracers optimize their inner loops, the f

  • by master_p ( 608214 ) on Friday March 13, 2009 @09:01PM (#27189017)

    For something as ambitious as this, it's very strange that their web site has absolutely no demos of their products. No pictures, no videos, nothing.

  • Finally, 3DRealms can release DNF...it will only work with Caustic graphics cards, but it will have the absolutely bestest graphics this side of a Phantom console.

  • They say it's patent pending, but I can't find the patent application on the USPTO site. Anyone have better luck? I'm just curious how the hell they deal with the incoherency of secondary rays.
    • from the uspto.gov FAQ
      "Most patent applications filed on or after November 29, 2000, will be published 18 months after the filing date of the application.... Otherwise, all patent applications are maintained in the strictest confidence until the patent is issued or the application is published."
      This means the application is not available to anyone during that period (unless the patent issues earlier and thus becomes public).
  • I would just like to point out that ray tracing is not some holy grail of perfection - far from it. Indeed, buck for buck, rasterisation provides the same or higher image quality for a much lower cost.

    Now obviously there are instances where ray tracing helps: reflections and refractions can be generated on a per-pixel basis rather than rendering the reflection/refraction as a separate image and stretching/squishing said images in order to produce a similar effect. But saying this, if you render these separate im
    • Re: (Score:3, Interesting)

      by Rockoon ( 1252108 )

      To raytrace a soft shadow you have to send out at least 16 rays per shadow calculation for each light, and even then you're gonna suffer from nasty artefacts. Compare the raster solution, which involves rendering the z-buffer of any given light source and merely doing some blurring: same quality, much reduced cost.

      It seems to me that the algorithmic complexity grows just as fast for both rendering techniques in the case of many light sources. Both take steps linear in the number of lights.

      It's all well and good that rasterization is "fast" for what we use it for today. But its growth is linear to the number of primitives, while there are other methods that are sublinear. For a large enough number of primitives, the sublinear algorithm must be superior in performance.
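
      For reference, the ray-traced side of that soft-shadow comparison looks something like the sketch below (generic jittered area-light sampling; occluded() is a hypothetical scene query, and none of this is specific to any renderer):

          # N jittered shadow rays toward a disk-shaped area light;
          # the fraction that reach it unblocked gives the soft shadow.
          import random

          def soft_shadow(point, light_center, light_radius, occluded, n_rays=16):
              visible = 0
              for _ in range(n_rays):
                  while True:   # pick a jittered target on the light's disk
                      dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
                      if dx * dx + dy * dy <= 1:
                          break
                  target = (light_center[0] + dx * light_radius,
                            light_center[1] + dy * light_radius,
                            light_center[2])
                  if not occluded(point, target):   # one shadow ray
                      visible += 1
              return visible / n_rays   # 0 = fully shadowed, 1 = fully lit

          # Cost: n_rays shadow rays per light per shaded point, and with
          # few rays the result is noisy - hence the "nasty artefacts".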

      • by grumbel ( 592662 )

        But, its growth is linear to the number of primitives

        Is that much of a problem when the hardware gets faster at exponential rates?

        The whole problem I have with all this raytracing buzz is that so far, it hasn't produced even a single game or realistic tech demo (no, Quake4 with shiny spheres added doesn't cut it).

        Today's games haven't been plain rasterizers for a long time, thanks to shaders and all the post-processing they allow, yet when ray tracing and rasterization are compared, it's always the most basic forms of the algorithms that get compared, not what is used in

        • Yes it is a problem.

          All other things being equal, doubling the computing power of a raster card will net you double the number of primitives.
          All other things being equal, doubling the computing power of a raytracer card will net you the square of the number of primitives.

          As soon as these two technologies are on par with each other, rasterization dies on the following hardware generation.

          If a raytracer can handle 1 million objects in realtime, then it only takes a 5% computational performance improvem
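
          The arithmetic behind that, spelled out (my framing, under the common assumption that per-ray cost with an acceleration structure grows like log2(n) in the number of primitives, while raster cost grows like n):

              # If a ray tracer's per-ray work is ~log2(n), then a compute
              # budget C supports n = 2**C primitives.
              import math

              n = 1_000_000
              budget = math.log2(n)          # ~20 "units" of per-ray work
              print(2 ** (2 * budget))       # double the budget: n**2 = 1e12
              print(2 ** (1.05 * budget))    # +5% budget: ~2.1 million,
                                             # i.e. roughly double the objects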
    • The raster version of your shadow example is going to look horrible in comparison to the raytraced version. Close to the shadow casting object, the shadow should be sharper than far away. With raster you get an ugly uniformly blurred shadow no matter what the distance is.
      • by moogord ( 904702 )
        Actually, there are shader techniques that allow for just this using raster methods, and such techniques are used in modern games. Take a look at the nVidia GPU Gems books for examples (I forget which one; at a guess, the second).
  • Doesn't OpenGL 3 support real time raytracing already if you feed it enough hardware? Or did that not materialize in the final spec?

    • Re: (Score:3, Funny)

      by kimvette ( 919543 )

      Didn't OpenGL always support real-time ray tracing if you threw enough hardware at it? Unfortunately, "enough hardware" to render complex scenes in real time doesn't exist yet.

      Can you imagine what kind of 3D modeling rig God has? Somehow I don't think it's based on an ATI or nVidia chipset. ;)

  • ...massive list of failed graphics companies trying to do something novel in the last 10 years...

    Seriously, can anyone name a single company that has made inroads into the nVidia/ATI duopoly? I can probably name a half dozen who have tried...

  • Never trust a company that puts its name into just about every one of its products. That is just lame, and there is no reason the products should not turn out to be just as lame. With an attitude like that, there seems to be a lot of immature pride in that startup. They have probably struck gold in some calculation/algorithm and rushed to announce it will change the world. The truth is probably much more modest - they do have some technology or IP to offer, but it will require a lot of effort and hard work to make a d

  • I promised this 10 years ago, and where is my press? Pfft.

  • I like how there are no demos, screenshots, pictures, etc. Just words.

    Haven't we seen this before? Like, we totally discovered cold fusion in 1989. It was announced as true, so it must be!

    So this year we'll have fully-raytraced high-def images at 30-60fps. Obviously it'll happen. They told me so.
