3D Raytracing Chip Shown at CeBIT

An anonymous reader submits "As noted at heise.de, Saarland University is showing a prototype of a 3D Raytracing Card at CeBIT 2005. The FPGA is clocked at 90 MHz and is 3-5 times faster at raytracing than a Pentium 4 CPU clocked 30 times higher. Besides game engines using raytracing, there was a scene of a Boeing with 350 million polygons rendered in realtime."
  • by Anonymous Coward on Tuesday March 15, 2005 @12:19AM (#11940492)
    of the avi clips!!!
  • Re:Hardware encoding (Score:5, Informative)

    by foobsr ( 693224 ) * on Tuesday March 15, 2005 @12:22AM (#11940506) Homepage Journal
    So, the question is: Can these guys get ATI or nVidia to buy their chip?

    They are trying (surprise, surprise).

    From their site (already melted (yes, yes, mirrordot)): We are very much interested in evaluating new ways for computer games and therefore like to cooperate with the gaming industry. Thus if you are in such a position, please send us an email!

    CC.
  • Sweet deal! (Score:2, Informative)

    by dauthur ( 828910 ) <johannesmozart@gmail.com> on Tuesday March 15, 2005 @12:26AM (#11940527)
    350m polygons is god damned amazing. I'm sure the kids at Havok will find good ways to implement this, and I'm sure Abit, Biostar, etc. will too. I wonder how long it will take for them to make a PCI card you can put in as a graphics-card booster... or maybe even USB? This technology is extremely exciting. It would definitely take some load off graphics cards and drastically improve the framerate.
  • Re:Hardware encoding (Score:4, Informative)

    by TLLOTS ( 827806 ) on Tuesday March 15, 2005 @12:28AM (#11940534)
    It's doubtful that people from ATi or nVidia would have anything more than a casual interest in such a chip. Given the way GPUs have been heading, this device would be more of a step backwards than forwards. GPUs these days are far more flexible than they used to be, and there's every indication that trend will only continue, allowing developers to do what they want with the hardware, rather than being told what they can do with it and having no real choice either way.

    So to sum up, don't expect to see vastly specialised GPUs for raytracing hitting the market, at least not for the mainstream buyer. It's more likely that we'll see GPUs become more generalised to the point where raytracing can be implemented in software. Will they be as fast as a purpose-built chip like this? No, more than likely they won't. Will developers be able to do a whole lot more with them? Most definitely, and though that will come at a significant performance penalty for the moment, I think it's the right trade-off to make, as we should see far more creative uses of hardware put into practice, such as the work already being done to use GPUs for something other than graphics processing.
  • Re:Performance (Score:2, Informative)

    by Anonymous Coward on Tuesday March 15, 2005 @12:46AM (#11940622)
    Dude, way to miss the point entirely. First of all, this is a prototype, not a production model. Second, according to TFA [uni-sb.de], "Nvidia's GeForce 5900FX has 50-times more floating point power and on average more than 100-times more memory bandwidth than required by the prototype." In other words, imagine what this baby can do when backed up by today's graphics technology.
  • Re:A Boeing? (Score:4, Informative)

    by The boojum ( 70419 ) on Tuesday March 15, 2005 @12:50AM (#11940647)
    There's a standard model of a Boeing aircraft (777, IIRC) that's used as something of a test scene in the computer graphics community. It's about 350M triangles (everything down to the nuts and bolts, but modified so as not to give any trade secrets away) and over 4GB of data, so it gets used a lot for testing how the performance of an algorithm scales to large datasets.
  • by xtal ( 49134 ) on Tuesday March 15, 2005 @12:54AM (#11940666)
    Unfortunately the price is an order of magnitude (or two... or three) too high for FPGAs to really be a consumer tech. The issue, I think, is that an ASIC costs so little in volume that rather than spend all the money on an FPGA design that might be obsoleted next year anyway, a vendor is more likely to commit the design to silicon and then sell that.

    There's also the speed issue - I've spent DAYS of CPU time getting a design synthesized from VHDL for a moderately complicated IC built up from available cores.

    Factor in optimizing floorplans and the like, and you're talking about serious time commitments to optimize the hardware.

    It works; I've been paid to do it in the past; but it's not something I can see in the consumer market for the time being.

    An exciting hybrid is putting hard CPU cores on the same die as an FPGA. They've been around for a while, and though I haven't done any FPGA projects in ~18 months, I haven't seen any real movement outside of areas where FPGAs are already popular.

    See Open Cores [opencores.org] (no, not sores.. :-) ) if you're interested in this - there is open source hardware out there, some really good designs at that.

  • Re:FINALLY (Score:2, Informative)

    by captain igor ( 657633 ) on Tuesday March 15, 2005 @12:54AM (#11940667)
    I'm actually on a reconfigurable computing project that focuses on this. The simple fact is that right now we don't have good methods for generating HDL for a given task, and we lack the tools to properly swap tasks into and out of an FPGA while maintaining a reasonable communication speed. It's being worked on, but it will take time to develop the tools, and even longer before software makers start using them.
  • by The boojum ( 70419 ) on Tuesday March 15, 2005 @01:26AM (#11940806)
    I can't speak for the chip yet, but I've read the papers on their software system and know that it gets a good part of its speed by rendering triangles only. The secret is that they have a cleverly laid-out, cache-friendly BSP tree of the triangles, and their code uses SSE SIMD instructions to intersect 4 rays at a time with the triangles in the tree (a rough sketch of the packet idea is below). Rendering triangles only lets them get away with a more compact (and therefore more cache-friendly) memory structure and lets them avoid the performance penalty of switching or [vtable] dispatching on object types -- it's all one very tight loop to walk the tree and intersect triangles.
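A minimal sketch of that 4-rays-at-a-time idea in C++ with SSE intrinsics -- illustrative only, not the actual OpenRT/SaarCOR code; the packet layout and names here are assumptions:

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Structure-of-arrays packet: one SSE register holds the same component
// of all four rays, so a single instruction operates on all of them.
struct RayPacket4 {
    __m128 ox, oy, oz;   // ray origins
    __m128 dx, dy, dz;   // ray directions
    __m128 tmin, tmax;   // current parametric interval of each ray
};

// Distance along each of the 4 rays to the plane coord[axis] == split.
// (Real traversal code also handles rays parallel to the plane and uses
// the direction sign to decide which child node is the "near" one.)
inline __m128 PlaneDistance4(const RayPacket4& p, int axis, float split) {
    const __m128* o[3] = { &p.ox, &p.oy, &p.oz };
    const __m128* d[3] = { &p.dx, &p.dy, &p.dz };
    return _mm_div_ps(_mm_sub_ps(_mm_set1_ps(split), *o[axis]), *d[axis]);
}

// The packet only has to descend into both children if some ray's
// crossing point lies inside its [tmin, tmax] interval; otherwise the
// whole packet skips a child. One branch decides for all four rays.
inline bool PacketCrossesPlane4(const RayPacket4& p, __m128 t) {
    __m128 in = _mm_and_ps(_mm_cmpge_ps(t, p.tmin),
                           _mm_cmple_ps(t, p.tmax));
    return _mm_movemask_ps(in) != 0;  // is any of the 4 lanes set?
}
```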
  • Re:A Boeing? (Score:3, Informative)

    by The boojum ( 70419 ) on Tuesday March 15, 2005 @01:46AM (#11940877)
    Try http://graphics.cs.uni-sb.de/MassiveRT/boeing777.html when they're no longer being slashdotted.
  • Re:Anti-Planet (Score:4, Informative)

    by bani ( 467531 ) on Tuesday March 15, 2005 @01:46AM (#11940878)
    amd only "recently" implemented sse? they've had it since 2001.

    more recent amd chips have sse2, and sse3 on amd64 is just round the corner.
  • Mirror to video (Score:2, Informative)

    by DesiVideoGamer ( 863325 ) on Tuesday March 15, 2005 @01:48AM (#11940883)
    Here is a mirror [cmu.edu] to the video.
  • Re:Performance (Score:1, Informative)

    by Anonymous Coward on Tuesday March 15, 2005 @01:54AM (#11940912)
    While Intel didn't make a chipset that would handle 8x 90MHz Pentiums (IIRC there were many 2x Pentium 1 servers), there were 8x and 16x Pentium Pro (200, 233MHz) boards.

    Raytracing is stupidly simple to parallelize: all you need is one common memory bank that can be accessed by all 8 RPUs (Ray Processing Units), and the code to do it (see the sketch below)... I mean, this would be so much easier to do than an 8-way (or even a 2-way, for that matter) Pentium setup that it's a joke.
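A sketch of why it parallelizes so easily: every pixel is an independent computation against a read-only scene, so workers just carve up the image between them. This is illustrative C++; TraceRay and Scene are assumed placeholders, not any real API.

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

struct Scene;                                   // read-only, shared by all workers
uint32_t TraceRay(const Scene&, int x, int y);  // placeholder per-pixel trace/shade

// image must be sized width*height by the caller.
void RenderParallel(const Scene& scene, std::vector<uint32_t>& image,
                    int width, int height, int numWorkers) {
    std::atomic<int> nextRow{0};                // the only shared mutable state
    std::vector<std::thread> workers;
    for (int w = 0; w < numWorkers; ++w) {
        workers.emplace_back([&] {
            // Each worker grabs the next unclaimed row until none remain;
            // no locking is needed because rays never write to the scene.
            for (int y = nextRow++; y < height; y = nextRow++)
                for (int x = 0; x < width; ++x)
                    image[y * width + x] = TraceRay(scene, x, y);
        });
    }
    for (auto& t : workers) t.join();
}
```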
  • raytracing (Score:1, Informative)

    by Anonymous Coward on Tuesday March 15, 2005 @02:00AM (#11940934)
    Raytracing scales linearly with processing power, but only logarithmically with scene complexity.
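As a back-of-the-envelope illustration of that logarithmic claim (assuming a roughly balanced hierarchy, which real acceleration structures only approximate): per-ray cost grows with the depth of the tree, so the 350-million-triangle Boeing needs only about

$\log_2(3.5 \times 10^8) \approx 28$

node visits per ray on the way down, instead of 350 million brute-force triangle tests.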
  • Re:Surprising (Score:2, Informative)

    by iduno ( 834351 ) on Tuesday March 15, 2005 @02:31AM (#11941028)
    If you have a look at the sample pictures, they aren't extremely detailed compared to those used by Pixar. To get something running near realtime at the quality Pixar uses, you would need hundreds, if not thousands, of better and more expensive FPGAs running concurrently to produce a reasonable frame rate. Considering the better FPGAs can cost around $1000 apiece, this could make it too expensive to put something together at present. While FPGAs are good for designing hardware to handle ray tracing, it would probably be cheaper to design and build a custom chip to do realtime ray tracing that would be suitable for commercial use.
  • by donglekey ( 124433 ) on Tuesday March 15, 2005 @02:38AM (#11941057) Homepage
    Most ray tracing renderers trace triangles (a sketch of the usual triangle test is below). Blue Sky's CGI Studio is the only renderer that I have heard of that traces NURBS surfaces directly.
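For reference, the inner loop of a triangles-only tracer is typically some variant of the standard Möller-Trumbore ray/triangle test. A minimal sketch in C++; Vec3 and its helpers are assumptions, not any particular renderer's API.

```cpp
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true (and the hit distance t) if the ray orig + t*dir crosses
// the triangle (v0, v1, v2). u and v are barycentric coordinates.
bool IntersectTriangle(Vec3 orig, Vec3 dir,
                       Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -kEps && det < kEps) return false;  // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;       // outside the v0-v1 edge side
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;   // outside the other edges
    t = dot(e2, q) * inv;                         // distance along the ray
    return t > kEps;                              // hit must be in front of origin
}
```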
  • Re:FINALLY (Score:3, Informative)

    by Sycraft-fu ( 314770 ) on Tuesday March 15, 2005 @04:05AM (#11941342)
    One interesting place I found them used is high-end Cisco hardware. The 6500 series layer-3 switches feature Altera FPGAs on many of their boards, as well as ASICs and general-purpose CPUs. I've never been able to find out whether it's just cheaper than an ASIC, given the (relatively) low production volume, or whether they actually update what the chips do in the field with new firmware releases.
  • Re:Performance (Score:1, Informative)

    by Anonymous Coward on Tuesday March 15, 2005 @04:25AM (#11941411)
    For a discussion of forwards and backwards ray tracing, see here [tuwien.ac.at].
  • Re:Sweet deal! (Score:1, Informative)

    by Anonymous Coward on Tuesday March 15, 2005 @04:34AM (#11941429)
    I'm not sure I've managed to explain myself very well, so feel free to call me shitcockhead, or whatever is popular these days.

    Okay.

    Hey, shitcockhead! It's only like the very first sentence on their website which mentions that they have a programmable shader implementation in the card:

    Visit us at CeBIT 2005 in Hannover, Germany. In Hall 9, booth A40 we are going to present the new version of our Realtime Ray Tracing Hardware Solution featuring fully programmable shading and geometry on our FPGA based prototype.

  • Re:oh yeah (Score:2, Informative)

    by Buzzard2501 ( 834714 ) on Tuesday March 15, 2005 @04:59AM (#11941491)
    Raytracing is not required for good graphics. Pixar's Photorealistic RenderMan didn't even have raytracing until version 11, which came out *after* Monsters, Inc.
    IIRC, Pixar previously used BMRT (which supported raytracing) along with PRMAN to generate some of the scenes.
  • Re:FINALLY (Score:2, Informative)

    by TDO48 ( 248810 ) on Tuesday March 15, 2005 @05:26AM (#11941573)
    Not true about the small number of rewrite cycles: large FPGAs (e.g. Virtex, Apex) are SRAM-based, so their configuration is stored in memory and you can reprogram it as many times as you want. Smaller devices (e.g. CPLDs) or configuration controllers do have flash, and those have limited reprogramming capabilities.
  • by fizze ( 610734 ) on Tuesday March 15, 2005 @07:21AM (#11941934)
    also view: http://science.slashdot.org/comments.pl?sid=142507&cid=11941863 [slashdot.org]

    it's a Virtex-II 6000-4, from some PDF at http://www.saarcor.de/pubs.html [saarcor.de]
  • Re:Hardware encoding (Score:5, Informative)

    by Hortensia Patel ( 101296 ) on Tuesday March 15, 2005 @09:57AM (#11942655)
    They have hierarchical structures to test which group of triangles a ray may intersect, and it scales more like O(n log n).

    I assume you're talking about kd-trees... these do indeed offer very nice performance characteristics, but they're designed for static geometry. Efficient raytracing for dynamic geometry (moving or deforming objects) is AFAIK still far from "solved".

    If you add this to the fact that raytracing lets you have perfectly smooth non-polygonized objects

    and take away the fact that they don't particularly like the arbitrary triangle meshes that make up the vast majority of real datasets...

    Flexible and robust realistic reflection and refraction

    Yes, "Flexible and robust" is the killer. And not just for refraction/reflection; there's still no fully-general, clean, robust method of shadowing for rasterizers, and it's not for want of trying. Radiosity is a joke. Attempts to get realism out of current rasterization approaches are bodges piled on kludges piled on hacks. It became clear some time ago that the technology was heading up a dead end. Of course, so much has been invested in making that dead-end fast that it's going to be hard to take the performance hit of moving to a better but less optimized approach.

    I suspect we'll eventually end up with a hybrid, rather like current deferred-shading techniques. It'll be interesting to watch it all pan out.
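To illustrate the robustness point above: in a ray tracer a shadow is just one occlusion query per light, with no shadow maps, stencil volumes, or per-technique special cases. A minimal C++ sketch; Occluded() here is an assumed any-hit query, not a real API.

```cpp
struct Vec3 { float x, y, z; };
struct Scene;  // opaque scene handle for this sketch

// Assumed any-hit query: does any geometry block the segment p -> q?
bool Occluded(const Scene& scene, Vec3 p, Vec3 q);

// Shading reduces shadowing to a single yes/no visibility test per
// light -- exact for any geometry the tracer knows how to intersect.
Vec3 ShadePoint(const Scene& scene, Vec3 hitPoint, Vec3 lightPos,
                Vec3 litColor, Vec3 shadowColor) {
    return Occluded(scene, hitPoint, lightPos) ? shadowColor : litColor;
}
```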
  • by Montag2k ( 459573 ) <jgamage@g m a i l .com> on Tuesday March 15, 2005 @11:25AM (#11943279)
    Another reason that people don't use FPGAs much in consumer applications is the security of the IP on the FPGA. They are loaded at power-on from a PROM chip, and it is a somewhat trivial task to read the entire configuration bitstream as it loads. This would be a nightmare for companies like Nvidia and ATI, who value their custom hardware.

    Fortunately, there are some [actel.com] companies [latticesemi.com] that are incorporating flash memory onto their FPGAs instead of using the standard [xilinx.com] SRAM. The problem is that flash-based FPGAs are usually a few generations behind SRAM-based FPGAs in terms of die size (and hence capacity and speed).

    I think that as flash-based securable FPGAs become more popular, cheaper, and less power-hungry, we'll start to see cards for the computer that come with completely configurable hardware.

    -Montag
