Graphics Software

How Bump Mapping Works

The Chef writes: "Tweak3D has a pretty good article explaining how bump mapping works with 3D-accelerated video cards. They cover all the basics of bump mapping and the advantages and disadvantages of several methods. Now if someone asks me how per-pixel shading or environment mapped bump mapping works, I'll have an answer (but I'm not sure if that's a good thing)." With a new generation of graphics cards just introduced, this is some interesting reading.
This discussion has been archived. No new comments can be posted.

  • Real "hard" core gamers turn off every special effect and level of detail to maximize frame rate, no matter how powerful the machine. All things being equal, the person with the higher frame rate (and ping) wins. I'm guessing that if the renderer actually allowed simple wire frames with basic z buffering, "hard" core gamers would set it that way.
  • It's often interesting to compare the effects of
    various rendering models. Bump mapping,
    combined with texture mapping, can indeed be
    quite impressive (and also pretty efficient).

    However, displacement mapping adds a further
    refinement. This is when a physical (positional)
    displacement is specified for the surface and
    then rendered. It would often be used in
    conjunction with texture- and bump-mapping.

    If you look at a bump-mapped surface, you might
    perceive apparent depth in the surface features.
    But, without displacement mapping, those features
    will have absolutely no effect on the silhouette,
    since they don't affect the geometry.

    I believe I saw this in some of the stills from
    the BBC "Walking with Dinosaurs" video; a
    dinosaur with a very wrinkled and bumpy skin had
    a perfectly smooth silhouette. It still looked
    stunning.

    This can also be seen if the bump mapped surface
    appears to have protrusions and the surface is lit
    by a beam from near the tangent plane of the
    surface. With displacement mapping, that bump
    would be "real", and would therefore be able to
    cast a shadow on the surrounding surface. With
    just bump mapping, the "bump" doesn't protrude,
    so it wouldn't cast that shadow.

    Both of these examples are similar in character,
    and neither will tend to jump out at you unless
    you go looking for it.

    Over time, more and more of these refinements
    (generally developed for software by the early
    1980s) are rolled into graphics cards as standard
    features. I wonder how long it will be before
    real-time radiosity rendering costs $150 for
    your desktop.
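
    A minimal C++ sketch of the distinction (the names and the simple
    Lambertian shading are illustrative assumptions, not any particular
    card's pipeline): bump mapping perturbs only the normal fed into
    the lighting equation, while displacement mapping moves the point
    itself, which is exactly why only the latter can change the
    silhouette or cast real shadows.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3  add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3  scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
    float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    Vec3  normalize(Vec3 v)      { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

    // Bump mapping: the position is untouched; only the shading normal
    // is nudged by the height-map gradient, so the outline stays smooth.
    float bumpShade(Vec3 n, Vec3 gradient, Vec3 lightDir) {
        float d = dot(normalize(add(n, gradient)), lightDir);
        return d > 0.0f ? d : 0.0f;              // Lambertian term
    }

    // Displacement mapping: the vertex really moves along the normal,
    // so bumps show up in the silhouette and can shadow their neighbours.
    Vec3 displace(Vec3 p, Vec3 n, float height) {
        return add(p, scale(n, height));
    }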
  • Rejecting as many of those as possible as quickly as possible ... is a problem which many bright people have been hammering on for 30 years now.

    You mean stuff like SurRender Umbra [hybrid.fi]? Yes, research is being done. :) While this is something of a shameless plug, Umbra is not vaporware, and in fact the component was completed by the end of last week and is ready to ship. It took several man-years to build, so it's not exactly something you write out of a hobby. There's also a fairly in-depth technical overview [hybrid.fi] on the site. From that document, I quote:

    • "Umbra uses a combination of several new algorithms to perform the visibility determination as quickly and reliably as possible. The sole target of the library is to produce the tightest possible set of visible objects in the smallest amount of time.

      The library uses a number of techniques and data structure organizations that makes the visibility determination process output sensitive. Output sensitivity means that the time used to solve a problem, i.e. visibility evaluation, is dependent on the size of its output (number of visible objects) rather than its input (number of objects in the scene). "

    Another company, Fluid Studios [fluidstudios.com] is also taking a shot at the same problem, although they go for eliminating individual triangles instead of models/components.

    While these kinds of dynamic methods are certainly more expensive to perform at run-time than pre-calculated visible sets, they are not mutually exclusive with them. You can still use portals and cell-based visibility as well as static PVS if desired, but what's important is that we're finally reaching a point where you don't need to.

    It's all about freedom, in the end.
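
    To make "output sensitive" concrete, here is a rough C++ sketch of
    the idea only - it has nothing to do with Umbra's actual internals,
    which aren't public beyond the quote above: put objects in a
    bounding-volume hierarchy and reject whole invisible subtrees with
    a single test, so the time spent tracks the number of visible
    objects rather than the size of the scene.

    #include <vector>

    struct AABB { float min[3], max[3]; };

    struct Node {
        AABB bounds;
        std::vector<Node*> children;   // empty => leaf holding one object
        int objectId;
    };

    // Toy "camera": visibility here is just overlap with a view-volume
    // box. A real library would test frustum planes and occluders.
    bool maybeVisible(const AABB& box, const AABB& view) {
        for (int i = 0; i < 3; ++i)
            if (box.max[i] < view.min[i] || box.min[i] > view.max[i])
                return false;
        return true;
    }

    // An invisible interior node is rejected in O(1), culling its whole
    // subtree without ever touching the objects stored below it.
    void collectVisible(const Node* n, const AABB& view, std::vector<int>& out) {
        if (!maybeVisible(n->bounds, view))
            return;                        // whole subtree skipped
        if (n->children.empty()) {
            out.push_back(n->objectId);    // visible leaf joins the output set
            return;
        }
        for (const Node* c : n->children)
            collectVisible(c, view, out);
    }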

    Jouni
    --
    Jouni Mannonen : 3D Evangelist @ SurRender3D.com [surrender3d.com]

  • The whole point of computer games is to lose yourself in the experience.

    compare "I pressed this button, these pixels move up the screen and caused those pixels to move in various directions. My score increase by 1"

    to

    "I pulled the trigger and put a rocket launcher right in his back. Gibs _everywhere_. One more frag for me!"

    If the game looks more like real life (or at least more like the Matrix), it's easier to get involved.

    Of course, it helps if the gameplay rocks too, but graphics can add an awful lot.
  • The entire universe is fractal... Ferns grow by a simple fractal algorithm. Seemingly random anything, from data bit errors to turbulence to the bursty, asymmetric ebb and flow of the internet "tide", follows the same patterns. All natural shapes can be reduced to and created from fractal algorithms... so says chaos theory, overused and way too "trendy" as it is; it does apply to everything.
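
    The fern claim is literal, as it happens: Barnsley's iterated
    function system draws one from just four affine maps picked at
    random. A quick C++ sketch using the standard published
    coefficients (pipe the output into any plotter):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        double x = 0.0, y = 0.0;
        for (int i = 0; i < 100000; ++i) {
            double r = std::rand() / (double)RAND_MAX, nx, ny;
            if (r < 0.01)      { nx = 0.0;                  ny = 0.16 * y; }
            else if (r < 0.86) { nx =  0.85 * x + 0.04 * y; ny = -0.04 * x + 0.85 * y + 1.6;  }
            else if (r < 0.93) { nx =  0.20 * x - 0.26 * y; ny =  0.23 * x + 0.22 * y + 1.6;  }
            else               { nx = -0.15 * x + 0.28 * y; ny =  0.26 * x + 0.24 * y + 0.44; }
            x = nx; y = ny;
            std::printf("%f %f\n", x, y);   // each point lands on the fern
        }
        return 0;
    }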

  • Boards existed and were in use by a limited group of developers. The board layout was incredibly neat and clean compared to anything else on the market. Bump mapping didn't cost much more in terms of fill rate than multi-pass texturing did, so frame rates were good.

    Bitboys may be notorious for not shipping production boards, but they've been very productive over the past few years.

    I'm sure a product will ship when it's done. :)

    Jouni
    --
    Jouni Mannonen : 3D Evangelist @ SurRender3D.com [surrender3d.com]

  • by JosephMast ( 197644 ) on Monday June 12, 2000 @11:45PM (#1006750)
    for all the hard core gamers out there... does this sort of technology really help make better games?
    On one hand, I really am impressed with the graphics in the majority of today's games, and perhaps this will allow game designers to think less about the technology (even "bad" code can hit 60fps with a smoking graphics card) and work on the game...
    On the other hand, I think that technology shouldn't be the focus of a good game (aka who cares if the orange is triple-pass bump mapped at 140 fps if the game isn't fun to play), while games like Diablo (640x480 resolution) still get dusted off and played... any thoughts?

  • And there are better books :)

    Just IMO, but this was my major area in college.

    The first Foley and Van Dam didn't deserve its reputation. It was very basic and pretty empty. If you have studied linear algebra, you already knew most of what it had to offer. (And if you haven't, you shouldn't be trying to write real 3D code.)

    The new edition is much better but still hardly the best book around. I've always preferred the Rogers book, Procedural Elements for Computer Graphics. His organization is a bit awkward, but there are lots of good discussions of the pluses and minuses of various rendering techniques and how to implement them.

    For specific algorithms, the Graphics Gems series is the BEST high-performance graphics reference around.

    I recently saw another excellent hard cover text but I'm having trouble pulling the name to mind.

    The moral is: go to the bookstore and read a good chunk of the book before you buy it. Don't just buy van Dam because of its reputation.
  • if the renderer actually allowed simple wire frames with basic z buffering, "hard" core gamers would set it that way

    No. And that's because no consumer level card supports HW wire frame rendering. In 95% of all cases polygon rendering is faster than wire frame (some versions of GeForce support HW line rendering though).
    _________________________

  • "Per-pixel shading" is getting there, but it's still not a true Phong shade. Before lighting looks at all realistic, the cards will have to reach the point of a real Phong shade.
  • This bothers me. Bump-mapping involves perturbing the geometry of a surface. What is being done in pretty much all of the examples is a clever lighting hack. It incorporates about half of the bump-mapping algorithm, but doesn't bother to perturb... This is annoying, as the actual process to shade the pixels is the same process that could be used to alter their location. Shame on the video-card manufacturers for not properly implementing this. Shame on the developers and manufacturers for propagating the belief that "EMBM" or other styles of shading is actually bump-mapping! Shame on us for being clueless!

    That being said, it's a nice fast hack to get something LIKE bump-mapping. Downside: extreme color washout (over-burning or over-dodging for the Photoshop crowd, eh?), and funky lighting considerations need to be taken into account. Oh sure, doing it properly would take a few extra clock-cycles; god forbid anyone would make CUSTOM hardware to do this... oh, wait... the equations almost exactly match what the texturing units do, with a few extra steps... oh wait... there are loads of texturing units all in parallel in hardware now, so maybe we could get them to run in parallel to do real bump-mapping... gah!

  • What patent? I haven't heard about that. Bump mapping as a technique is old (way older than Bitboys). Foley et al. from 1991 (IIRC) has it; I haven't checked the older book. With OpenGL it's a question of getting a standardized bump-mapping extension. DirectX just adds new functions and changes the version number, whereas OpenGL evolves with cleanly bundled extensions that are separate from it.

  • I'm not sure about this, but wasn't the Pyramid3D supposed to be produced jointly with Alliance Semiconductor? That was (I think) the company that designed the 2D core for the 3dfx that was incorporated into the infamous Voodoo Rush (3dfx later went back and created their own core for the Banshee).

    Then, after the Pyramid3D died, Alliance announced some sort of plan to develop their own graphics chipset, but nothing ever materialized.
  • I'm an 'old school' Quake player - Used to play in a bunch of the big tournaments/clans and such up until about the time Q2 came out.. then I 'retired'.

    It was standard practice to get the level of detail down as far as possible. There were about a dozen commands you would use in your config file to blur the textures, take out the 'realistic' movement (head bobbing, tilt when turning), and anything else that was even remotely extraneous to the game (environmental sounds, etc.). I'm sure it still happens today with the first-person shooters (Unreal, Q3, HL, etc.).

    Graphic quality, to the 'professional' gamer, meant nothing - it doesn't add to the game and usually distracts you.

    Of course, the thing is, only about 5-10% of people out there are hard core gamers. You'd show up at tournaments, get put on some monster of a machine with every high-tech card in it that ran 100fps in 640x480, and people would start to cry when you executed your config file that turned the game into a mess of blurry green/brown squares.

    Of course in that sense computer games are like movies. 90% of the people want the razzle dazzle. That's how you sell the product. You could probably make a good comparison between Lucas and Carmack who try to push the technology because they're interested in it and aren't quite so focused on just the story. The hype surrounding them based on the 'effects' they're using is massive.. but the actual end product isn't what the 'hard core' people are interested in. (Hard core movie people being your artsy 'movie critic'... not your hard core Star Wars fan :) )

    With Quake 2&3 I was always kind of shocked at the amount of work that they put into the models. It's utterly impossible (IMO) to notice the work put into them at all in a game. There was barely any noticeable 'in game' difference between the models in Q1 and Q2 - and no gameplay difference at all. Fiend? Big tank guy? You want to know the one model that ever impressed me most? The DOOM II Arch-Vile. Now that created a gaming experience :) You would wake up at night in a cold sweat thinking you'd just heard one :)

    Having said all that, I still like to have good effects in other types of games. First-person role-playing games... I'd love to see a re-make of Ultima IV-VI using a 3D engine. Maybe one day, in my little fantasy world, Carmack and Garriott will form their own company. :)

    Jason

  • It took several man-years to build, so it's not exactly something you write out of a hobby.

    Don't lie Jouni - I've seen your picture in the credits for INSIDE :) And that used SurRender the mighty 3d engine too yes.

    While these kinds of dynamic methods are certainly more expensive to perform at run-time than pre-calculated visible sets, they are not mutually exclusive with them. You can still use portals and cell-based visibility as well as static PVS if desired, but what's important is that we're finally reaching a point where you don't need to

    Cool - I look forward to seeing some new games with the new component.

  • Nope. The environment bump-mapping technique was originally a well-known software technique often used by people in the European demo scene (also called the "euroscene"). Bitboys ky., who constructed Pyramid 3D, probably took the algorithm from there, as many of them are known to be ex-sceners (from the well-known group "Future Crew", if I'm not mistaken).

    My own opinion about bump-mapping and different APIs is that different hardware companies should stop creating different APIs for their different bump-mapping algorithms and instead converge on high-level interfaces + hinting.

  • Another angle in regards to this question:

    As a game developer is able to kick more and more polygons out at higher and higher rates, they can return to a fun little thing called...

    NEVERENDING, UNGODLY HORDES OF SHAMBLING EVIL!

    or, many more objects on the screen at a time. Remember back in the days of games like DOOM and Abuse when you'd walk into a room and... er... 'stage a tactical withdrawal' with about 50 baddies on your ass. Until recently, 3D games could only muster a few enemies at a time without grinding the system down. As cards improve, though, expect to see more tactical withdrawal situations in 3D games...


  • Here is an interesting computer graphics technique using bump mapping for wrinkling, developed at a Japanese research center:

    Facial Animation using Bump Mapping [atr.co.jp]

    The results are quite impressive with facial animation and garment wrinkling...

    Zeb.
  • There is a way to abuse blending and texture mapping to fake per-pixel lighting in OpenGL. Check this [opengl.org] out. Basically, you calculate the impact of each axis as a separate bitmap.

    I recall Brian Hook saying long ago that Quake3 does 6 bump-map passes, so I wonder if they use this technique. There's a follow-up article on the OpenGL site using a 12(!) pass lighting method.
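
    The trick being described splits the per-pixel dot product N.L into
    one additive pass per axis. A CPU paraphrase of the arithmetic (a
    sketch only; real frame-buffer blending is unsigned, so signed
    components need an extra bias/scale step that is omitted here):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One grayscale "pass" per axis: frame += normalAxis * lightAxis,
    // standing in for an additive blend of that axis' bitmap.
    void addAxisPass(std::vector<float>& frame,
                     const std::vector<float>& normalAxis, float lightAxis) {
        for (std::size_t i = 0; i < frame.size(); ++i)
            frame[i] += normalAxis[i] * lightAxis;
    }

    // nx, ny, nz hold per-pixel normal components; (lx, ly, lz) is the
    // light direction. Three passes reconstruct N.L at every pixel.
    std::vector<float> fakePerPixelLighting(const std::vector<float>& nx,
                                            const std::vector<float>& ny,
                                            const std::vector<float>& nz,
                                            float lx, float ly, float lz) {
        std::vector<float> frame(nx.size(), 0.0f);
        addAxisPass(frame, nx, lx);    // pass 1
        addAxisPass(frame, ny, ly);    // pass 2
        addAxisPass(frame, nz, lz);    // pass 3
        for (float& v : frame)
            v = std::min(1.0f, std::max(0.0f, v));   // clamp like a frame buffer
        return frame;
    }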

  • Really.

    I was told differently and couldn't find any support one way or the other, either on Nvidia's site or in that article.

    Do you have a good reference for the actual calculations being performed by the card? One has to wonder, if this is the case, why they didn't SAY Phong shading anywhere in their literature...

  • Bump mapping has always been a clever lighting hack. The algorithm in the leading 3D software packages is simply better than the one being incorporated into the hardware. Deforming the actual surface of a model, which is what you're talking about, is displacement mapping.

  • The lighting hack is "bump mapping". What you think is bump mapping is called "displacement mapping".

    Displacement mapping is a lot harder. You don't just "move the pixels". If you moved the pixel, you would leave a hole where the pixel was! What do you put there?

    Displacement mapping is usually done with adaptive subdivision, where the polygon is divided up into triangles, the corners are displaced vertically, and the triangles are divided, repeatedly, until each is "small enough" that bump mapping can be used on it (deciding this is the tricky part). This could be done in hardware, and likely it will be someday soon, but doing bump mapping in hardware should not be scoffed at.
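
    A rough sketch of that subdivide-then-displace loop ("small enough"
    is reduced here to a plain edge-length limit, and the ripple stands
    in for a real displacement-map lookup along the normal; both are
    assumptions for illustration):

    #include <cmath>
    #include <vector>

    struct V3 { float x, y, z; };

    static V3 mid(V3 a, V3 b) { return { (a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2 }; }
    static float dist2(V3 a, V3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Toy displacement: a ripple in z. A renderer would sample the
    // displacement map and push the point along its interpolated normal.
    static V3 displacePoint(V3 p) {
        p.z += 0.05f * std::sin(10.0f * p.x) * std::sin(10.0f * p.y);
        return p;
    }

    // Quarter each triangle until every edge is shorter than `limit`,
    // then displace the corners; the emitted triangles are the ones
    // small enough to finish off with ordinary bump mapping.
    void subdivide(V3 a, V3 b, V3 c, float limit, std::vector<V3>& out) {
        float l2 = limit * limit;
        if (dist2(a, b) < l2 && dist2(b, c) < l2 && dist2(c, a) < l2) {
            out.push_back(displacePoint(a));
            out.push_back(displacePoint(b));
            out.push_back(displacePoint(c));
            return;
        }
        V3 ab = mid(a, b), bc = mid(b, c), ca = mid(c, a);
        subdivide(a, ab, ca, limit, out);
        subdivide(ab, b, bc, limit, out);
        subdivide(ca, bc, c, limit, out);
        subdivide(ab, bc, ca, limit, out);
    }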

  • by Perdo ( 151843 ) on Monday June 12, 2000 @10:22PM (#1006766) Homepage Journal
    Is hair just sharp bumps? Is there a limit to how sharply convex a "bump" can be? Are we moving toward fractal geometry: big brushes for big objects, smaller brushes to add definition to the large brushes, bumps on the smallest brushes? Could fractal algorithms be used to generate complex shapes/textures without the standard 3D modeling technique of brushes + new bump mapping?
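
    One classic coarse-to-fine construction along those lines is
    midpoint displacement: lay down the big features first, then add
    ever smaller random detail at each halving. A minimal 1-D C++
    sketch (the size and roughness values are arbitrary):

    #include <cstdlib>
    #include <vector>

    // size must be a power of two; roughness < 1 shrinks the detail
    // added at each finer level, like switching to a smaller brush.
    std::vector<float> fractalRidge(int size, float roughness) {
        std::vector<float> h(size + 1, 0.0f);
        float amp = 1.0f;
        for (int step = size; step > 1; step /= 2, amp *= roughness)
            for (int i = step / 2; i < size; i += step) {
                float noise = amp * (std::rand() / (float)RAND_MAX - 0.5f);
                h[i] = 0.5f * (h[i - step / 2] + h[i + step / 2]) + noise;
            }
        return h;   // heights along a ridge; height maps extend this to 2-D
    }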

  • What I would like to know is whether it is more efficient to do this in hardware or in software, compared to the cost (in terms of hardware chip area). Are there any benchmarks available? Maybe it would be better to use the chip area for things that are more commonly used.
  • by Anonymous Coward
    this strip [penny-arcade.com] at Penny Arcade [penny-arcade.com]:

    http://www.penny-arcade.com/dance.shtml [penny-arcade.com].

    The rest of the site is funny as well, if you're into that sort of thing.

  • This is a bit offtopic, but are you aware that recently a lot of work has been done which has proved that net traffic is fractal in nature? Till now, all modelling and simulation of networks during design was done using a Poisson distribution, but these findings, based on trace tapes, may change all that.
  • by Ella the Cat ( 133841 ) on Tuesday June 13, 2000 @12:16AM (#1006770) Homepage Journal

    I was in a hurry and my machine fell over as I was about to post this interesting link from the nVIDIA site [nvidia.com]. Lots to learn about therein. Sorry if I posted twice, give me a break.

  • No, hair is not sharp bumps. Bump mapping is just a lighting trick. You're talking about displacement mapping, which adds true geometrical complexity. In that setting hair could be sharp bumps, but unless you want your object to look like a cactus, you'd want some sort of nonlinear transformation there as well. Usually hair is implemented through either lots and lots of geometry, or a volume shader.

    Fractal algorithms have been used in computer graphics modelling for ages, though not in consumer-level hardware yet, and probably not very soon either.




    A penny for your thoughts.
  • As a game engine developer, now that the processors, graphics cards and geometry engines are getting faster and faster, we have to concentrate on improving the graphics in order to stand up to the competition. This means putting all of the effects we can dream up into the game - some may add to the game itself while others are just eye candy, that's true.

    However, all this increased grunt also leaves a lot more processor power left for the game itself; which we are still learning to use fully in terms of advanced AI, better physics and other such things.

    All these things considered, it's true that a good game is a good game regardless of the beautiful graphics - how many people play Minesweeper, after all? I mean, my favourite game of all time is an old Sega Master System game. On the other hand, how cool was it in Zelda 64 when you got the Lens of Truth and used it for the first time, or in Aliens vs Predator when you first used the Predator's special vision - it all adds to the game as a whole.

    Finally, and rant over, with regards to 'even "bad" code can hit 60fps with a smoking graphics card' - I'm not so convinced. In anything but the most graphics-hungry FPS, the game logic itself takes up a good percentage of the processor's power, and it's very easy to let that slip in a "badly" coded game too - giving you a slow frame rate even on a GeForce 2'd-up machine.

  • Is hair just sharp bumps? Is there a limit how sharply convex a "bump" can be?

    The bumps in hair are far too high-frequency for bump mapping to work on it; at least not unless the camera is almost in the hair! Also, bump mapping tends to make things look a little more shiny than hair... well, unless you take the 'because I'm worth it' effect into play ;)

    Are we moving toward fractal geometry? big brushes for big objects, smaller brushes to add definition to the large brushes, bumps on the smallest brushes.

    Can't say much because of contractual obligations, but these kinds of technologies will almost certainly be employed on the newer games consoles, as polygon drawing ability far outstrips the RAM needed to store all the polys. Fractal and other procedurally generated geometry may well be the way out of this.

  • Is hair just sharp bumps?

    Nope. One way to make hair: start with a wooden log lit by a spot in the middle (this will be the sheen); stretch it out and alpha it a few hundred times onto a surface with slight longitudinal displacements and slight bending - now it should look like a sheaf of wheat. Take the resulting texture and map it two or three levels deep onto a hair-shaped object, using an alpha map to give it shape and taper. On the head place an opaque, darkened map of the same texture so you don't get any bald spots. Presto! It looks like hair.

    This runs pretty efficiently in real time too, because you're just mapping a few tens of textures to get the effect. The main disadvantage of this approach is that the highlights (sheen) don't move. A hack to make them move is to light the hair texture with a spotlight before alpha-mapping it onto the hair polys.

    You could add a bump map to the hair texture and you would get the moving sheen.
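
    A toy 1-D version of that compositing step, as a sketch only (the
    strand count, width, and alpha value are invented; a real version
    works on 2-D textures built from the lit-log render described
    above):

    #include <cstdlib>
    #include <vector>

    // Alpha-blend many jittered bright-centred strands into one
    // texture row; stacked up, the rows read as a sheaf of hair.
    std::vector<float> hairTexture(int width, int strands) {
        std::vector<float> tex(width, 0.0f);
        for (int s = 0; s < strands; ++s) {
            int centre = std::rand() % width;
            for (int dx = -2; dx <= 2; ++dx) {              // 5-texel strand
                int x = (centre + dx + width) % width;
                float shade = 1.0f - 0.3f * std::abs(dx);   // sheen at the core
                float alpha = 0.25f;
                tex[x] = tex[x] * (1.0f - alpha) + shade * alpha;
            }
        }
        return tex;
    }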
    --
  • Apart from embossed bump mapping (which doesn't look very good, IMHO), there is no other way of doing bump mapping in software without totally bypassing the graphics rendering hardware and plotting each triangle, pixel by pixel, entirely in software.

    Bear in mind most modern graphics chips can multisample more than one multi-textured pixel per clock cycle, whereas even the most basic optimised software renderer can manage much less, before you've even considered bump mapping. And remember your processor cost about the same as your entire graphics card too :) (At least mine did when I bought it!)
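
    Embossing itself is simple enough to sketch on the CPU: subtract a
    copy of the height map shifted one texel toward the light, so edges
    facing the light brighten and edges facing away darken (hardware
    versions did the same with two texture passes and blending). The
    function below is an illustration, not any card's actual pipeline:

    #include <algorithm>
    #include <vector>

    // height is a w*h gray-scale bump map; (lightDx, lightDy) is the
    // light direction quantised to whole texels.
    std::vector<float> emboss(const std::vector<float>& height,
                              int w, int h, int lightDx, int lightDy) {
        std::vector<float> out(w * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int sx = (x + lightDx + w) % w, sy = (y + lightDy + h) % h;
                float v = 0.5f + height[y * w + x] - height[sy * w + sx];
                out[y * w + x] = std::min(1.0f, std::max(0.0f, v));
            }
        return out;
    }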

  • by mav[LAG] ( 31387 ) on Tuesday June 13, 2000 @12:53AM (#1006776)
    ..is that it becomes more and more important to find and use good visible surface determination algorithms correctly in software. When you hear figures of 3 million polygons per second done by the latest GeeWhiz 2 GFX card, remember that has to be divided by 30 frames per second to get acceptable animation quality - 100,000 polygons per frame. SGI's InfiniteReality Engine pumps out 100,000 polys at 30 frames per second. Quite quick - until you realise that a complex model of an aircraft or a city may be comprised of tens of millions of polygons. Rejecting as many of those as possible as quickly as possible (normally because you can't see them from your viewpoint) is a problem which many bright people have been hammering on for 30 years now.

    I know many graphics coders who are depressed because all of their hard-won knowledge of coding polygon fillers, environment map effects and realistic shading engines in software seems completely superseded by advances in hardware. They shouldn't be. There's still tons left to research and better algorithms to be found - even more so now that more powerful graphics cards are becoming cheaper.

    There's zillions of good Web references on the subject - here [www-imagis.imag.fr] is a place to start.
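
    The baseline rejection step is cheap to write down: test each
    object's bounding sphere against the six frustum planes and drop
    anything fully outside. A minimal sketch (the plane and sphere
    layouts are assumptions) - still O(total objects) per frame, which
    is exactly why the hierarchical and occlusion methods discussed
    elsewhere in this thread matter:

    struct Plane  { float nx, ny, nz, d; };   // inside: nx*x + ny*y + nz*z + d >= 0
    struct Sphere { float x, y, z, r; };

    bool inFrustum(const Sphere& s, const Plane f[6]) {
        for (int i = 0; i < 6; ++i)
            if (f[i].nx * s.x + f[i].ny * s.y + f[i].nz * s.z + f[i].d < -s.r)
                return false;                 // wholly behind plane i: cull it
        return true;
    }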

  • If the game looks more like real life (or at least more like the Matrix), it's easier to get involved.

    Up to a point. As far as 3-D FPSs go, basic representation is enough, and the emphasis should always be on the gameplay, one of the reasons why Quake 1 is still far more involving than either of its successors. The blocky little moonwalking men in Quake (or even Doom's sprites) are just targets to hit. If you spend any amount of time admiring beautifully textured skins or the curved arches or the fog, lens flare et al you're dead.

    As the post above yours pointed out, most hardcore gamers turn the detail right down anyway. Graphically and gameplay-wise, the most important thing is framerate...

  • by DrSkwid ( 118965 ) on Tuesday June 13, 2000 @12:58AM (#1006778) Journal
    try reading:

    "Computer Graphics: Principles and Practice"
    Foley, van Dam, Feiner, Hughes

    Mine is the second edition, 1993

    everything from lines to fractal hairs

    inc. anti-aliasing & filtering etc.

    A must-read for anyone more than slightly interested
    .oO0Oo.
  • ... a.k.a. Environment Mapped Bump Mapping are not actually found in the Matrox G400 but in the Pyramid 3D from Tri-Tech and Bitboys, the latter better known for their long-awaited and much-discussed Glaze 3D [glaze3d.com] architecture. Matrox did the first hardware adaptation that ended up in the mainstream.

    Some years back, in the days of Pyramid 3D (yes, boards existed!) the pixel pipeline of the graphics chip was already programmable in microcode and EMBM was working perfectly in hardware. Slowly, maybe, by today's standards, but visually as attractive as ever. While it's a shame the boards never made it into the public, they still managed to make a significant contribution to PC graphics technology.

    Bitboys licensed the EMBM solution to Microsoft to make it a part of the Direct3D standard. Once it was a part of the standard, other vendors such as Matrox were also free to make their implementations of the method.

    It's a hack, but it's a good looking hack. Long live good looking hacks! :)

    Jouni
    --
    Jouni Mannonen : 3D Evangelist @ SurRender3D.com [surrender3d.com]

  • Yeah, I came to that realization in a bit of a haze after watching the movie "Pi".
    Math - it's not just for Bible Scholars anymore! :-)

    The Divine Creatrix in a Mortal Shell that stays Crunchy in Milk
  • by Anonymous Coward
    Sorry, Anonymous Coward here again ^_^ Anyway, it does not say Phong shading, probably because it's not traditional Phong shading in the sense that you can flip a switch in OpenGL and have per-fragment lighting. However, using register combiners it is possible to do the dot product operation which is central to Phong shading. The trick is to treat colors as vectors instead of colors. This is the new big trend in graphics: to use textures and colors as functions or lookup tables instead of just pictures.

    The normal in Phong shading is interpolated across a triangle just like you would do with a color in Gouraud shading. With nVidia's register combiners you can treat the RGB values of a color like the XYZ values of a normal vector, and use the Gouraud shading hardware to interpolate the vectors across the surface. The RGB values are not exactly the normals of the surface, but they have to do with how the light is oriented relative to the triangle. To get the normals of the surface, you use something called a 'normal map' to represent the bumpiness or smoothness of the surface. This is better than Phong shading, because with Phong you can really only have smooth normals, since you only know what the normals are at the vertexes; normal maps give you per-pixel normals.

    So, you take the color/vectors calculated based on where the light is (these are Gouraud-shaded across the surface) and use the register combiners to dot-product them with the normal map, and you get true bump mapping (don't let people tell you this is not bump mapping; it is - they have it confused with displacement mapping, which is something that Pixar's RenderMan does). You _could_ do Phong shading if you used the secondary color to represent the normal, Gouraud-shaded it across, and did the dot product operation. nVidia's hardware is extremely flexible; Phong probably is not mentioned because it is just a side effect of having hardware that goes beyond simple Phong shading. However, it would be nice to have Phong shading without having to set all this up yourself (i.e., just flip one switch in the API instead of 15 ^_^). Check out the MD2Shader demo on nVidia's developer website to see the code and a demo program for what I am talking about (if you have a GeForce, that is).
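
    A CPU paraphrase of that combiner math may help (this is a sketch
    of the dot product step only; the actual register-combiner setup on
    a GeForce is more involved): decode RGB bytes into [-1, 1] vectors,
    dot them, clamp, and use the result to modulate the base colour.

    #include <algorithm>

    struct RGB { unsigned char r, g, b; };

    // [0, 255] channel -> [-1, 1] vector component
    static void decode(RGB c, float v[3]) {
        v[0] = c.r / 127.5f - 1.0f;
        v[1] = c.g / 127.5f - 1.0f;
        v[2] = c.b / 127.5f - 1.0f;
    }

    // normalTexel comes from the normal map; lightColor is the
    // Gouraud-interpolated "colour" that really encodes a light vector.
    float dot3Light(RGB normalTexel, RGB lightColor) {
        float n[3], l[3];
        decode(normalTexel, n);
        decode(lightColor, l);
        float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return std::max(0.0f, d);   // clamp, then multiply into the texture
    }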
  • Actually, Foley and van Dam is not particularly useful here, since it only describes the standard bump-mapping technique, which involves perturbing the normal vector before shading (page 744, second edition; the formula is quoted at the end of this comment).

    This requires per-pixel shading and hence hasn't been used in games so far (the article suggests that the GeForce 2 would support it).

    Environment Mapped Bump Mapping is similar but offers more flexibility at some loss of shading accuracy (the distance from the light source cannot be represented).

    However, the article does look like a bit of a rehash of material provided by Matrox and NVidia, but I guess there is no harm in that.
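
    For reference, the perturbation described there is Blinn's original
    formulation: with P(u, v) the surface, N = P_u x P_v its normal and
    F(u, v) the bump height, the shading normal becomes

        N' = N + \frac{F_u\,(N \times P_v) - F_v\,(N \times P_u)}{|N|}

    Only the normal changes; the geometry is untouched, which is the
    silhouette point made elsewhere in this thread.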
  • Great info, thanks :)

    Still digesting it all. After you dot the bump-map normal with the interpolated normal, what do you do with the final value - look it up in a pre-calculated shade table by treating the result like a color index?

  • by Anonymous Coward
    I extrapolated from the interview by SharkyExtreme (SE) with Bitboys' CEO Shane Long (SL):

    ====================
    SE: What is the difference between the 3DGlaze's environment bump mapping and Matrox's environment mapped bump mapping?

    SL: Matrox has a good implementation of environmental bump mapping. They are closely following the Microsoft guidelines, which Bitboys licensed to Microsoft. Without going into detail on our own implementation we will have equal quality, but at a fairly significant performance increase.
    ====================

    The phrase "licensed to Microsoft" indicates some kind of patent, at least to me.

    BTW, the interview is very interesting for anyone interested in future 3D graphics (albeit vaporware ;) ).
  • by Anonymous Coward
    Sorry,

    here's the URL:

    http://www.sharkyextreme.com/hardware/articles/bitboys_interview/
  • Probably - I got them from a paper on the subject which is ~2 years old. 208M polys per second sounds a little better - that's ~7M polys per frame at 30fps. Does that figure include shading and T&L?

  • I hope they make a patch for DAIKATANA!

    Fh

  • Some years back, in the days of Pyramid 3D (yes, boards existed!) the pixel pipeline of the graphics chip was already programmable in microcode and EMBM was working perfectly in hardware. Slowly, maybe, by today's standards, but visually as attractive as ever. While it's a shame the boards never made it into the public, they still managed to make a significant contribution to PC graphics technology.

    So they actually made some prototypes? I remember looking at the early screenshots of the Tritech Pyramid 3D cards and wondering what the frame rate was with bump mapping, reflections and fogging on.

    On the flip side, I do wonder whether they will ever get a production board out of the door. They are notorious for not producing.

    Cheers,

    Toby Haynes


  • Three years ago Dimensionality was working on a game engine that was to use bump mapping (don't know which method Sean was using; it was in software). While the engine effort collapsed in January of 1998, we still have hundreds of bump-mapped textures gathering dust right now, waiting for a chance to be used in something.


    http://www.dimensionality.com/stuff/GTex0214.JPG is a sampler I made shortly after the collapse. It's located in our gallery section.


    Now watch me get moderated down for being on topic...

  • Wonderboy 3 - The Dragon's Trap :)
  • That was our course book, but it uses Pascal in the 2nd ed; the 3rd ed is better.
  • yep. many of the posts expressing some kind of annoyance over this topic got voted down. even though the syntax might be offensive, the general message is that dedicated sites do a much better job at explaining graphics than slashdot ever will. But obviously someone looooooved the bad explanation of this technique.

    a link to the nvidia site got rated 4 because it was 'informative'. I say """""please""""". of course it is informative, but people interested in EMBM are likely to have known about this technique long BEFORE they made their slashdot accounts. Nvidia is the number one gfx hardware firm in the world; if you care about this stuff, you already visit Nvidia.

    I don't mean to claim I can say which article can or cannot be slashdotted, but the fake-phong version of this technique popped up around 1995 for the first time in a DOS demo called Juice, which you can download here [hornet.org]. So don't tell me this tech is big news, because it isn't. The only difference is the tech is now in hardware. Some BitBoys themselves made demos in 1989-1993; they knew where to get the juice.

    now, let the monkeys rate this rant offtopic, kill my karma, yeah. suckers.
  • We had a course on graphics in college and we worked up to ray tracing, but there was nothing on bump mapping and the like. I always wanted to work on game graphics and had no idea how to do it - and to think I am about to finish my junior year of a Bachelor's in Comp Engg. Any suggestions on anything else that should be taught in a CS course?
  • Save your $$. Buy 3D Studio Max. Get good at it... be a starving artist... get great at it, get hired by Ion. If you have real talent, id will hire you. If you are talented and don't want to make any money, NASA will hire you.

    BTW: 3D Studio Max costs $2,700.00 computer not included. Every Barnes & Noble has an entire section of books dedicated to teaching you 3D Studio Max. There are a hundred times as many help books floating around as there are copies of Max in circulation.

  • I remember seeing Bump Mapping done in software back in 1992. I still have a video tape of the animation (each student group was allowed 450 frames on a write once video laserdisk. It was also quite interesting seeing what people could do to stretch out 450 unique frames).
    I'm actually impressed that it took this long for the concept to go from software to hardware.
    --
  • by Anonymous Coward
    Since the introduction of the Matrox G400 (see the fantastic shots at http://www.matrox.com/mga/feat_story/mar99/slave0_scrshot.htm), Environment Mapped Bump Mapping has been a must.

    The big issue is when OpenGL, and thus Q3A et al., will have it. Today it seems to be an M$ ballpark only, with the help of Bitboys' patent.
  • by ghoul ( 157158 ) on Monday June 12, 2000 @10:34PM (#1006797)
    here [tweak3d.net]
  • Doh! Let this be a lesson to me; use the darned preview button, that's what it's there for!

    I missed the closing italic tag - the last paragraph is mine and not quoted - apologies!!
