
Futuremark Replies to Nvidia's Claims

Nathan writes "Tero Sarkkinen, Executive Vice President of Sales and Marketing at Futuremark, has commented on Nvidia's claim that 3DMark2003 intentionally puts the GeForce FX in a bad light, made after Nvidia declined to become a member of Futuremark's beta program. This issue looks like it will get worse before it gets better." ATI also appears to be guilty of tweaking its drivers to recognize 3DMark.

  • by Anonymous Coward on Tuesday May 27, 2003 @12:31PM (#6048860)
    It won't be fast enough next year.
  • open source (Score:5, Insightful)

    by phre4k ( 627961 ) <esbenp&cs,aau,dk> on Tuesday May 27, 2003 @12:36PM (#6048923)
    People keep begging that nvidia release their drivers under an open license. Well, I guess we now know why they don't. /Esben
  • by Poofat ( 675020 ) on Tuesday May 27, 2003 @12:37PM (#6048933)
    This is why you need many forms of evaluation to properly test something. Just running one program to show you pretty pictures is not going to give any meaningful result. You need to stress test the card in other ways.

    And, since one of the main reasons people will buy this is to play flashy and pretty games, ignoring the performance in those games is ridiculous.
  • by EXTomar ( 78739 ) on Tuesday May 27, 2003 @12:43PM (#6049000)
    The problem isn't that benchmarks lie. We all know they do. The problem is we don't know how they lie. Creating open source benchmark applications would show exactly how the driver is exercised, so anyone who wants to can learn where cards and drivers are strong and weak. Everyone is on the level if everyone can look at the code that came up with the numbers. Not to mention there are things to learn from benchmark code that exercises the fringe features of graphics cards and drivers.

    The alternative is what we have now: hand-waving voodoo. Not only do we have to take the vendor's word that they aren't monkeying around with the driver to match the benchmark, but now we also have to question the allegiance of the benchmark makers.
  • Confused (Score:3, Insightful)

    by Otter ( 3800 ) on Tuesday May 27, 2003 @12:48PM (#6049060) Journal
    ATI came a cropper the same way. Futuremark saw an eight per cent decrease in the score of one benchmark, Game Test 4, when it conducted the test with a renamed executable rather than correctly titled code. ATI's fix, said Futuremark, contributed to an improvement of just under two per cent in the overall 3DMark 03 score.

    I'm confused about what this means. Is the 1.9% difference in ATI performance between Game Test 4 with correct and modified names, or between the current driver and an older version?

    Most people here seem to think it's the latter, and I'd agree that they did nothing wrong if that's the case. But it's not obvious to me that they're not accused of the same thing as NVIDIA.

  • Respect (Score:1, Insightful)

    by Anonymous Coward on Tuesday May 27, 2003 @12:53PM (#6049106)
    I lose respect for companies when I hear stuff like this. They should try to reorganize their best-practice protocols and rework their ethics. Then they should read more Scott Adams.
  • by sjelkjd ( 541324 ) on Tuesday May 27, 2003 @12:55PM (#6049122)
    3DMark isn't a standard. It's a business that makes money by charging hardware developers LOTS of money to be included in its "beta" program. In real life(TM), manufacturer-specific optimizations matter. Many games will look better and run faster if they use vendor-specific OpenGL extensions, for instance. A gamer looking to buy the fastest card for his favorite game should look for benchmarks of that game. Futuremark is trying to make a business out of predicting the behavior of games that aren't out yet. Well, either the game you want to play is out or it isn't. If it's out, buy your card based on benchmarks for it. If it's not, wait until it's out before you spend your money. There is no guarantee that 3DMark is a good predictor of DirectX 9 performance.
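
    To illustrate what "vendor-specific extensions" means in practice, here is a minimal sketch; the extensions named are just era-typical examples, and the plain strstr test is the quick-and-dirty check (it can false-positive on extension names that are prefixes of longer ones):

        #include <GL/gl.h>
        #include <cstdio>
        #include <cstring>

        // Requires a current OpenGL context. Returns true if the driver
        // advertises the named extension in its extension string.
        bool has_extension(const char* name) {
            const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            return exts && std::strstr(exts, name) != NULL;
        }

        void choose_render_path() {
            if (has_extension("GL_NV_vertex_array_range"))
                std::printf("taking the NVIDIA-specific fast path\n");
            else if (has_extension("GL_ATI_vertex_array_object"))
                std::printf("taking the ATI-specific fast path\n");
            else
                std::printf("taking the generic OpenGL path\n");
        }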
  • by Genjurosan ( 601032 ) on Tuesday May 27, 2003 @12:55PM (#6049127)
    When Quake III runs at 300 FPS on my system under my 9700 Pro with 4x AA, I couldn't care less about 3DMark and what ATI or Nvidia tweak. If the games run smoothly and they look good, then go with it. Truth is, the ATI looks better than the Nvidia card under QIII, WCII, JKII, and pretty much everything else I've been playing.

    The issue with low FPS is a game problem 9 out of 10 times. The faster the video card, the less the game development houses work to streamline and improve their framerate.
  • by UberLord ( 631313 ) on Tuesday May 27, 2003 @12:55PM (#6049129) Homepage
    Open Source benchmarks are a bad idea because Closed Source drivers can still be used which may or may not contain these "cheats".

    Better to have open source drivers so we can inspect the driver for cheating/optimisations.

    In fact, open source them all if the hard numbers are that important!
  • by homer_ca ( 144738 ) on Tuesday May 27, 2003 @12:55PM (#6049133)
    In one sense ATI's tweak is not as bad because they're still rendering the scene with full image quality, where NVIDIA is rendering with reduced quality. However, it's still deceptive because it's optimizing for the special case of a benchmark, and real games (or a renamed 3dmark executable) will run slower.
  • Cheating??? (Score:3, Insightful)

    by JDevers ( 83155 ) on Tuesday May 27, 2003 @12:56PM (#6049139)
    In what way is ATI cheating, really? If you think about it, virtually every modern processor does some minor instruction rescheduling, right? Basically, ATI is doing this in the driver and not on-chip; that's the only difference. I'm sure in the next few generations of GPUs we'll see the introduction of hardware features like this, once the pixel/vertex shaders get ironed out pretty well and a lot of people use them. Right now very few games really make use of them, and most of the time the cards are emulating hardcoded T&L, which is again part of the driver.

    Nvidia is cheating and acting like a child, er, large corporation... but that isn't at all what ATI is doing.
  • by stratjakt ( 596332 ) on Tuesday May 27, 2003 @12:57PM (#6049148) Journal
    All this does is make 3DMark look worthless as a benchmarking app. All it has now is some value as a pretty looping demo or stress-testing application. I run it to make sure the card works and the drivers are installed properly (as in, it runs all tests) and that's it. The little number it spits out at the end is worthless.

    I don't even bother with 3DMark scores when I read reviews; I skip straight to the tested games and get a look at the FPS at various levels of detail.

    Then it's easy to realize that card A gives 201 FPS, card B gives 199 FPS, and the answer is: buy whichever is cheaper.

    This gives me much more useful information that relates to what I want the card for - playing games.
  • by 222 ( 551054 ) <stormseeker@nOsPAm.gmail.com> on Tuesday May 27, 2003 @01:00PM (#6049176) Homepage
    Hey, I want to know the truth just as much as the next guy, but seriously... doesn't it seem odd that when Nvidia opts out of this hundred-thousand-dollar beta program, this happens?
    I've read 5-6 reviews of the FX 5900 and everyone seems to think it's great, and rightly gives Nvidia the 3D crown. (Especially concerning Doom ]|[ :)
    If you read the interview, it's even brought up that the 5900 seems to do just fine in all other benchmarks; only Futuremark seems to give it a hard time, and I'm not buying that crap about Doom 3 benchmarks not being readily available.
    If I remember, Tom's had a good review of that....
  • Re:Cheating??? (Score:3, Insightful)

    by Anonymous Coward on Tuesday May 27, 2003 @01:01PM (#6049188)
    There is a difference between out-of-order execution as found on modern CPUs and what ATI is doing. The rearranging of CPU instructions is done on-chip, and is done no matter what application is being executed. What ATI did was hard-code a rearrangement of instructions into their driver. Something like if (app == 3dmark) swap(instruction1, instruction2), swap(instruction3, instruction4)... If the app being run isn't 3DMark03 then no performance gain will be had. Now if ATI came up with a general algorithm to rearrange instructions FOR EVERY APPLICATION and implemented it either in the driver or in hardware, that would not be cheating.
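
    A minimal sketch of the distinction being drawn here, with entirely hypothetical names and structure (no real driver is anywhere near this simple):

        #include <cstring>
        #include <utility>
        #include <vector>

        struct Instr { int op; int args[3]; };

        // Hypothetical generic scheduler: reorders any shader based only on the
        // dependencies in the instructions themselves, so every application
        // benefits. This is the legitimate optimization.
        void schedule_generic(std::vector<Instr>& shader) {
            // ... real dependency analysis would go here ...
        }

        // Hypothetical driver entry point: the hard-coded swap fires only when
        // the executable name matches the benchmark, so no actual game ever
        // sees the speedup. This is the cheat.
        void schedule(std::vector<Instr>& shader, const char* exe_name) {
            if (std::strcmp(exe_name, "3DMark03.exe") == 0 && shader.size() > 4) {
                std::swap(shader[1], shader[2]);   // hand-tuned for one application
                std::swap(shader[3], shader[4]);
                return;
            }
            schedule_generic(shader);              // everyone else gets the general pass
        }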
  • Coincidence? (Score:4, Insightful)

    by Throatwarbler Mangro ( 584565 ) <delisle42 AT yahoo DOT com> on Tuesday May 27, 2003 @01:04PM (#6049203) Homepage
    What fortune should I happen to get at the bottom of the comments page?

    What upsets me is not that you lied to me, but that from now on I can no longer believe you. -- Nietzsche

    Classic.

  • by onyxruby ( 118189 ) <onyxrubyNO@SPAMcomcast.net> on Tuesday May 27, 2003 @01:05PM (#6049214)
    The less cooperation between the testing companies and the tested companies the better. The last thing this industry needs is to become like so many other industries where the test standards lose all merit because the testers and the tested are in bed together. Test results only have merit if they are produced completely independently of the manufacturer during the entire test process.


    Think of it this way: when's the last time you saw PC World roast a product that truly deserved it? How many review sites gave WinMe a thumbs up when it's widely viewed in the industry as MS's worst OS to date? We (the public) simply aren't being served if the test companies are cooperating with the companies they're testing. Look, if a testing company, review site or whatever other lab doesn't occasionally come out and just say "this sucks," then you know they aren't credible. There's too much out there that sucks, and too few reviewers willing to let the public know before they waste their money.


    It's the same reasoning that dictates why Consumer Reports buys its cars anonymously from dealers through third parties instead of getting "special" delivery directly from the manufacturer. What the behavior we're observing should really give us is an impetus to develop an open source benchmark application. That would ensure the results can't be bought, as has become common practice in so many other industries.

  • by Anonymous Coward on Tuesday May 27, 2003 @01:08PM (#6049244)
    You forgot to mention that the benchmarks run weren't memory intensive, so no swapping occurred on either machine; the Opteron might as well have had 512 MB, it made no difference.
  • Re:nVidia vs. ATI (Score:5, Insightful)

    by stratjakt ( 596332 ) on Tuesday May 27, 2003 @01:13PM (#6049284) Journal
    If it were a generic optimization (and it probably should have been), there'd be no issue.

    ATI recognized the 3DMark executable and special-cased it, which is misleading and wrong. The performance is enhanced for 3DMark and 3DMark alone.

  • by tomstdenis ( 446163 ) <tomstdenis AT gmail DOT com> on Tuesday May 27, 2003 @01:26PM (#6049390) Homepage
    I never put much stock in benchmarks. One test says one thing, another says something else.

    All this proves is that a benchmark is a highly isolated incident of observable performance.

    For example, most charts I see rate the P4 as "faster than the Athlon" at the same clock rate. Yet when I benchmarked my bignum math code I found that the Athlon completely kicked the P4's ass.

    [K7]
    http://iahu.ca:8080/ltm_log/k7/index.html

    [P4]
    http://iahu.ca:8080/ltm_log/p4/index.html

    Does this single test prove the Athlon is faster than the P4? Hell no. It proves that using portable ISO C source code to do bignum math is faster on an Athlon. If I used SSE2 the P4 would probably smoke the Athlon, etc...

    Can we stop putting stock into this BS?

    For the record, I have a Ti200. It's decently fast [50fps at 1280x1024 in UT2] and there are no visible artifacts or other "cheats". It works nicely. So if nVIDIA cheated to make their 3DMark score better, all the power to them. Screwing around with meaningless benchmarks is a good way to discredit them.

    Tom
  • Re:driver tweaking (Score:5, Insightful)

    by erikdotla ( 609033 ) on Tuesday May 27, 2003 @01:32PM (#6049459)
    I know my problems are in actuality MS/XP refresh rate problems. But it's still inexcusable that this exists at all, and I can't blame MS completely. It's too implausible, and reeks of a lack of cooperation between video card manufacturers, MS, and game developers.

    Video cards have a simple job when it comes to resolution and refresh rate: When using resolution X, use the best refresh rate Y, and if I have to tell it what that is, so be it. They can't do this.

    A lot of this is due to games and their poor detection of capabilities, and lack of effort to try for the best refresh rate. However, it's hard to pin it all on them. Games generally don't have trouble detecting whether you have EAX capability, how many axes your joystick has, or whether you have a third button on your mouse. Sure, I've seen problems in these areas too, but the video card situation feels like someone just invented VGA yesterday and the video card manufacturers are struggling to make it all work.

    I'm using the Cat 3.4 drivers. I can set the refresh separately too, but a few times both the ATI driver tabs and the XP display properties reported that I was in one refresh rate, but my monitor OSD said differently. Inexcusable. It was due to that "maximum capability" setting, and as a result it didn't mind lying to me as long as it avoided going over the maximum. Glad it was able to "protect" me.

    But that's not the half of it. I set the refresh rate, and when entering a game, it changes, usually back to 60hz. When entering games, the resolution changes a lot, and it seems completely random what it ends up on.

    Other problems I'm having include that mode switching in general takes three times as long as the NV card. Switching back to Windows from games results in a very long black screen, and until Cat 3.4 came along, I couldn't switch out of CS/HL at all without crashing the entire OS.

    Let me give you an example of my typical day. I set the display capability to 1280x1024 @ 85hz because I want 85hz in CS. In CS, I accidentally had the mode set for 1600x1200. With ATI3.3, it would crash. With ATI3.4, it would actually draw a 1600x1200 screen in a 1280x1024 window. Yep, it was cut off, with part of the screen literally extending off the monitor into the void.

    I change the capability to 1600x1200 @ 75hz and play a while. I quit and fire up BF1942, which due to CPU constraints, runs better at 1024x768. But, I'm at 75hz, because I have no way to tell the card that while it only supports 1600x1200 @ 75hz, it does 85hz in every other mode. I have to change the capability to 1280x1024 @ 85hz. BF1942 runs.

    I run another game at 1600x1200. Unlike CS, where it drew off the screen, this one would simply black-screen as a result of trying to go into 1600x1200 @ 85hz (since 85hz is my "maximum" refresh rate). I reboot, and the first few times it happened, I looked for game patches before realizing that this stupid ATI driver was the cause.

    The constant mode switches between games take several seconds, and perform an odd "screen wiping" effect that reeks of cheesy hardware. The NV switches modes smooth as butter.

    I'm scared as hell to hook this thing into my TV. It might try to pump 2048x1024 @ 100hz at it and cause an explosion.
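
    For what it's worth, the "best refresh rate Y for resolution X" logic being asked for is only a short loop over the driver's mode list on Windows; a rough sketch (error handling trimmed, and whether any given game bothers to do something like this is exactly the complaint):

        #include <windows.h>
        #include <cstdio>

        // Find the highest refresh rate the driver reports for a given
        // resolution and colour depth, then switch to that mode.
        bool set_best_refresh(DWORD width, DWORD height, DWORD bpp) {
            DEVMODEA best = {};
            DEVMODEA mode = {};
            mode.dmSize = sizeof(mode);
            for (DWORD i = 0; EnumDisplaySettingsA(NULL, i, &mode); ++i) {
                if (mode.dmPelsWidth == width && mode.dmPelsHeight == height &&
                    mode.dmBitsPerPel == bpp &&
                    mode.dmDisplayFrequency > best.dmDisplayFrequency)
                    best = mode;
            }
            if (best.dmDisplayFrequency == 0) return false;   // mode not offered
            std::printf("best mode: %lux%lu @ %lu Hz\n",
                        best.dmPelsWidth, best.dmPelsHeight, best.dmDisplayFrequency);
            return ChangeDisplaySettingsA(&best, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
        }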
  • Re:nVidia vs. ATI (Score:2, Insightful)

    by Anonymous Coward on Tuesday May 27, 2003 @01:46PM (#6049608)
    Neither is problematic unless it removes generality from the code piece in question or reduces quality. There's a fine line: if the code substitution produces the exact same output for all possible inputs, not just the inputs which occur in the benchmark, then it's an optimization (which could be applied in the same way to other programs). If the code substitution only works when certain parameters are constrained by the benchmark, then it's a cheat. AFAIK NVidia's modifications do not remove generality but do reduce precision, which is still not completely illegitimate, because ATI renders with 24-bit precision compared to NVidia's 32-bit precision without the modification or 16-bit with it.
    The clipping hack however is an obvious no-no and NVidia should simply admit it and shut the f*ck up.
  • by 0123456 ( 636235 ) on Tuesday May 27, 2003 @01:53PM (#6049697)
    "Three of four 3DMark03 demos don't use new DirectX9 shaders at all"

    No, but they use shaders which are generally only supported on DX9 cards and a few older ATI cards. Just because you have a PS2.0 card doesn't mean you have to use PS2.0 if PS1.4 can do the same thing: why deliberately make more work for yourself by not supporting older cards?

    "Three of four 3DMark03 demos use Pixel Shader 1.4 which was introduced with DirectX8.1 and isn't natively supported by NVIDIA cards"

    Support for PS1.4 is a requirement of DX9, so if the GF FX is a DX9 card then it supports PS1.4, and your claim is therefore bogus. If it doesn't support PS1.4, then it's not a real DX9 card.
  • Re:nVidia vs. ATI (Score:4, Insightful)

    by tha_mink ( 518151 ) on Tuesday May 27, 2003 @02:00PM (#6049760)
    But the key point is...it's only a fucking benchmark. Who cares anyways. Just another reason to never trust benchmark programs. I don't care how well a card performs on a benchmark since I don't PLAY benchmarks.
  • by crivens ( 112213 ) on Tuesday May 27, 2003 @02:16PM (#6049907)
    Who really cares about benchmarks? Personally I don't (that was a rhetorical question). So what if one card can do 10 more mega-flip-flops per microsecond than another one. I don't really care about getting an extra 0.00001fps from Doom3.

    I don't believe claims anyway; ATI says their card is faster than NVidia's. NVidia says theirs is faster than ATI's. Bleh....
  • by default luser ( 529332 ) on Tuesday May 27, 2003 @02:17PM (#6049914) Journal
    ATI, suddenly finding themselves in a corner, made a very smart decision under pressure.

    Point is, they can come out of this wearing the white hat, because they were the first to be such good guys about the issue.

    The fact is, even with all Nvidia optimizations in place, their high-end card will just barely edge out a 9800 Pro without optimizations. Add to this the fact that ATI, 3DMark and the community will hound Nvidia and discount its optimizations until they are removed, and you've got an all-out win for ATI.

    Remember folks: everyone cheats. Fools take things too far and get caught. ATI has played the fool before, Nvidia plays it now; that is the game.
  • Re:What a mess! (Score:1, Insightful)

    by Anonymous Coward on Tuesday May 27, 2003 @02:21PM (#6049952)
    The pixel shader isn't really at fault here; it's the high-precision color calculations. FX chips do 16- or 32-bit color calculations while ATI does 16- or 24-bit. Therefore the ATI card gets a big leg up, because its "true color" calculations are done at a lower bit depth, making them go a lot faster.
  • by xaoslaad ( 590527 ) on Tuesday May 27, 2003 @02:25PM (#6049994)
    We all know ATI did it in the past. We all know Nvidia is doing it now. And even better we know that ATI is up to its old tricks again as well.

    And how have people figured this out time and time again? Oh, they renamed the executable...

    Why does the benchmarking software not rename the executable to some-254-character-long-file-name-random-string.exe? Use some kind of encryption to prevent the driver software from snooping on the rename process and, oooh, no more cheating....

    I'm sure that there is some way Nvidia and ATI could get around even this, but what are they gonna do, make a 75MB driver in retaliation to whatever the benchmark companies do?
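
    A rough sketch of the renaming idea, assuming nothing fancier than copying the benchmark to a randomly generated name before launching it (the 3DMark03.exe filename here is just illustrative, and a driver could of course still fingerprint the program by its shaders rather than its name):

        #include <windows.h>
        #include <cstdio>
        #include <cstdlib>
        #include <ctime>

        // Copy the benchmark to a randomly named executable and run the copy,
        // so a driver keying on the well-known filename sees nothing familiar.
        int main() {
            std::srand(static_cast<unsigned>(std::time(NULL)));
            char random_name[MAX_PATH];
            std::snprintf(random_name, sizeof(random_name), "bench_%08x%08x.exe",
                          static_cast<unsigned>(std::rand()),
                          static_cast<unsigned>(std::rand()));

            if (!CopyFileA("3DMark03.exe", random_name, FALSE)) {   // hypothetical path
                std::fprintf(stderr, "copy failed\n");
                return 1;
            }

            STARTUPINFOA si;
            PROCESS_INFORMATION pi;
            ZeroMemory(&si, sizeof(si));
            si.cb = sizeof(si);
            if (!CreateProcessA(random_name, NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
                std::fprintf(stderr, "launch failed\n");
                return 1;
            }
            WaitForSingleObject(pi.hProcess, INFINITE);   // let the benchmark finish
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            DeleteFileA(random_name);                     // clean up the disguised copy
            return 0;
        }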
  • by dnoyeb ( 547705 ) on Tuesday May 27, 2003 @03:23PM (#6050466) Homepage Journal
    That's odd. ATI has always been known for the best TV-out in the industry. I would say their "performance" IS the direct reason why they have won out in the OEM market for so long. Remember, OEMs barely care about 3D performance. And I am talking about laptops with built-in graphics here, and desktops for big corporations. Any 3D work historically went to SGI or Sun anyway.
  • by GarfBond ( 565331 ) on Tuesday May 27, 2003 @03:32PM (#6050588)
    PS1.4 isn't natively supported by *most* nvidia cards. The spec for PS2.0 is such that it's all-encompassing. If you support PS2.0 you support PS 1.4 and PS 1.1. If you support PS1.4 you support PS 1.1, etc.

    So this is how it should look, properly:
    - Game 1: no shaders at all, only static T&L (DX7-class effects, given comparatively little weighting in overall score)
    - Game 2: vertex shader 1.1 and pixel shader 1.4 (natively supported by GFFX, ATI Radeon 8500 and above)
    - Game 3: vertex shader 1.1 and pixel shader 1.4 (natively supported by GFFX, ATI Radeon 8500 and above)
    - Game 4: vertex shader 2.0 and pixel shader 1.4+2.0 (DX9 cards only, Radeon 9x00 and GFFX)

    Nvidia's lack of support for PS1.4 is their own design choice, and now they have to live with it. The GF4 was released after DX8.1 came out, which contained the PS1.4 spec, but they chose not to support it. The ATI Radeon 8500 and above have no problem with this because they supported DX8.1 from the get-go, but nvidia did not change and continued their 8.0 support. As was previously mentioned in the article, nvidia was participating in the developer beta until Dec 2002, well into the development period for 3DM03 and a month after they paper-launched the GFFX, so they knew what was going on with the benchmark for a long time beforehand and didn't change their stance for a while. Presumably, as a beta member up until Dec 2002, if they didn't like the extensive use of PS1.4 they could've said something earlier.

    The key thing to remember about 3DM03 is its goal as a forward-looking benchmark. Both DX8 and DX9 games are currently in development, and many DX7 games are still in existence (remember, HL2 doesn't require anything above a DX6 card), so in this respect 3DM03 is still fair in its test design.
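
    For reference, the way an application (or a benchmark) typically decides which of those shader paths it can use is a simple caps query; a minimal Direct3D 9 sketch of the tiered fallback, since each shader model implies support for the ones below it:

        #include <d3d9.h>
        #include <cstdio>

        int main() {
            IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
            if (!d3d) return 1;

            D3DCAPS9 caps;
            if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps))) {
                d3d->Release();
                return 1;
            }

            // PixelShaderVersion encodes major/minor; a card reporting 2.0
            // must also handle the 1.x paths.
            if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
                std::printf("PS2.0 path available (1.4 and 1.1 implied)\n");
            else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
                std::printf("PS1.4 path available (1.1 implied)\n");
            else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
                std::printf("PS1.1 path only\n");
            else
                std::printf("fixed-function only\n");

            d3d->Release();
            return 0;
        }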
  • Driver strategies (Score:5, Insightful)

    by John Carmack ( 101025 ) on Tuesday May 27, 2003 @04:31PM (#6051216)
    Rewriting shaders behind an application's back in a way that changes the output under non-controlled circumstances is absolutely, positively wrong and indefensible.

    Rewriting a shader so that it does exactly the same thing, but in a more efficient way, is generally acceptable compiler optimization, but there is a range of defensibility from completely generic instruction scheduling that helps almost everyone, to exact shader comparisons that only help one specific application. Full shader comparisons are morally grungy, but not deeply evil.

    The significant issue that clouds current ATI / Nvidia comparisons is fragment shader precision. Nvidia can work at 12 bit integer, 16 bit float, and 32 bit float. ATI works only at 24 bit float. There isn't actually a mode where they can be exactly compared. DX9 and ARB_fragment_program assume 32 bit float operation, and ATI just converts everything to 24 bit. For just about any given set of operations, the Nvidia card operating at 16 bit float will be faster than the ATI, while the Nvidia operating at 32 bit float will be slower. When DOOM runs the NV30 specific fragment shader, it is faster than the ATI, while if they both run the ARB2 shader, the ATI is faster.

    When the output goes to a normal 32 bit framebuffer, as all current tests do, it is possible for Nvidia to analyze data flow from textures, constants, and attributes, and change many 32 bit operations to 16 or even 12 bit operations with absolutely no loss of quality or functionality. This is completely acceptable, and will benefit all applications, but will almost certainly induce hard to find bugs in the shader compiler. You can really go overboard with this -- if you wanted every last possible precision savings, you would need to examine texture dimensions and track vertex buffer data ranges for each shader binding. That would be a really poor architectural decision, but benchmark pressure pushes vendors to such lengths if they avoid outright cheating. If really aggressive compiler optimizations are implemented, I hope they include a hint or pragma for "debug mode" that skips all the optimizations.

    John Carmack
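
    To make the framebuffer point concrete, here is a toy sketch, not anything a driver actually does: run the same arithmetic at full float precision and at a crudely truncated half-like precision, then quantize both to the 8 bits per channel a 32-bit framebuffer stores; for most inputs in the 0..1 range the two produce the same byte, which is the kind of headroom a data-flow analysis tries to prove it can exploit safely.

        #include <cmath>
        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Crude fp16-style emulation: round a float to 10 explicit mantissa bits.
        // Ignores the narrower exponent range of a real half float.
        static float to_half_precision(float x) {
            uint32_t bits;
            std::memcpy(&bits, &x, sizeof(bits));
            bits = (bits + 0x1000u) & 0xFFFFE000u;   // round away the low 13 mantissa bits
            std::memcpy(&x, &bits, sizeof(bits));
            return x;
        }

        static uint8_t to_framebuffer(float c) {     // quantize to 8 bits per channel
            if (c < 0.0f) c = 0.0f;
            if (c > 1.0f) c = 1.0f;
            return static_cast<uint8_t>(c * 255.0f + 0.5f);
        }

        int main() {
            int mismatches = 0;
            const int samples = 100000;
            for (int i = 0; i < samples; ++i) {
                float a = (i % 997) / 997.0f, b = (i % 631) / 631.0f;
                float full = a * b + 0.25f * std::sin(a * 3.14159f);   // stand-in "shader math"
                float half = to_half_precision(a) * to_half_precision(b)
                           + 0.25f * to_half_precision(std::sin(a * 3.14159f));
                if (to_framebuffer(full) != to_framebuffer(half)) ++mismatches;
            }
            std::printf("differing 8-bit outputs: %d of %d\n", mismatches, samples);
            return 0;
        }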
  • by Anonymous Coward on Tuesday May 27, 2003 @05:42PM (#6051765)
    Yes, it does work, really. It probably didn't work with the atiwrapper, if you used that, but on FX cards at least I can testify that it works. I didn't try this because there was no need, but you can also remove the leaf files from the library as well as the wings. There is also a patch available, but that's all it does, as far as I know.
  • by Anonymous Coward on Tuesday May 27, 2003 @07:53PM (#6052793)
    Thanks for the comment. I really appreciate it when I see people from a particular field, who know what they're talking about, commenting on a story. It helps elaborate what's really going on behind the story, and I often end up learning just as much from the comments as I do from the story.

    It sort of irks me when I see people replying to your post with silly or snide remarks. Don't let them keep you from posting. For every one of them, there are a hundred more Slashdot readers who appreciate reading your comments.

    Sincerely,
    AC
  • by CoreyGH ( 246060 ) on Tuesday May 27, 2003 @10:06PM (#6053529) Homepage
    ARB may require 24 bits as a minimum, but that has nothing to do with assumed shader precision. What he's getting at is that Nvidia can't do 24-bit, only better or worse than 24-bit.
  • by gotan ( 60103 ) on Wednesday May 28, 2003 @01:28AM (#6054513) Homepage
    I found it interesting that ATI claimed they simply "optimized" the code by shuffling around a few instructions while still getting the same results. That may even be so, but obviously they made this optimisation only for the very specific case of one of 3DMark's benchmarks, else it would have worked just as well with the slightly altered benchmark. Will their engineers look at each and every game out there and optimize that code too, or will they come over to my house and write a new driver on the fly when I need one? No? Well then, I don't benefit from their optimisation in any real-world scenario; it only serves to boost their score a little. In the real world their graphics card will have to deal with code that isn't hand-optimized; if they want to improve the situation they should give game developers a few guidelines on how to write a fast engine.

    This isn't about Nvidia vs. ATI or about defending Nvidia; what NV did with the clipping planes was even worse. It's just that there is no justification for cheating on the benchmarks, even if the graphical results are the same. The benchmarks should be an indicator of how the card will perform in real-world scenarios (i.e. games), and any driver tweaks that are benchmark-specific but don't help performance otherwise are just cheating and make-believe.
  • by BCGlorfindel ( 256775 ) <klassenkNO@SPAMbrandonu.ca> on Wednesday May 28, 2003 @12:34PM (#6058133) Journal
    JC is off on this. The specification document for "ARB_fragment_program" does not define a specific precision

    Not to nitpick, but the ARB specs do specify a minimum precision for ARB_fragment_program. From the latest GL documentation:
    RESOLVED: We've decided not to include precision queries.
    Implementations are expected to meet or exceed the precision guidelines set forth in the core GL spec, section 2.1.1, p. 6, as amended by this extension.
    To summarize section 2.1.1, the maximum representable magnitude of colors must be at least 2^10, while the maximum representable magnitude of other floating-point values must be at least 2^32. The individual results of floating-point operations must be accurate to about 1 part in 10^5.


    I'll leave the tallying of what fp precision that comes out to for those more anal than myself. Or maybe just those looking for a chance to correct JC on a rather obscure technical detail ;).
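
    Taking the bait on the tally, and assuming the commonly cited register layouts rather than anything in the spec text above (fp16 with 10 explicit mantissa bits, ATI's fp24 with 16, fp32 with 23), "about 1 part in 10^5" needs a little under 17 bits of relative precision, which 16-bit float misses and 24-bit float just clears:

        #include <cmath>
        #include <cstdio>

        int main() {
            // Bits of relative precision required for "1 part in 10^5"
            std::printf("required   : %.1f bits\n", std::log2(1e5));         // ~16.6

            // Effective precision = explicit mantissa bits + the implicit leading 1
            std::printf("fp16 (10+1): 1 part in %.0f\n", std::pow(2.0, 11)); // 2048     -> falls short
            std::printf("fp24 (16+1): 1 part in %.0f\n", std::pow(2.0, 17)); // 131072   -> just clears it
            std::printf("fp32 (23+1): 1 part in %.0f\n", std::pow(2.0, 24)); // 16777216 -> clears easily
            return 0;
        }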

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...