Scaling Algorithm Bug In Gimp, Photoshop, Others

Wescotte writes "There is an important error in most photography scaling algorithms. All software tested has the problem: The Gimp, Adobe Photoshop, CinePaint, Nip2, ImageMagick, GQview, Eye of Gnome, Paint, and Krita. The problem exists across three different operating systems: Linux, Mac OS X, and Windows. (Exceptions have subsequently been reported; the following software does not suffer from the problem: the Netpbm toolkit for graphic manipulations, the developing GEGL toolkit, 32-bit encoded images in Photoshop CS3, the latest version of Image Analyzer, the image exporters in Aperture 1.5.6, the latest version of Rendera, Adobe Lightroom 1.4.1, Pixelmator for Mac OS X, Paint Shop Pro X2, and the Preview app in Mac OS X starting from version 10.6.) Photographs scaled with the affected software are degraded because the algorithms fail to account for monitor gamma. The degradation is often faint, but most pictures probably contain at least an area where it is clearly visible. I believe this has been happening since the first versions of these programs, maybe 20 years ago."
  • Re:Monitor gamma? (Score:3, Interesting)

    by Trepidity ( 597 ) <[gro.hsikcah] [ta] [todhsals-muiriled]> on Tuesday February 23, 2010 @10:25PM (#31254474)

    In some cases at least, it seems like the primary purpose of scaling is to display the images immediately, in which case the gamma should be accounted for. For example, when browsers rescale images, it's an unexpected result if they change the perceived brightness while doing so. Shouldn't the browser's scaling be done in the same brightness space it intends to use to display the images?

  • Re:Monitor gamma? (Score:4, Interesting)

    by poetmatt ( 793785 ) on Tuesday February 23, 2010 @10:33PM (#31254560) Journal

    The responses that they post also seem inaccurate.

    Meanwhile, I see a grey rectangle in Firefox, and I still don't get what that signifies.

  • by drewm1980 ( 902779 ) on Tuesday February 23, 2010 @10:57PM (#31254764)

    Gamma is often poorly understood even by people doing scientific and engineering work using images.

    Does your algorithm depend (explicitly or implicitly) on the light intensity -> pixel data mapping?

    If NO: You're probably wrong. Go read about gamma. Just because the picture looks right to you doesn't mean it looks right to your code.

    If YES:

    Do you have the luxury of taking the pictures yourself?

    If NO: You're stuffed. Pretty much all images on the internet and most public research databases have unknown/unreliable gamma curves (see the sketch after this list).

    If YES:

    1. Spend a lot of time calibrating your camera yourself. This is only cheap if your time is worthless.

    or

    2. Buy a machine vision camera. A $600 machine vision camera will have the specs of a $50 consumer camera, but at least you will know the gamma curve.

    or

    3. Ignore the gamma issue, cross your fingers, hope it's not hurting your performance, and publish your results knowing that you're in good company and nobody will call you out.
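    Apropos the "unknown/unreliable gamma" branch above: a small sketch (my own, assuming Pillow and a hypothetical photo.png) of checking whether a file even declares its transfer curve:

        from PIL import Image

        img = Image.open("photo.png")                      # hypothetical input file
        print(img.info.get("gamma"))                       # PNG gAMA chunk, if any
        print(img.info.get("icc_profile") is not None)     # embedded ICC profile?

    In practice both are very often absent, which is the "unknown gamma" problem in a nutshell.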

  • by Animaether ( 411575 ) on Tuesday February 23, 2010 @11:13PM (#31254906) Journal

    If you're a *serious* amateur photographer, then you should already know about this and either not be using those apps or be using them in the appropriate color modes (to use Photoshop parlance, since I guess most serious amateur photographers will have a copy of that, legit or otherwise).

    I guess the argument would hinge on who is a serious amateur photographer and who is just a regular amateur photographer.

    As for the actual examples: sure, you can see the difference, especially since they're presented as before/after-style swappable pages. If I presented you a random image off a random image gallery online, though, would you be able to tell the difference?

    If I showed you an online photo album, pointed at an image's thumbnail, and had you click the thumbnail and open the full-size image, would you notice that it was scaled to thumbnail incorrectly?

    "Nobody really cares" may have been too broad a statement - but those who really care, already know.. or reasonably should know.
    imho.

    Note that I'm not excusing the software for not handling this better - certainly not Photoshop - but it's 1. not a new revelation and 2. certainly not a "scaling algorithm bug".

  • Old news (Score:5, Interesting)

    by spitzak ( 4019 ) on Tuesday February 23, 2010 @11:33PM (#31255056) Homepage

    My software has been calculating in linear space for over a decade now (this is the Nuke compositor, currently produced by The Foundry, but at the time it was used by Digital Domain for Titanic). You can see some pages I wrote on the effect here: http://mysite.verizon.net/~spitzak/conversion/composite.html [verizon.net]. See here for the overall paper: http://mysite.verizon.net/~spitzak/conversion/index.html [verizon.net] and a Siggraph paper on the conversion of such images here: http://mysite.verizon.net/~spitzak/conversion/sketches_0265.pdf [verizon.net]. In fact, a lot more work went into figuring out how to get such linear images to show on the screen with the hardware of that era than into the obvious need to do the math in linear. Initial work on this was done for Apollo 13, as the problems with gamma were quite obvious when scaling images of small bright objects against the black of space.

    For typical photographs the effect is not very visible in scaling, as the gamma curve is very close to a straight line between two nearby points, so the result is not very different. Only widely separated points (i.e. very high-contrast images with sharp edges) will show a visible difference; in practice that usually means line art, and there are screenshots in the html pages showing the results of this. Far worse errors can be found in lighting calculations and in filtering operations such as blur. At the time even the most expensive professional 3D renderers were doing lighting completely wrong, but things have gotten better now that they can use floating point intermediate images.

    One big annoyance is that you had better do the math in floating point. Even 16 bits is insufficient for linear light levels, as the black points will be too far apart and visibly so (the space is wasted on many, many more white levels than you would ever need). A logarithmic system is needed, and on modern hardware you might as well use IEEE floating point, or the ILM "half" standard for 16-bit floating point.
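    A rough numeric illustration of the precision point (my own sketch, not spitzak's code; it assumes NumPy and approximates the encoding as a pure 2.2 gamma):

        import numpy as np

        codes = np.arange(256) / 255.0                    # 8-bit gamma-encoded levels
        linear16 = np.round((codes ** 2.2) * 65535).astype(np.uint16)

        print(linear16[:4])    # [0 0 2 4]: the two darkest levels collapse together
        print(linear16[128])   # ~14388: the dark half of the tones gets <1/4 of the codes
        print((codes ** 2.2).astype(np.float16)[:4])   # float16 "half" keeps them distinct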

  • Re:Nitpicking (Score:3, Interesting)

    by im_thatoneguy ( 819432 ) on Wednesday February 24, 2010 @01:04AM (#31255684)

    It's not at all complicated. Many applications do properly handle it. Nuke, Shake and other compositing apps have no problem.

    Pixel^(GAMMA) -> Scale -> Pixel^(1/GAMMA). I wouldn't call that a terribly complicated process.

    Or even better: on open, convert it to linear; on save, convert it back. Maybe then Photoshop and company would actually handle alpha channels correctly *grumble* *grumble*...
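    A minimal sketch of that pipeline (my own illustration, assuming NumPy, a pure 2.2 gamma, and a plain 2x2 box filter standing in for a real resampling filter):

        import numpy as np

        GAMMA = 2.2

        def halve_gamma_correct(img):
            """Downscale a uint8 (H, W, 3) image by 2x, averaging in linear light.
            Assumes H and W are even."""
            x = (img.astype(np.float64) / 255.0) ** GAMMA        # Pixel^GAMMA: decode
            x = (x[0::2, 0::2] + x[0::2, 1::2] +
                 x[1::2, 0::2] + x[1::2, 1::2]) / 4.0            # Scale in linear light
            return np.round(255.0 * x ** (1.0 / GAMMA)).astype(np.uint8)  # Pixel^(1/GAMMA)

    The naive version does the same averaging directly on the encoded bytes, which is exactly what darkens high-contrast detail.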

  • Re:HA! (Score:2, Interesting)

    by Anonymous Coward on Wednesday February 24, 2010 @02:11AM (#31256088)

    > When you're used to Windows, Mac text looks blurry

    If you're programming in OS X and antialiased text annoys you, simply turn it off; many editors support disabling it. Then use a font, such as Monaco, that has been hand-tweaked at the right sizes for screen display, and everything looks great, with no blurriness.

  • by Anonymous Coward on Wednesday February 24, 2010 @03:55AM (#31256622)

    Eric Brasseur, in his otherwise excellent web page on the subject, writes:

    Technically speaking, the problem is that "the computations are performed as if the scale of brightnesses was linear while in fact it is an exponential scale." In mathematical terms: "a gamma of 1.0 is assumed while it is 2.2." Lots of filters, plug-ins and scripts probably make the same error.

    That is sort of right in a sense, but the assumptions that it is a problem with the scaling algorithm and, worse, that it should be fixed in the algorithm are both wrong, and a dangerous way to start thinking about this situation. In fact, this isn't really a bug any more than the misuse of any tool is a bug. What is really needed is more knowledge and expertise in how to create and use a color-managed imaging workflow.

    Almost all image processing algorithms contain simple addition somewhere, and for simple addition to work as you might expect, the image encoding function must be linear. (Adding logarithms actually results in a multiplication operation; that is how slide rules work. What's a slide rule? Hmmm...) Scaling involves filtering, and nearly all filtering algorithms involve addition, but so does a blur, over, blend, key, mask, matte, color correction, dithering and quite a few others. This problem isn't limited to scaling operations, and it isn't actually a bug at all, but rather an education and understanding issue.

    The right thing to do is to linearize your images prior to doing any kind of manipulation and keep them linear throughout the whole image manipulation process, only transforming them to something else for final delivery (or approval, if that is an issue). By linear, I mean that the relationship between the luminance encoding and light is linear. You see, it isn't the image or the file that is linear or non-linear; it is the transfer function between code values and light. When I say linear, I mean that for a function f, f(a) + f(b) = f(a+b) and a*f(b) = f(a*b). Anything else is non-linear.
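    A minimal sketch of that workflow (my own illustration, assuming NumPy; the piecewise constants are the standard sRGB ones from IEC 61966-2-1, not a pure 2.2 power):

        import numpy as np

        def srgb_to_linear(v):    # v in [0, 1], sRGB-encoded
            return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

        def linear_to_srgb(v):    # v in [0, 1], linear light
            return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

        # Decode once on load, do all the math (scale, blur, over, blend, ...) on
        # the linear data, and re-encode only for final delivery.
        encoded = np.random.default_rng(0).random((4, 4, 3))  # stand-in for a loaded image
        linear = srgb_to_linear(encoded)
        result = linear_to_srgb(linear.mean(axis=(0, 1)))     # any linear-space operation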

    The problem with working with linear images is that unless you are careful and know what you are doing, your monitor, being a non-linear viewing device, won't display the images correctly. So if you linearize your images, they won't look "right" on the monitor, although they will be correct mathematically. The solution is to employ a LUT in the viewer such that linear data is displayed correctly. Photoshop can do this and has been able to for years. The fact that people don't understand how to use that feature, and why, doesn't mean that there is a bug in the scaling algorithm.

    By the way, high-end 2D image manipulation tools, Shake and Nuke to name two, are used in the most demanding imaging application possible: motion picture visual effects. With regard to internal image processing, they presume that data is linear. They rely on the expert knowledge of the user to ensure that the proper image encoding, that being linear to light, is used.

    Lastly, thinking of this as a bug in scaling algorithms and trying to fix it by revising scaling algorithms, rather than recognizing that it is really a problem of image data being encoded non-linearly, is a completely wrongheaded approach. It addresses only one use case, scaling, and ignores all of the other problems with non-linearity. A better approach would be to educate people as to what is actually happening and why, and then teach them to use the tools they have.

    Just presuming that the input image has a 2.2 gamma and correcting for it only in the case of scaling, but not in other 2D image manipulations, only serves to muddle the issue.

    Ultimately, what you want is a display subsystem (display card and monitor) that has been profiled and then corrected to match as closely as possible some idealized target like sRGB or Rec. 709, to name two. The easiest way to do that i

  • by Anonymous Coward on Wednesday February 24, 2010 @05:01AM (#31256954)

    1. Spend a lot of time calibrating your camera yourself. This is only cheap if your time is worthless.

    Typically this should be done during the initial set-up of any new camera that a photographer purchases. A GretagMacbeth colour checker is cheap, the required shot to evaluate the performance of the sensor is quick and easy to set up, and the processing of this test image is fast with the right tools (like this script for Photoshop [rags-int-inc.com]). It should take under an hour to do it right and make it part of the automatic stage of processing your RAW files (basically setting ACR/Lightroom's demosaicing stage), but the benefit is that every picture taken from then on needs no extra calibration. Thus your prints look like your shots, assuming the rest of your workflow is equally well calibrated.

    While your time is valuable, if you do not calibrate like this, you're wasting time further down the line on EACH image, and thus it's more expensive not to do it...

  • by jcupitt65 ( 68879 ) on Wednesday February 24, 2010 @05:20AM (#31257042)

    Yes, it depends on the colourspace you use for the resize. If you resize in a non-linear colourspace, you will get (usually) tiny errors.

    I'm the author of one of the programs listed as defective (nip2). If you care about this issue, all you need to do is work in a linear space, such as XYZ (Click Colour / Colourspace / XYZ).

  • by Anonymous Coward on Wednesday February 24, 2010 @06:04AM (#31257258)
    A low-pass filter wouldn't work per se, as the color encoding used is non-linear. Most filters instead take each channel of the picture and produce an area average, akin to a low-pass filter, but this is exactly where the problem lies: a checkered black-and-white image (that is, every odd pixel on one row is white and the others black, reversed on the next row) is averaged to a contiguous half-tone gray image, which is not the expected result, because the gray scale is not linear in terms of luminance. You'd need it averaged to rgb(180,180,180)* instead of rgb(128,128,128).

    *180 is not the actual value
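    For the record, a back-of-the-envelope check of what the value actually comes to (pure 2.2 gamma vs. the exact sRGB curve; plain Python, no extra assumptions):

        # The true light average of a 50/50 black-and-white checkerboard is 0.5.
        mean_linear = 0.5
        print(round(255 * mean_linear ** (1 / 2.2)))                    # 186 (2.2 gamma)
        print(round(255 * (1.055 * mean_linear ** (1 / 2.4) - 0.055)))  # 188 (exact sRGB)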
  • Re:HA! (Score:4, Interesting)

    by David Jao ( 2759 ) <djao@dominia.org> on Wednesday February 24, 2010 @08:12AM (#31257904) Homepage

    Talk about changing the goalposts! This whole Slashdot story is about fidelity.

    Whoa there. First of all, as a neutral party, I dislike both Apple and Microsoft (I use Linux), and I have no agenda here. However, I do think that your viewpoint (as a self-admitted print publisher) is highly biased and useless to the majority of slashdot readers. This slashdot story is about image editing, but this thread is about on-screen font rendering. The two topics are different, and they need different goalposts.

    The technical ins and outs of photo editing and display are all about fidelity; why would that not be the case with typeface rendering and display?

    Viewing a photo is very, very different from reading text. In fact, I find it completely preposterous that anyone would regard the two as having anything in common.

    When I am reading text, I care about text fidelity, not image fidelity. I want to know what the letters and words are. I want to be able to read and re-read the text with minimal eyestrain. Unless you work in advertising or marketing (two industries which I loathe), the purpose of displaying or printing text is to accurately convey which letter is which and which word is which. Anything else (such as font accuracy) is only a means to this end. Anyone (such as you) who thinks font accuracy is worthy as an end in itself is clearly living in a different world from someone (such as me) who reads scientific papers (the vast majority of them on-screen, simply because carrying around 1GB worth of printed papers is impossible) and writes code for a living.

    I'm no great lover of Apple products, but as someone in print and publishing, this is one thing Apple does right and everyone else refuses to.

    There is a huge difference between printed text and text on a computer screen. The physical differences between the two are so vast that they can never be made to display exactly the same. I understand that, for someone like you who works in the publishing industry, it is important for computer screens to maintain fidelity to printed paper, since the printed paper is your end result. However, just because your profession depends crucially on font fidelity does not mean that the same holds for other people.

    You need to understand that text on a computer screen is not always intended for publication on a printed page. Computer programming, in particular, involves writing large amounts of textual code, code which is almost never printed. Many slashdot readers are computer programmers, and couldn't give a flying f*ck about printing out their source code on paper. In such a situation, it is counterproductive to insist on image-level fidelity between text displayed on a computer monitor and text printed on a paper page. In fact, the optimal display strategy for computer programmers is almost completely the opposite of what you are saying, namely: render the font on the computer monitor so that it is as readable as possible on a computer screen, with absolutely zero regard for fidelity to printed output.

    Scenes intentionally filmed with judder that people go to great lengths to smooth out; proper calibration that is blown away by dynamic settings; sound mix and dynamic range that is hopelessly trampled by bass-loading equalizer fiends; photos being displayed in some other color space are all the same issue.

    Ironically, text rendering is the one issue that is different from all of the above. Text is a discrete medium: a letter is either an a, b, or c, or so on. Photos, sound, and video are analog media, perceived along a continuum by the human brain. Text fidelity means only one thing: being able to tell letters apart.

    If you were to print out Microsoft's rendered fonts in a novel, people would demand a reprint by the publisher. The objective reference is the superior reference.

  • by Xabraxas ( 654195 ) on Wednesday February 24, 2010 @11:01AM (#31259342)
    I noticed this bug the other day but I thought perhaps I made a mistake somewhere. I am creating a Drupal site for photos and it has a dark background. I was just testing out the image upload and I used an unscaled image. Later I scaled the same image down to save space and re-uploaded the image. The brightness was noticeably different. It's actually very hard to tell in a lot of cases, especially with a brighter background. A dark background really makes the bug apparent.
  • by Anonymous Coward on Wednesday February 24, 2010 @09:36PM (#31267448)

    Why should the scaling algorithm take into account the display's gamma? It should be the job of the display hardware and/or driver, not the software, to accurately depict the luminosity of 0-255 (or 0-4095 for 12-bit displays). If the imagery is to be used for scientific purposes, one does not want the resampling to tweak the values as suggested in the article. In his example of a 50% reduction of a 4x4 square, the arithmetic mean of 127 is the correct value if the reduced image is going to be fed into a process where 0-255 do not represent display brightness values but some other values entirely. That way the user can display the image on different devices, with different gamma values, and not have to mess around with the original data.
