Graphics Software

Improving Digital Photography 401

Posted by michael
from the let-there-be-light-and-high-resolution-graphics dept.
Milican writes "'It's easy to have a complicated idea,' Carver Mead used to tell his students at Caltech. 'It's very, very hard to have a simple idea.' ... And now one of Mead's simplest ideas--that a digital camera should see color the way the human eye does--is poised to change everything about photography. Its first embodiment is a sensor - called the X3 - that produces images as good as or better than what can be achieved with film." We had a previous story about Foveon last February.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Pixel Noise (Score:3, Informative)

    by DrinkDr.Pepper (620053) on Wednesday January 08, 2003 @02:10PM (#5041141)
    How is this at all like the way the human eye sees?

    I hate pixel noise in my digital pictures. I have heard that since red color has to be detected at the deepest part of the silicon there is an abundance of noise in the reds.
    • Re:Pixel Noise (Score:2, Insightful)

      by forand (530402)
      I think the point is that we don't have three detectors in our eyes to see base colors and then construct the true color.
      • Re:Pixel Noise (Score:3, Insightful)

        by tgibbs (83782)
        I think the point is that we don't have three detectors in our eyes to see base colors and then construct the true color
        Except...we do. (Well, 4 detectors if you count the rods). But our brains are probably smarter in using the "sidebands" of the three color detectors to help construct the true colors.
    • by SuperKendall (25149) on Wednesday January 08, 2003 @02:20PM (#5041243)
      It sees a real "color" instead of one of red/green/blue (dispersed in fine pixels, of course). It may not be able to see red quite as well as other colors, but that only means the sensitivity at the red level is the limitation you have for the picture as a whole.

      What you don't get is moiré patterns - at all!! That is what you probably hate when you say you hate "pixel noise" because it's totally obvious (due to the color changes), very distracting, and annoying to clean up after.
      • by tjwhaynes (114792) on Wednesday January 08, 2003 @02:31PM (#5041325)

        It sees a real "color" instead of one of red/green/blue (dispersed in fine pixels, of course). It may not be able to see red quite as well as other colors, but that only means the sensitivity at the red level is the limitation you have for the picture as a whole.

        I don't think I agree - it still looks like a standard red/green/blue pickup (and that is exactly like the human eye - we don't have different cones for, say, lime green and grass green). There is possible mileage in having more layers picking up wavebands spanning a smaller range of wavelengths (and there are humans with 4 types of cone rather than 3 - tetrachromatic vision) but it's not going to matter too much for our normal vision. Useful for simple spectroscopy (colour profiles etc.) though.

        What you don't get is moiré patterns - at all!! That is what you probably hate when you say you hate "pixel noise" because it's totally obvious (due to the color changes), very distracting, and annoying to clean up after.

        It's still pixelated, so you will still get moiré patterns as soon as the smallest details are finer than the resolving power of the X3 bins (think Nyquist's theorem). However, the bizarre colours you get from a fine-grained black-and-white grid shouldn't be present to the same extent, as all the measurements of colour intensity are done at the same point in the X3 layer, as opposed to the different spatial positions of the red, green and blue bins in a colour CCD.

        Cheers,

        Toby Haynes
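To make the Nyquist point above concrete, here is a minimal Python sketch; the pixel pitch and test frequencies are hypothetical, chosen only for illustration. A pattern finer than half the sampling rate yields samples indistinguishable from a coarser, aliased pattern - which is exactly why detail beyond the sensor's resolving power turns into moiré.

```python
import math

# Sketch: sampling a fine periodic pattern below the Nyquist rate
# produces a spurious low-frequency (aliased) pattern.

def sample(freq_cycles_per_mm, pitch_mm, n):
    """Sample a sinusoidal test pattern at a fixed pixel pitch."""
    return [math.sin(2 * math.pi * freq_cycles_per_mm * i * pitch_mm)
            for i in range(n)]

pitch = 0.01                  # hypothetical 10-micron pitch -> 100 samples/mm
nyquist = 1 / (2 * pitch)     # 50 cycles/mm

# A 60 cycles/mm pattern (beyond Nyquist) aliases to |100 - 60| = 40 cycles/mm:
# its samples mirror those of a genuine 40 cycles/mm pattern.
fine = sample(60, pitch, 50)
alias = sample(100 - 60, pitch, 50)
print(all(abs(a + b) < 1e-9 for a, b in zip(fine, alias)))  # True
```

The only thing the X3 layout changes is that the alias shows up as luminance detail rather than as false colour, since all three channels are sampled at the same spatial position.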

        • Nyquist free... (Score:5, Informative)

          by SuperKendall (25149) on Wednesday January 08, 2003 @02:39PM (#5041396)
          It's still pixelated, so you will still get moiré patterns as soon as the smallest details are finer than the resolving power of the X3 bins (think Nyquist's theorem). However, the bizarre colours you get from a fine-grained black-and-white grid shouldn't be present to the same extent, as all the measurements of colour intensity are done at the same point in the X3 layer, as opposed to the different spatial positions of the red, green and blue bins in a colour CCD.

          The bizarre colors (what I really hate about digital photos) are not just reduced - they are gone. If you read the review at DPReview.com you'll find that it has noise-free resolution right up to Nyquist, and you get some detail beyond. Here's the relevant section (near the very end of the review, where they test against some resolution charts):

          The SD9 is capable of delivering all nine individual lines of the horizontal or vertical resolution bars up to its maximum absolute resolution (sensor vertical pixel count) and slightly beyond. Note also that because the X3 sensor doesn't need a color filter array it doesn't suffer from color moiré. Absolute resolution is just less than the Canon EOS-D60, Nikon D100 and Fujifilm S2 Pro (at 6 mp).

          However, because the X3 sensor doesn't use a low pass (anti-alias) filter it is able to resolve detail all the way up to Nyquist. Beyond Nyquist the system will alias without any objectionable color moiré. Where a Bayer sensor camera would turn detail beyond Nyquist (such as distant grass texture) into a single plane of blurred color the SD9 will continue to reproduce some individual pixel detail (without color moiré).
    • Re:Pixel Noise (Score:5, Informative)

      by tjwhaynes (114792) on Wednesday January 08, 2003 @02:24PM (#5041272)

      How is this at all like the way the human eye sees?

      This Foveon system is like the human eye inasmuch as the light photons penetrate multiple layers and register at more than one level in the same spot. For example, take a look at this cross section of the human retina [eyedesignbook.com].

      Current CCDs only collect one waveband of light in one area. To simulate colour, they collect three different wavebands in adjacent areas on the surface of the CCD. Hence the funky moiré patterns that you see in tightly patterned cloth in the sample piccies on the site.

      I hate pixel noise in my digital pictures. I have heard that since red color has to be detected at the deepest part of the silicon there is an abundance of noise in the reds.

      If the upper layers are completely transparent in the red, then your concerns don't apply. As long as the actual transparency of the upper layers is reasonable, then there is little cause to worry - traditionally CCDs are far more sensitive to the red end of the spectrum than the blue so even modest photon loss at the red end is unlikely to seriously degrade the pictures.

      The other nice thing about this technology is that the spatial size of the light bins is approximately three times larger than that of an equivalent physically sized CCD - that means better signal-to-noise ratios for this new technology.

      Anyway, the presentations look compelling. I await cameras with reasonable numbers of megapixels (say 4Mpixels +) and reviews...

      Cheers,

      Toby Haynes

      • Re:Pixel Noise (Score:5, Insightful)

        by tgibbs (83782) on Wednesday January 08, 2003 @02:54PM (#5041528)
        This foveon system is like the human eye inasmuchas the light photons penetrate multiple layers and register at more than one levels in the same spot.
        Uh, no. Only one of those layers actually registers light--the others are just "wiring" (yes, the mammalian eye actually runs its connections in front of its light sensors). Actually, it is less like the way the eye works. That doesn't mean that it isn't better, however. The notion that a camera should work like an eye is fundamentally misguided--would you want a camera that only captured color and high resolution at the very center of the image, and was low-resolution black & white everywhere else?
    • Re:Pixel Noise (Score:5, Informative)

      by mohaine (62567) on Wednesday January 08, 2003 @02:34PM (#5041353) Homepage
      Current color CCDs only measure one of the primary colors at each pixel. Once a picture is taken, the missing colors are 'guessed' by looking at the surrounding pixels that did capture that color. This process is really slow because each pixel is missing 2 colors.

      The X3 actually measures RGB at each pixel, giving much better quality, at a higher speed.
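As a rough illustration of the interpolation step described above, here is a toy Python sketch. The 2x2 filter tile and nearest-neighbour averaging are simplifying assumptions; real demosaicing algorithms are considerably more sophisticated.

```python
# Sketch: on a Bayer sensor each photosite records one channel, and the
# two missing channels are interpolated from neighbours. An X3-style
# stacked sensor records all three channels at every site and skips
# this step entirely.

BAYER = [['R', 'G'], ['G', 'B']]  # repeating 2x2 colour filter tile

def demosaic_pixel(raw, y, x, want):
    """Average the neighbouring photosites that actually measured `want`."""
    h, w = len(raw), len(raw[0])
    vals = [raw[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))
            if BAYER[j % 2][i % 2] == want]
    return sum(vals) / len(vals)

# Uniform grey scene: every photosite reads 100, whatever its filter.
raw = [[100] * 4 for _ in range(4)]
r, g, b = (demosaic_pixel(raw, 1, 1, c) for c in 'RGB')
print(r, g, b)   # each channel reconstructed as 100.0
```

On a flat scene the guesses come out right; the false-colour artifacts appear when neighbouring photosites disagree, i.e. on fine detail.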

    • Re:Pixel Noise (Score:5, Interesting)

      by rendermouse (462757) on Wednesday January 08, 2003 @02:36PM (#5041374) Journal
      Read a bit about The Color-Sensitive Cones [gsu.edu]

      "In 1965 came experimental confirmation of a long expected result - there are three types of color-sensitive cones in the retina of the human eye, corresponding roughly to red, green, and blue sensitive detectors. "
      • Re:Pixel Noise (Score:5, Interesting)

        by Arthur Dent (76567) on Wednesday January 08, 2003 @03:22PM (#5041783)
        Actually, there are four. [216.239.37.100]

        Another possible effect of having two X-chromosomes is that a woman who is a carrier for colour-blindness might have one X-chromosome with red and green and one with green and a different green. Her son, who has only the two green pigments, is colour blind. But the woman herself may have cone cells for blue, red, green and the extra green. Instead of having the usual three dimensions of colour she might have four. She would be a tetrachromat.
  • by intermodal (534361) on Wednesday January 08, 2003 @02:10PM (#5041143) Homepage Journal
    wait till a few years down the road once he's up to X10!
    • Whats the hold up? (Score:3, Interesting)

      by cosmosis (221542)
      The X3 announcement came out almost a year ago, and still there is only one, ONE camera that has this technology. If it's so superior (which it is, by the way!) then why the hell hasn't this thing been flooding the market? It defies explanation.

      In fact, earlier this year the announcement was that we should see several cameras with X3 technology on the store shelves in time for Christmas. What happened?

      Planet P Blog [planetp.cc] - Liberty with Technology.
      • by StarFace (13336) on Wednesday January 08, 2003 @03:48PM (#5042046) Homepage
        Primarily because it is still a bit buggy and bleeding edge. CCD is a proven technology, with a lot of time put into its development. That is why Nikon has stuck with CCD chips. Canon has been using Bayer CMOS chips in some of their prosumer cameras, but the top of the line 1Ds still uses a CCD chip.

        X3 still displays some odd behaviors under certain conditions, and until these problems are resolved, the "big guys" aren't going to want to put it into a high end camera -- especially when their customers have grown to expect a certain level of all-around quality and attention to detail from them.
  • by Drakonian (518722) on Wednesday January 08, 2003 @02:10PM (#5041150) Homepage
    in Photography. Check out the article here [popsci.com].
  • That's all great and all... but until there are affordable printing solutions that can print better than film, there won't be widespread adoption.
    • I use Shutterfly.com to print and share my digital photos. 4x6 prints are less than $0.50 each. Here's a link [shutterfly.com] to some crummy photos I've taken with my new Canon Power Shot S200.

    • Re:digital print... (Score:4, Interesting)

      by angle_slam (623817) on Wednesday January 08, 2003 @02:33PM (#5041350)
      That's all great and all... but until there are affordable printing solutions that can print better than film, there won't be widespread adoption.

      The minilab system that is widely regarded as the best is the Fuji Frontier system [fujifilm.com]. How does it work? By scanning film. Of course, it accepts files from digital cameras as well.

      What is the best way to get large, "professional" prints? The Lightjet [cymbolic.com]. How do these operate? Using very high quality scans! (See West Coast Imaging [westcoastimaging.com], for example). My point? You can already get digital images produced in the exact same manner as the best film prints.

      There are already a lot of people who think digital photography has surpassed even medium format photography. See the Luminous Landscape [luminous-landscape.com], for example.

      As for widespread adoption, photojournalists have all but abandoned film. The P&S crowd is already beginning to abandon film.

    • Or getting over the mental block and old way of doing things by having to hold the picture in your hand to get enjoyment out of it. For 99.9% of pictures that the average person takes and gets developed, viewing them on screen or even printing them on a decent inkjet is more than sufficient. The professional or the experienced novice trying to get the perfect shot will always have to pay a premium, as their limited numbers will never amount to "widespread" adoption.
  • Review of X3 Camera (Score:2, Informative)

    by SparkyTWP (556246)
    For those of you interested in a review of a X3 camera and a simple explanation of the technology behind it, this review [dpreview.com] is pretty decent.
  • Sadly... (Score:2, Insightful)

    by Fideaux! (44069)
    ..Mead picked probably the crappiest camera company possible to produce his first camera. Sigma is known for making low-cost, and relatively low-quality, aftermarket lenses for the big camera manufacturers. Others will argue with the same info that they were given by the camera salesperson who makes a comparatively huge commission on the Sigma (or ProMaster, or one of Sigma's off brands), but trust me, they suck. (They might also say that Sigma builds lenses for the big camera manufacturers; also false.) They've made a few cameras that have been embarrassing flops.

    I've talked to a few people who have used the Foveon Sigma and while they rave about the technology, they can't stand the camera for handling, feature set, etc.

    What Mead needs to do is play whatever game Canon/Nikon/Minolta/Olympus wants him to play to get his chip in their cameras. Then it'll really take off.

    • The feature set is supposed to be pretty good, according to DPReview. The only real complaints they had about the camera were the red noise, and poor behavior in low-light conditions. The camera had some really nice features including "undo last delete", histograms for each of the color channels, and even the ability to zoom in on an area of the picture while examining the histogram to get a histogram for a small region of your photo. The software that comes with the camera is also supposed to be very good (though I have no idea if it works in OSX yet).

      Over at Photo.net [photo.net] people seem to like some of the Sigma lenses pretty well. The 70-200 I think, is supposed to be a fine lens and people use it on other bodies all the time.

      I agree I would have liked to see a Nikon or Canon body with this chip, but given that's probably a year or two off I'm probably going to buy the SD9 as my first digital SLR.
  • by MarcoAtWork (28889) on Wednesday January 08, 2003 @02:14PM (#5041177)
    for an excellent (as usual) review of a camera based on this sensor check dpreview

    http://www.dpreview.com/reviews/sigmasd9/ [dpreview.com]
    • While an excellent review, if you're just looking for a good sense of what the sensor can do without reading all twenty four pages (and completely slashdotting dpreview), check out this page [dpreview.com].

      The net-net of the review is that it's a great sensor, very accurate, the camera has some first-generation issues, and, of particular interest to this audience, uses a proprietary x3f raw image format that must be converted with Mac or Windows software.
  • by L. VeGas (580015) on Wednesday January 08, 2003 @02:14PM (#5041185) Homepage Journal
    "It's very, very hard to have a simple idea."

    I don't know about anyone else, but this GW Bush bashing is getting a little tiresome.
  • by aussersterne (212916) on Wednesday January 08, 2003 @02:15PM (#5041188) Homepage
    Before all of the replies saying that digital is for geeks and film will forever rule, please be sure that you have used current and professional quality digital gear, including 35mm gear made by Canon or Nikon with standard lens mounts, digital medium or digital large format backs (depending on the type of digital vs. film comparison you plan to make).

    Consumer digital cameras are one thing... X3 is another (still hotly debated)... but most photo editors and labs out there outright agree that a Canon EOS-1D, EOS-D60, a Fuji S2 or a Nikon D1X or D100 simply takes better pictures in nearly every regard (including resolution) than a 35mm film camera, with any brand or grade of film. With the latest range of full-frame cameras such as Canon's EOS-1Ds (11 megapixel, I believe) and Kodak's 14 megapixel offering, the distance between digital and film (with digital on top) will only increase.

    And before you comment on other film sizes, realize also that many of the largest advertising companies shooting commercial spreads abandoned film long ago and are shooting with digital medium format or large format backs. Yes, many of the fashion or product spreads you see in your favorite checkout stand magazine are in fact digital these days.

    Film is well on its way to becoming a plaything for history hobbyists and an art tool for retro artists, and no amount of "ludditing" will change this.
    • Film still rules for taking pictures in low-light. Digital cameras just can't handle low-light situations, by their very nature.

      Plus, the speed of film is better. Digital cameras aren't very good for action photography.

      So, uh, yeah. Digital is great for posed shots in good lighting. So I guess it is the best. Whatever.
      • by aussersterne (212916) on Wednesday January 08, 2003 @02:31PM (#5041328) Homepage
        Film still rules for taking pictures in low-light. Digital cameras just can't handle low-light situations, by their very nature.

        Plus, the speed of film is better. Digital cameras aren't very good for action photography.

        So, uh, yeah. Digital is great for posed shots in good lighting. So I guess it is the best. Whatever.


        Remember, I said "please be sure you have used the gear".

        The ISO 1600 and 3200 shots from the pro digitals are easily less grainy and have better dynamic range than their film counterparts. Try it. And my EOS-1D can do 1/16,000 shutter speed with zero lag. Is that fast enough for you?

        Yet another person who is bashing without trying.
        • Ha!

          Yes, and don't forget the other end of the spectrum too, that these cameras can take wonderful long exposures as well. The D60 in particular can sit on Bulb for minute after minute without any major noise or pixel errors. Taking ten minute bulb exposures seems fairly "low-light situation" to me. I've had comparable results with the D100 as well. I also regularly take 10 to 15 second exposures with it, and never once have I had to contend with excess noise, boomy shadows, or any other difficulties.

          Methinks these people are playing with their friend's Kodak DC3400 or something.
      • Really? How low are the light levels you're talking about? I took some wicked night time pictures recently with my brother using his Canon G20 set to up to 15 seconds of exposure. Pretty good for a consumer camera, plus way better than anything we could do as amateurs using 35mm due to the immediate feedback.
      • Do you mean long exposures, or low contrast? For long exposures, this four minute exposure [dpreview.com] disagrees with you. In the article the guy says he couldn't even see that terrace it was so dark.

        What do you think it is about low light situations that precludes digital cameras from working well?

        As for speed.. yeah, my digital camera only goes up to ISO 1000. But you don't have to go to 1000 to take normal non-posed shots successfully (There's a lot of space between posed shots and extremely fast moving action shots.)

        You forgot to add that you can't use UV or IR film in digital cameras. :D
      • Umm...you do realize you repeated yourself, right? High speed=short shutter time=low light.

        Digital cameras aren't very good for action photography.
        Right...and there's a world-wide market for maybe five computers (true when it was said) and 640K is enough for anyone (true when it was said). Methinks you missed the point of the article (you did read the article, didn't you?) There is a new technology now available that is about an order of magnitude better than CCDs. So I suppose that what you say is true...for now.
      • Hubble? (Score:3, Informative)

        by SteveM (11242)

        Film still rules for taking pictures in low-light.

        So that's why the shuttle keeps visiting the Hubble Space Telescope, to pick up the film!

        There is also a company called SBIG [sbig.com] that makes a line of digital imagers for amateur astronomers.

        Steve M

        • Re:Hubble? (Score:5, Insightful)

          by afidel (530433) on Wednesday January 08, 2003 @02:56PM (#5041539)
          High-quality, extremely expensive digital imaging devices are extremely good at capturing low amounts of light, but for consumer cameras the noise level in the electronics is too high, so low-light captures get drowned out by the natural noise in the signal. Most CCDs used for astronomy are cooled through some means, usually liquid nitrogen, to bring the noise level in the sensor down to small fractions of what it would be at room temperature. This also leads into one of the negative points of the Foveon tech, which is that its noise floor is about 3 times higher than the CMOS tech that Canon is using in cameras like the D-30 and D-60.
    • How many people own a $4-5k (or more) camera? The models you list are wonderful for professional photographers and studios, but don't slam the average user for not being able to afford pro gear. Current consumer devices take relatively good photos. Still not as good as what a hobbyist with a midlevel analog camera can do.

      Most importantly, not many consumer level output devices can print photos as well as film. I have seen some really nice photo prints from digital but, on the average, still not as good as well developed film.
      • I posted simply to pre-empt the inevitable stream of "Digital sux, its for chumps and vain people, film rulez!" posts that seem to always occur when Slashdot posts a story about digital shooting.

        You're right, an EOS-1D is still pretty pricey... But you should be happy about its success. As Canon (for example) has continued to release new models, the prices of the low-end pro cameras like the EOS-D30 (nearly on par with 35mm pro film quality, much better than any 35mm consumer film quality) have dropped like a rock on the used market, to similar price points of high-end consumer digitals.

        If the innovation continues at this pace and Canon and Nikon continue to flood the market with better and better cameras, you will soon be able to buy a better-than-35mm pro digital system for approximately the same price as a 35mm film system. Of course, the only problem is that you will still be drooling over the high-end models, which will continue to improve...

    • Good luck bub.

      Digital sucks when it comes to zooming, panning, tilting, or yawing (i.e., any camera movement). The sad fact is that you get artifacts and skips no matter what your speed or resolution. Until the capture rate is high enough that the human eye can't perceive the problems (that ol' DA boxcar versus the analog sine wave), it will never look good enough. At that point (petabyte storage, anyone?) you have achieved quality that analog film had for the past 40 years.

      • See my earlier post about 1/16,000 shutter speed on an EOS-1D. It's great for sport shooting! I challenge you to generate an "artifact" from movement at 1/2000, much less at eight times that speed. Yes, we are talking digital.

        The human eye? The human eye sees the print when it is finished, after the camera has captured it at such speeds. I challenge you to recognize anything with your "human eye" even if it is shown to you for a whopping 1/500 of a second!

        Or are you talking about viewfinders? Pro digitals use glass, through-the-lens SLR viewfinders, just like film cameras. And consumer digital cameras (i.e. Olympus E line) are starting to use glass through-the-lens viewfinders, too.

        If you're merely talking about the EVF (i.e. LCD) viewfinders in some consumer cameras, then you have a point -- these are difficult to use when framing a shot. But it has little bearing on the quality of the digital sensor itself or the quality of the image, and as I mentioned, no serious amateur or pro would buy a camera that uses an EVF anyway! Certainly not all digitals are saddled with this limitation, nor is it an inherent limitation of a digital camera.

        People should become educated before they post, "Bub."
    • And before you comment on other film sizes, realize also that many of the largest advertising companies shooting commercial spreads abandoned film long ago and are shooting with digital medium format or large format backs. Yes, many of the fashion or product spreads you see in your favorite checkout stand magazine are in fact digital these days.

      you're absolutely right, and that's all very fine and good for ad and fashion companies, who need to get their images processed and laid-out as quickly as possible; but there's still absolutely no comparison between film and digital for large-format artistic work, where the quality of the image is key.
      before somebody cites me an example of an arthouse that's gone all-digital: i've looked at 8x10s from a Phase One H20 [vistek.ca] next to contact-print 8x10s from Fuji and Kodak film, and while it's reasonably close, the film prints still blow the digital print away. it's really visible in the tonal changes and ultrafine detail - the H20 is 4080x4080, but good fine-grained film is ~3000dpi (perceived even finer in color film, since the three stacked layers of emulsion tend to fuzz out the detail of the grain in any one of them). that makes an 8x10 24,000x30,000...that's a lot of grains/pixels; call me back when there's a digital back that large. so while digital is making huge inroads in a lot of areas of photography, i think it's safe to say that for situations where image quality is the main concern, large-format film has nothing to worry about for a while.
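The pixel-count arithmetic in this comment can be checked in a few lines of Python (the 3000 dpi figure and the H20's 4080x4080 resolution are the commenter's numbers, not independently verified):

```python
# Quick check: an 8x10 sheet of film resolved at ~3000 dpi versus a
# 4080x4080 digital back, using the figures quoted in the comment.
film_w, film_h, dpi = 8, 10, 3000
film_px = (film_w * dpi) * (film_h * dpi)
back_px = 4080 * 4080

print(film_w * dpi, 'x', film_h * dpi)   # 24000 x 30000
print(round(film_px / back_px, 1))       # ~43x the back's pixel count
```

On those assumptions the film sheet carries roughly 43 times the pixel count of the digital back, which is why the gap matters for large-format work in particular.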
    • by harrkev (623093) <kfmsd@NoSPAM.harrelsonfamily.org> on Wednesday January 08, 2003 @03:19PM (#5041748) Homepage
      Before all of the replies saying that digital is for geeks and film will forever rule, please be sure that you have used current and professional quality digital gear, including 35mm gear made by Canon or Nikon with standard lens mounts, digital medium or digital large format backs (depending on the type of vs. film comparison you plan to make).
      I disagree. You can put together quite a nice film-based SLR system for around $500-800 or so (camera and lenses -- tripods/bags/filters extra). To get similar quality from a digital SLR would add at least $1000 (probably more) to the price tag. $1000 will buy a lot of film and processing. I am sticking with film for now.

      I don't want to start a flame war, but look at resale prices for digital vs. film. Even 20-year-old film cameras can still command a respectable resale value. A 3-year-old digicam is almost considered worthless these days.

      • It all depends on your definition of "nice film-based SLR." I was $3-4k into Canon film cameras before I bought my D60; I don't think that's uncommon--one lens now, a new flash later, then a new body, it all adds up over the years.

        So, adding a $2,200 D60 wasn't a *huge* step, price-wise. I've had it around 6 months, and I've shot around 7,000 frames with it. Assuming for the moment that I'd have shot the same number of frames had I been using film, that averages out to about $0.31/frame, which is in the same general range as film and processing (that's roughly $0.28/frame at $10 for 36 exposures).

        Assuming that I've got at least another couple years of functional use in the camera, the per-frame cost should drop down under a dime. Plus, I get instant feedback (nice when fiddling with lighting problems) and it's easier for me to sort, edit, and produce prints with digital than it is with film.

        So, with six months of use, you can start to argue that it's paid for itself. Add another couple years of use, and it'll be hard to argue that it would have been cheaper to use film. So, even if it has no resale value in 3 years, it'll still have been a good move, financially speaking.

        I suppose it all depends on how much you shoot.
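The break-even arithmetic above can be sketched in a few lines of Python (all prices are the commenter's figures from 2003, not current data):

```python
# Break-even sketch: a $2,200 digital body vs. film + processing at
# roughly $10 per 36-exposure roll.
body_cost = 2200.0
film_per_frame = 10.0 / 36            # ~$0.28 per frame

def digital_cost_per_frame(frames):
    """Amortised body cost per frame, ignoring resale value and media."""
    return body_cost / frames

print(round(digital_cost_per_frame(7000), 2))   # 0.31 after 7,000 frames
breakeven = body_cost / film_per_frame
print(round(breakeven))                          # 7920 frames to break even
```

As the commenter says, the conclusion hinges entirely on shooting volume: at 7,000 frames in six months the body pays for itself within the first year.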
      • You are right that a film SLR is much cheaper than a similar-quality digital camera in the low-end market. However, at the high end, digital cameras are becoming more price-competitive overall. Professionals often take hundreds of photos for every photo published in a print magazine.

        Also, as time goes on, digital will overtake the low-end market too. Last March, I bought a 4-megapixel digital camera for just $250. A couple of months later, at a party, I used both a Canon SLR and this camera. For the SLR, I used standard ISO-200 film developed at the local grocery store. For digital, I used one of the digital labs which prints for just 14 cents a copy. My judgement is that the digital prints are better. Besides, I only got the interesting ones printed. Also, no need to keep track of negatives. That was the last time I used my SLR.

        At the best quality level, film cameras are equivalent to 6-9 megapixels. At regular quality (ISO-200 print film developed at the grocery store), they are closer to 2-3 megapixels. A relatively cheap ($150) digital camera is likely to beat its P&S film counterpart.

        To anybody who wants to make a new $150+ investment in photography, I would seriously advise considering the digital alternative.

      • I don't want to star a flame war, but look at resale prices for digital vs. film. Even 20-year-old film cameras can still command a respectable resale value. A 3-year-old digicam is almost considered worthless these days.
        That's because current film cameras are arguably not any better than a 20 year old (high quality) one. In fact, some people consider them worse, since they dislike some new 'features', and the fact that new cameras are designed to a price point, and are almost disposable. Digital technology is still young, and new digital cameras are getting much better each year.
      • I disagree. You can put together quite a nice film-based SLR system for around $500-800 or so (camera and lenses -- tripods/bags/filters extra). To get similar quality from a digital SLR would add at least $1000 (probably more) to the price tag. $1000 will buy a lot of film and processing. I am sticking with film for now.

        My Canon A1 has sat on the shelf for about two years now; the only time it's been used has been when the digicam (Olympus C2100UZ [dpreview.com]) was away for repair. Yes, the Canon is a slightly better camera and at the limits takes better pictures - the Olympus is slightly flimsy, its viewfinder isn't really good enough for precise manual focus and its autofocus isn't always trustworthy. But the Olympus is far more versatile and far more usable. I take far more photographs with it. As to the range of photographic situations it's useful for, I've taken a lot of wildlife photographs, including dragonflies and other insects. I've taken a lot of night-time landscapes, moonlight and starlight shots. I've taken literally hundreds of photographs from and of fast-moving boats in bumpy water. And of course I've taken the usual photos of house, friends, pets, etc.

        As for resolution, 1600x1200 pixels is good enough for 8"x10" photos and doesn't look too bad blown up even further; obviously it isn't as good as 2000x3000 [jasmine.org.uk]. But for the amateur photographer the digi wins every time. It's lighter and more convenient to carry around, while still having as wide a range of focal lengths (equivalent of 38mm to 400mm) as I've ever carried. It takes snapshots without need for thought; and if you want to set things up to take a proper photograph, control over everything - shutter, aperture, focus, focal length - is there.

        You'll get the little Olympus for the same $500ish you were quoting for an SLR kit, and provided you use rechargeable batteries, that's all you'll pay. With an SLR, every shot you take costs film and processing, so if, like me, you take several thousand photographs a year, that easily adds up to more money than the camera.

        The next camera I buy will have a metal chassis and a proper optical viewfinder. It will also be better optimised for manual focus than the Olympus. But it will definitely be a digital - there's no way I'm going back to film.

  • by sweetooth (21075) on Wednesday January 08, 2003 @02:18PM (#5041223) Homepage
    Comparing the SD9 and a Nikon Coolpix 2500 is hardly fair. They compare pictures from an $1800 SD9, a $300 Nikon Coolpix 2500, and a $2300 35mm Nikon F5 film camera. Hell, replace the Coolpix 2500 with a Coolpix 5700, Nikon D100, or Nikon D1 and this comparison would mean more.

    The tech is cool, but the comparison makes it seem like biased reporting.
    • I would disagree; any new technology is going to be expensive, so using price as a guide (at least at first) is really unfair.
      Also, both cameras were 2MP cameras, how is that biased?

      • The technologies are different, therefore a 2MP vs. 2MP comparison isn't necessarily fair. The price difference doesn't appear to be a tech issue; if it were, they wouldn't have chosen an expensive 35mm camera. The 35mm camera and the SD9 aren't exactly your low-end consumer cameras, while the Coolpix 2500 is your entry-level digital camera. The SD9 is competitively priced based on quality, not on MP. So a comparison with a D60 or D100 would make much more sense, as they are of similar quality and similar price.
    • DP Review has a great review of the SD9. They compare it with the Canon D60 (6 megapixels vs. the SD9's 3 megapixels). The Foveon X3 in the SD9 equals the D60 for quality. This is because the X3 contains 10 million photodetectors, as opposed to the 6 million in the D60. Very impressive.

      HH
      --
  • by argmanah (616458)
    At some point, higher resolutions and more colors per pixel cease to be impressive. I honestly can't tell the difference between a normal-sized print from a 3 megapixel camera and a 5 megapixel one.

    While I'm sure at the professional photography level this is a tremendous advancement, I think to the consumer this is just another step to making their digital photos take up even more space on the memory card/stick/etc.

    • by Cuthalion (65550)
      My take on the matter (as an owner of an EOS D60, which has comparable effective resolution to the Sigma, though it uses more pixels to get there because it has no Foveon) is that the extra pixels really come in handy when you want to crop your picture afterwards. The more, and more accurate, information you have, the more you can do with it in Photoshop afterwards.
    • by ckaminski (82854)
      No one is making you take 6MP pictures in RAW mode with that Nikon D100. You can always take 6MP JPEGs, which are unbelievably smaller. It's all a tradeoff, one which film cameras don't give me. I don't have the benefit of being able to get 60 shots out of a 36-frame 35mm roll if I want lesser quality than I'm capable of.

      As to your substantive comparison of 3 and 5 MP images, I guarantee when you start looking at 8x10's, you *WILL* notice the difference. But you are right, with 4x6's and 5x7's, there is actually very little difference between 3 and 5MP cameras, other than the more sophisticated color processing that has come about in the past few years (if we're comparing old CCD's to new CCD's).

      -Chris
  • The new thing being done here is making each pixel sensitive to all three colours at once, but couldn't we concentrate on making the pixels smaller, so that pretty soon we could squeeze three into the space one used to occupy, achieving the same effect?
    • No matter how small you make them, you're still throwing away 2/3 of the information. That's the big leap here; no fuzzy interpolation, no artifacts, no guessing.
    • Not really. The overall size of the sensor actually matters because of optics. The larger the total sensor size, the easier it is to make wide-angle lenses and lenses with very shallow depth of field, suitable for portraiture, for example. Also, as pixel size increases, noise tends to decrease and sensitivity to increase. So making pixels smaller means that you have to combat more noise and work more sensitivity into the new, smaller pixel, and you have to put enough of them together to keep the total sensor size the same, or preferably bigger.


      Also, issues with "separate" pixels include how many pixels to devote to each color (usually there are more green pixels than others, for psychovisual reasons), what tiling pattern you put them in, how you combat moire, and how you interpolate/combine the data that you have. No one solution - stacking pixels à la Foveon, SuperCCD à la Fuji, etc. - is really better or worse. They each have drawbacks which resonate far into the firmware and algorithms. Also, there is the issue of sensor type. Currently we have CCD (various types, actually, as any astronomer can tell you), X3, and CMOS, and each is continually being improved. Technical progress with one type may well surpass a theoretically more pleasing design.

    • Yes you could, but increasing the number of pixels on the sensor increases the cost. The foveon x3 sensor captures detail roughly equivalent to 2x (not 3x) that of a conventional sensor. So a 3 megapixel foveon == 6 megapixel conventional, but costs less. A 6 megapixel foveon (which doesn't exist yet) would equal 12 conventional and very expensive megapixels.

      HH
  • by Kaa (21510) on Wednesday January 08, 2003 @02:20PM (#5041249) Homepage
    First of all, Foveon sensors do NOT see the world like a human eye does. This is obvious to anyone who has even the slightest idea about the anatomy/neurology of the human eye, but of course that automatically excludes marketers...

    Second, there is an active discussion of Foveon advantages/disadvantages on sites like www.dpreview.com, and the general consensus seems to be that it's a promising technology, but needs more work. Yes, it's good in some areas, but the current implementation is lacking in others.

    Third, a sensor is not the only important part in digital photography. Basically, the advantage of Foveon is that its images do not suffer from certain artifacts that conventional Bayer sensors have to deal with. That's not such a huge deal.

    All in all, a Foveon sensor is technically better, but that doesn't necessarily mean it'll be more successful in the marketplace... So far it's only available on a Sigma platform and no serious photographer is interested in building his photo system out of Sigma cameras and Sigma optics.
    • Here's [tedmontgomery.com] a good summary of the anatomy, physiology, and pathology of the human eye. Nature's machines are much cooler and much more advanced than ours are, but she's been at it a bit longer...
    • no serious photographer is interested in building his photo system out of Sigma cameras and Sigma optics.

      Not to mention that the SD9 only shoots RAW files, which you MUST "develop" using the Sigma-provided software in order to convert them to a usable format (i.e. JPEG, TIFF, ...). Not only is the software proprietary, but you can't get it if you don't own the camera. It's also awfully slow (a minute or so per image), and the batch mode sucks: the exposure of individual images will be set to the exposure of the first frame!

      For the moment, I say no thanks. Great sensor and promising technology but let's give it a couple of years to mature.

  • So here is how digital cameras currently "see" light (color being different frequencies of light waves):
    The light comes in through the lens.
    The light is filtered and captured by the charge-coupled device (CCD).
    This is where photons are translated to pixels. (Terry Pratchett readers will call this the painting demon.) This is also where all of the non-lens work is done (white balance, compression, color interpretation, sharpness, saturation).
    The resulting data is written to an image file with all sorts of fun EXIF information (image tag info), and
    Voila! A new image is born.
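    The stages above can be sketched as a chain of stand-in functions. Every name here is hypothetical, not a real camera API; the point is only the order of operations between the lens and the image file.

```python
# Schematic sketch of the capture pipeline. Every stage is a trivial
# stand-in (real firmware does far more at each step).
def lens(scene):              return scene                      # focus the light
def ccd_readout(light):       return [v * 0.9 for v in light]   # photons -> counts
def interpolate_color(raw):   return [(v, v, v) for v in raw]   # mosaic -> RGB (toy: grey)
def white_balance(px):        return px
def sharpen_and_saturate(px): return px
def compress(px):             return bytes(int(c) for p in px for c in p)
def write_file(data, exif):   return {"data": data, "exif": exif}

image = write_file(
    compress(sharpen_and_saturate(white_balance(
        interpolate_color(ccd_readout(lens([100, 200])))))),
    exif={"Model": "toy"},
)
print(image["exif"])   # {'Model': 'toy'}
```

    The X3 work discussed in the article replaces only the `ccd_readout` and `interpolate_color` steps of such a chain; everything downstream stays the same.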

    All of this research is going in at the CCD level. I am interested to see how well it compares to the trained photographer's eye's interpretation of color.

    Art=!Elephant Shit.
  • If I look at an object of [presumably] fixed colour, I actually see slightly different colour tints with each eye.

    If it is of any relevance, my irises are also not well defined in colour - my left eye is predominantly green, whilst my right eye is obviously more bluey (but nowhere near as blue as a person with "blue" eyes).

    I can pass all colour blind tests with, er, flying colours.
  • This looks cool, but to really appreciate the difference, we would need new screens to look at these pictures, wouldn't we?

    Can anybody provide some insight about that?

    • No new screens would be needed. This new sensor only affects the way an image is captured, not how it is displayed. Current CCD chips actually use 4 "pixels" to record each pixel of the image. 1 red sensing pixel, 1 blue sensing pixel, and 2 green sensing pixels. It is set up like the following for each pixel the camera records...

      RG
      GB

      The CCD device in a digital camera has one of these set up for every pixel the camera is to capture.

      This new way will allow all 3 colors to be captured on one "pixel" instead of 4, so that will allow much higher resolution pictures to be taken. Hopefully this simplified explanation makes sense, and didn't totally confuse everyone :)
      • Just to be clear... (Score:3, Informative)

        by raygundan (16760)
        The resolution (as determined by number of pixels) will not get better. Manufacturers are currently counting each one-color pixel in the

        RG
        GB

        blocks as one. That block is 4 pixels. Foveon-based cameras would have

        (RGB) (RGB)
        (RGB) (RGB)

        which is still 4 pixels, but gives you more accurate color information at each pixel and reduces moire. So, while there will not be any more pixels per area with Foveon CCDs, the *effective* picture resolution will be much better.

        I wish I had known this before I shopped for digicams-- it feels like false advertising to me, and I learned after I had made my purchase. Manufacturers ought to be required to state "4 single-color Megapixels" or "1 Megapixel effective with color" for 4MP cameras with traditional CCDs.
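        The pixel-counting gripe above can be made concrete with a toy calculation. The 3.4 below is an assumed pixel-location count for a Foveon-style sensor, used only to illustrate how a "10 million photodetectors" figure arises:

```python
def bayer_photosites(marketing_mp):
    """A 'marketing megapixel' on a Bayer sensor is one single-colour
    photosite; each RGGB block of four gives R:G:B in a 1:2:1 ratio."""
    return {"R": marketing_mp / 4, "G": marketing_mp / 2, "B": marketing_mp / 4}

def foveon_photodetectors(pixel_locations_mp):
    """Each Foveon pixel location stacks three detectors."""
    return pixel_locations_mp * 3

print(bayer_photosites(4.0))                 # {'R': 1.0, 'G': 2.0, 'B': 1.0}
print(round(foveon_photodetectors(3.4), 1))  # 10.2
```

        So a "4 megapixel" Bayer camera has four million one-colour measurements, while a stacked sensor measures all three colours at every location.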

  • The Foveon sensor has been much hyped, and due "any time now" for years. Well, it's finally here, being used in a digital SLR by Sigma [dpreview.com]. It does indeed seem to have a lot of potential, but it's not perfect yet. Basically, camera makers need to play with it some more to get their firmware exactly right. Also, the sensor itself isn't as sensitive in low light as current models. But it's competitive already. Future versions should be even more so, but it depends on how much it can be improved at reasonable cost. Only time will tell...
  • http://www.sigma-photo.com/ -- an actual manufacturer.

    This is an incredibly awesome technology, and I wish I could just drop it in my Fuji and go with it rather than having to drop about $3k when the tech makes the rounds to Fuji/Canon/Minolta. This really is what digital photography needs; it's going to be as big a boost to the market as the single-lens motion picture camera or Kodachrome. No more moire, no more "interpolation," no more expensive low-light high-sensitivity CCDs; cameras using this can be cheaper because of it. Fewer jaggies. All the minor stuff that's keeping film aficionados out of the digital age is going to go away.

    Of course, for Joe Q. Megapixel, there's going to be no benefit whatsoever. It's not going to make the digital zoom better or make the software to send 640x480 snapshots of the baby's ass to grandma any easier. And this may be the reason why the biggest names haven't touched this now year-old technology. Or it could be that they're trying to find a way out of licensing it... Fuji'll probably adapt their own kickass "hexagonal" pixel alignment to the idea of single-pixel tech and make a good product that much better.
  • Cheaper (Score:2, Funny)

    by MojoMonkey (444942)
    It'd be cheaper for me to just gain the ability to hold my damn current digital camera still while taking a picture. That would improve the pictures of my pooch 100%.
  • Reviews, etc. (Score:3, Insightful)

    by skatedork (139277) on Wednesday January 08, 2003 @02:31PM (#5041331) Homepage

    A good review is at dpreview.com (skip to conclusion [dpreview.com] if you're in a hurry).

    This technology still has a way to go, but the SD9 certainly is an interesting camera.

    One huge problem is adoption - Sigma makes consumer-grade lenses and cameras, some of which are of poor quality (but quite affordable). For these cameras to be adopted by professionals, Sigma needs to create a camera with Canon or Nikon mounts; furthermore, they need to erase the stigma attached to their equipment by many professional photographers.

    If they were to make a full-frame sensor in a Canon mount that worked better at higher ISOs, this camera would be a huge seller.

  • Astrophotography? (Score:3, Interesting)

    by SoCalChris (573049) on Wednesday January 08, 2003 @02:32PM (#5041338) Journal
    Since this new chip is able to gather more light than traditional CCD chips, I would imagine that there will be some interesting uses for it in astrophotography. Instead of having to use a CCD imager with a 30 minute exposure to get an image, wouldn't you technically be able to get a higher resolution pic with this a lot quicker?

    That's just a thought...
    • Re:Astrophotography? (Score:3, Interesting)

      by tjwhaynes (114792)

      Since this new chip is able to gather more light than traditional CCD chips, I would imagine that there will be some interesting uses for it in astrophotography. Instead of having to use a CCD imager with a 30 minute exposure to get an image, wouldn't you technically be able to get a higher resolution pic with this a lot quicker?

      All the serious astrophotography I've done has been carried out with single-waveband CCDs and filters, rather than colour CCDs, so you would get an equivalent depth of image with the old-style CCDs as with the new X3 sensor for the same exposure time. However, the X3 sensor offers the advantage of capturing three bands simultaneously, though I would want to see the data sheets for each layer's waveband to see whether it could be used for colour measurements. I suspect that if you want more than just a good colour piccy, you are stuck with the R, G, Gb, B, V, etc. filters.

      Cheers,

      Toby Haynes

      P.S. in case you wondered which telescope I used for my astrophotography take a look :-) [pparc.ac.uk]

  • by dave_f1m (602921) on Wednesday January 08, 2003 @02:34PM (#5041354)
    Great, now I can stop scanning in those 21-Mpixel images from film and get a 10-Mpixel digital camera. Since it uses 3 layers, those pixels must count for more than twice as many as from 35mm film. And the dynamic range is surely greater than slide film. Finally, the shadow detail in that otherwise brightly lit scene - which I needed slide film and 48-bit capture for - can be resolved with a 24-bit image! Now I won't need more memory; my files will be 1/4 the size and look just as good!
    And it sees just like we do! Same 3 colors, same intensity relations, all on each pixel! Because everyone knows the human eye has only one kind of sensor in it. It's not like mammal eyes, which have rods and cones.

    Sorry, film will be around a little longer....

    - dave f.
  • From the X3 link

    "A Dramatically Different Design
    The revolutionary design of Foveon X3 image sensors features three layers of photodetectors. The layers are embedded in silicon to take advantage of the fact that red, green and blue light penetrate silicon to different depths--forming the world's first full-color
  • The human eye? (Score:5, Interesting)

    by kaphka (50736) <1nv7b001@sneakemail.com> on Wednesday January 08, 2003 @02:37PM (#5041379)
    I hate to break it to y'all, but in the human eye, each spot in the fovea is occupied by one receptor, which is maximally sensitive at one wavelength -- in other words, it works the way that current digital cameras work. (Random Googled link. [univie.ac.at]) I suppose that if the human eye needed to determine the color of a particular "pixel", it would have to interpolate, just like a CCD camera... but that's a moot point, because that doesn't actually happen in our visual system. (It's much, much more complicated than that.)

    Now, this technology does sound like a great way to increase the resolution of digital cameras, if it's feasible. However, all this "neuromorphic" stuff is pure marketing. (Though I admit that "Foveon" is a clever name.)
  • by Traa (158207) on Wednesday January 08, 2003 @02:45PM (#5041443) Homepage Journal
    As much as Foveon's well hyped and widely advertised (*cough*thanksslashdot*cough*) idea seems to make sense on the surface, their solution is far from perfect.

    To sense an RGB (red, green, blue) pixel one can use a variety of methods. At the center of this technology lies the ability to turn a stream of photons into an electric current. This photodetector is colorblind: it is only capable of measuring the _amount_ of light, not its color. To recognize color, the established method has been to put several photodetectors near each other and put color filters in front of them. The most widely used color filter array is known as the Bayer pattern and consists of 2 green photodetectors (diagonal from each other), a blue, and a red detector in a 2x2 grid. These 2x2 blocks are then repeated over and over to create the full image sensor.
    Specialized software or hardware needs to take these individual red, green, or blue pixels and recreate a single RGB pixel; this technique is known as demosaicing. The major advantage of this method is the simplicity of the photodiode (photodetector). It allows for the creation of very dense image sensors that are now passing the 10-megapixel barrier while keeping the cost down (expect to see 5-megapixel sensors for less than $100 before the end of this year).
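    The 2x2 Bayer layout and the demosaicing step just described can be sketched as follows: a toy per-block reconstruction, not the interpolation algorithm any real camera firmware uses.

```python
import numpy as np

def demosaic_nearest(mosaic):
    """Toy demosaic: treat each 2x2 RGGB block (R top-left, G on the
    diagonal, B bottom-right) as one colour estimate and paint it back
    over all four pixel positions."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    for by in range(0, h, 2):
        for bx in range(0, w, 2):
            block = mosaic[by:by+2, bx:bx+2]
            # R measured once, G twice (averaged), B once per block.
            est = [block[0, 0], (block[0, 1] + block[1, 0]) / 2, block[1, 1]]
            for c in range(3):
                rgb[by:by+2, bx:bx+2, c] = est[c]
    return rgb

# Uniform grey light: every photosite reads 100, so the guess is exact.
# Scenes with detail finer than the 2x2 blocks are where the inferred
# two-thirds of the data shows up as moire and blurring.
flat = np.full((4, 4), 100.0)
print(demosaic_nearest(flat)[0, 0])   # [100. 100. 100.]
```

    A real demosaicer interpolates per pixel rather than per block, but the information deficit is the same: each photosite measured one channel and the other two are inferred from neighbours.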

    Foveon's approach is to layer these color filters vertically.

    The good:
    - ideally you get R, G, B at each pixel.

    The bad:
    - very complex layered photodiode technology, which makes the pixels significantly bigger. Currently the pixels are bigger than a 2x2 Bayer image pixel. The complexity also adds to the manufacturing cost; these chips will not be cheap for the foreseeable future.
    - color bleeding. For example: photons at green wavelengths do not necessarily stop in the green layer, but might be picked up by the underlying red layer. This means that specialized hardware needs to apply a non-trivial color correction for each pixel layer.

    Foveon's idea is a very interesting approach. Since they have nicely patented their idea shut, we will have to patiently wait for this single company to provide the world with this technology.

    Side fact: the human eye sees colors using pigments that respond differently to different wavelengths. In the simplest model we can say that we see red, green, and blue with spatially separated pigments that resemble a Bayer image sensor more closely than Foveon's sensor.
    • Specialized software or hardware needs to take these individual Red, Green or Blue pixels and recreate a single RGB pixel, this technique is known as demosaicing.

      Wrong. Said software or hardware takes two green pixels, a red pixel, and a blue pixel and recreates four RGB pixels. It conjures two-thirds of its information out of thin air. (I've written software [duke.edu] to do this for the Color Quickcam.) The two worst effects of this hack are color moire and blurring. Color moire is when detailed B&W objects (detail above the Nyquist frequency) get colorful edges. Blurring is the loss of detail that occurs when cameras use an anti-alias filter to reduce color moire.

      dpreview.com has an excellent review [dpreview.com] of the Sigma SD9 in which they examine the pros and cons of the Foveon image sensor. It really does eliminate both color moire and blurring, but there are new artifacts to be fixed.

  • Quantum efficiency (Score:2, Interesting)

    by suitti (447395)
    Continuous tone per pixel is good. But one nice thing about CCDs is their high quantum efficiency. This helps in low-light conditions and with fast action. As I understand it, CMOS detectors aren't as good. But with three layers to draw on, it may be improved. Is it?
  • by the eric conspiracy (20178) on Wednesday January 08, 2003 @02:46PM (#5041449)
    There is no way this camera sees like the human eye - this sensor arrangement is completely different from the rod/cone structure of the human eye. A conventional digicam is actually closer than this is.

    As far as this camera comparing to film - more baloney. A good 35mm camera on a tripod is capable of somewhere around 11-14 megapixels' worth of resolution in conventional digital camera terms. This particular sensor does not deliver resolution in that ballpark.

  • You didn't just have an earlier story about Foveon; you had an earlier story about exactly this sensor. Geeze, at least wait for a new development before posting an article.
  • by TygerFish (176957)
    I don't know about advertisers' claims and, frankly Scarlett... What I see when I look at these pictures is a camera that takes some very good pictures. True, it could probably use a better *something*, but even at high resolutions the images I saw seemed pretty good for the most part.

    Sure, Sigma is not stellar quality, but those images were very vibrant.
  • by mnmn (145599) on Wednesday January 08, 2003 @03:04PM (#5041610) Homepage


    Too much hype. All they did was stack pixel detectors rather than mosaic them. The mosaic was simpler and is now cheaper; this thing costs $1800 in a camera, or else I'm sure someone would have come up with it sooner. The real accomplishment is creating those silicon layers precisely, not coming up with "let's stack 'em".

    They say the resolution is like 120mm film, and the color latitude is big. So are the CMOS sensors in Canon's and Nikon's cameras. Check out the awesome photos on photo.net [photo.net]. A lot of those were shot by modern digital cameras with CCDs, and they don't look bad. Mead has his own marketing to do to try to take Foveon to Intel and Microsoft's level, so he has to push down CCDs. There's a reason people are buying digital cameras with sensors smaller than fingernails and submitting their pictures to professional photography sites. I think Mead has work to do.
  • by ChaosDiscord (4913) on Wednesday January 08, 2003 @03:16PM (#5041717) Homepage Journal

    It's a neat technique for increasing resolution, but the article's implication that you need this technique to improve resolution is silly. Effectively, each grouping of red, green, and blue sensing points in a CCD camera returns a single pixel. If you replace each red sensor with three smaller sensors (one red, one green, and one blue), you'll get the same increase in resolution. In theory you could lose data because a little bit of blue light hit the red sensor but not the blue one, but in practice it isn't an issue. Assuming you can keep making the sensors smaller, you can keep scaling the resolution of CCD technology.

    This is neat technology and may well improve the quality of cameras to come. But it's not essential to improving the quality of cameras.

  • Hype, hype, hype... (Score:3, Informative)

    by AyeRoxor! (471669) on Wednesday January 08, 2003 @03:48PM (#5042035) Journal
    This is amazing technology, and it will revolutionize digital cameras if/when it comes down in price. HOWEVER, this is not how the human optic system works. Even in our optics, we have separate receptors for red, green, and blue, and our brains do the interpolating. As most will remember from basic elementary biology, our eyes detect light through rods and cones. All quotes are from this [cs.tcd.ie] link. "The retina has ~126 million photo receptors, 120 million rods and 6 million cones." Rods gather any light they can and compile the data together to show the best possible image in the dimmest light; therefore, rods deliver a black-and-white image. This is why the darker it gets, the harder it is to differentiate yellow from white: you are depending more and more on the rods.

    HERE is where it gets interesting, and where I get to my point. Cones are what we use to see color. An individual cone cannot see red, green, and blue, as this marketing hype would lead us to believe. "The cones come in three types: Red (60%), Green (30%) and Blue (10%). The red and green cones are randomly distributed in the center of the fovea and the blue cones form an annulus around the outside." So in effect, this camera will actually surpass the human eye.

    As a side note, the link goes to a very interesting document that states how "126 million photoreceptors must be transmitted to the brain via 1 million fibers in the optic nerve [while] [t]he overall compression ratio of 126:1 is not evenly distributed." Check it out.
  • Achilles' heel (Score:5, Interesting)

    by Steve525 (236741) on Wednesday January 08, 2003 @04:48PM (#5042629)
    I just finished reading the review at dpreview. (Thanks to all the people who posted the link). There may be a serious issue with this technology. In the review they mention "color clipping". Once one of the color channels reaches saturation, all color information is lost. This may be inherent in the X3 design.

    The detector works by the difference in absorption of the colors of light. The first layer sees a lot of blue, with some green and red. The next layer sees a lot of green, with some red and a little blue. The last layer sees a lot of red, with only a little blue and green. What this means is that in order to determine the true colors, the reverse of this process needs to be calculated. However, if any of the detectors saturates (and the first is the most likely one), there probably is no accurate way to do this reversal. Currently, it looks like the camera makes these pixels grey, which looks awful. They will need to come up with a better way of estimating the color of these pixels if this technology is to work well, and I have no idea if that's possible.

    Note that a standard CCD with separate pixels can also have one of its channels saturate. In this case, however, the pixel will simply become whiter than it should, which looks natural.
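    That reversal, and how clipping breaks it, can be illustrated with a made-up 3x3 mixing matrix. The real X3 layer responses are not public here, so every number below is purely illustrative.

```python
import numpy as np

# Hypothetical layer responses: rows are the top/middle/bottom layer
# readings, columns are the true blue/green/red content of the light.
# Each layer sees "a lot" of its own colour plus some of the others.
MIX = np.array([[0.80, 0.30, 0.20],   # top layer: mostly blue
                [0.15, 0.60, 0.30],   # middle layer: mostly green
                [0.05, 0.10, 0.50]])  # bottom layer: mostly red

FULL_WELL = 1.0  # reading at which a layer saturates (clips)

def capture(true_bgr):
    """Simulate the three stacked readings, clipping at saturation."""
    return np.minimum(MIX @ np.asarray(true_bgr), FULL_WELL)

def reconstruct(readings):
    """Reverse the mixing; only valid if no layer clipped."""
    return np.linalg.solve(MIX, readings)

scene = np.array([0.5, 0.4, 0.3])     # moderate light: round-trips exactly
print(reconstruct(capture(scene)))

bright = np.array([1.5, 0.4, 0.3])    # enough blue to clip the top layer
print(reconstruct(capture(bright)))   # wrong in every channel, not just blue
```

    Once one reading is pinned at FULL_WELL, the linear system being solved no longer corresponds to the light that actually arrived, so all three recovered channels shift, which is consistent with the grey patches described above.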
