Graphene-Based Image Sensor To Enhance Low-Light Photography

cylonlover writes "A team of scientists at Nanyang Technological University (NTU) in Singapore has developed a new image sensor from graphene that promises to improve the quality of images captured in low light conditions. In tests, it has proved to be 1,000 times more sensitive to light than existing complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) camera sensors in addition to operating at much lower voltages, consequently using 10 times less energy."
  • by Anonymous Coward on Sunday June 02, 2013 @03:32AM (#43887455)

    There was this article on Slashdot 4 years ago: http://science.slashdot.org/story/09/07/23/1819215/people-emit-visible-light

    Summary:

    "The human body literally glows, emitting a visible light in extremely small quantities at levels that rise and fall with the day, scientists now reveal. Japanese researchers have shown that the body emits visible light, 1,000 times less intense than the levels to which our naked eyes are sensitive. In fact, virtually all living creatures emit very weak light, which is thought to be a byproduct of biochemical reactions involving free radicals."

    So humans emit light that is 1,000 times too weak for our eyes to detect, but this new sensor is 1,000 times more sensitive to light. What a coincidence! I imagine this would have great applications in the health industry, e.g. passive health assessment. Or another use might be a better lie detector :)

    • 1000 times better than CMOS and CCD technology. Doesn't mean it's better than the human eye.

      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday June 02, 2013 @08:10AM (#43888121) Homepage Journal

        It might not be better than the human eye, which can detect single photons, but it might be better than the human eye plus the human brain, which tends to ignore such stimuli.

        • It doesn't matter all that much that human eyes can detect single photons; since the photoelectric effect is a quantum process, it's not difficult to do that. What matters is the percentage of incident photons that you can successfully detect.
          • by chihowa ( 366380 )

            This is typically described as quantum efficiency (QE) and is a measure of (detected photons)/(incident photons). Decent scientific CCDs have a QE above 95% across much of the visible-NIR spectrum. According to this paper [sciencedirect.com], the human eye has a QE of around 1% in low light conditions. Overall, it looks as though the eye is a pretty lossy system.
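
            To make those QE figures concrete, here's a minimal sketch; the 95% and 1% values are the ones quoted above, and the photon count is an arbitrary illustration:

```python
# Quantum efficiency (QE) = detected photons / incident photons.
incident = 10_000        # arbitrary example photon count

ccd_qe = 0.95            # decent scientific CCD (figure quoted above)
eye_qe = 0.01            # human eye in low light, per the cited paper

print(f"CCD detects ~{int(incident * ccd_qe)} of {incident} photons")  # ~9500
print(f"Eye detects ~{int(incident * eye_qe)} of {incident} photons")  # ~100
```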

            • Your CCD numbers seem way too optimistic [hamamatsu.com] to me. Also, I believe that the eye is somewhat more efficient. But it doesn't matter all that much; the fact that with a camera you can integrate the light over a vastly longer period seems much more important to me.
              • by chihowa ( 366380 )

                It looks like I overstated the QE over the entire vis-NIR, but I use an Andor iXon 897 daily and it's over 95% QE [andor.com] in the bit of spectrum I use.

                What's your basis for the eye being more efficient? I'm genuinely interested, as my research is in imaging. The paper I referenced is old, but the methods are sound. I can't imagine them being off by a considerable amount.

                Also, I don't mean to disparage the eye in any sense. The eye is a fantastic piece of machinery. It's actually capable of integrating over differen

        • There is no scientific evidence to support humans being able to use their eyes in such a way. But there is quite some precedent for mystics and auras. There have been a few studies, but they didn't want to say one way or the other: they didn't rule it out as impossible, but they pointed out the improbability of humans being able to see effects like these.

          Then again, there is training which could make some people better than others. But the sample size of people who claim to have this training is abysmal. And MOST are cr

          • by HiThere ( 15173 )

            Remember that you only get the Bene Gesserits and Mentats after the Butlerian Jihad. First you destroy the thinking machines, then you create people as replacements. Also note that the Bene Gesserit skills are due to a *combination* of training and genetics. (Not sure about Mentats, but it's probably true for them also. There's an early comment about Paul being capable of being trained as a Mentat, which clearly implies that most folk weren't.)

            So perhaps this only becomes possible after a bit of genetic

      • 1000 times better than CMOS and CCD technology. Doesn't mean it's better than the human eye.

        Well, considering that even an average CCD chip is several times more sensitive than human eyes under the best conditions, you're wrong.

        • by tibit ( 1762298 )

          Nope. There's not much more sensitivity anything can have compared to a human eye since the human eye can sense about one in then photons hitting the retina, with additional losses due to scattering and reflections in the solid parts of the eye. Basically all any sensor can be is an order of magnitude more sensitive than our retina. That's it. If you actually knew what you're talking about, you'd know that it's a very tall order. Just read about what amateur astronomers have to deal with to image the sky!

          • the human eye can sense about one in then photons hitting the retina

            Did you just say "one in ten" or did something elude me?

            Basically all any sensor can be is an order of magnitude more sensitive than our retina. That's it.

            And I've said "several times", which does not contradict your claim (nor does your claim contradict mine).

            If you actually knew what you're talking about, you'd know that it's a very tall order.

            Yes, I actually know what I'm talking about. I've known about vacuum photomultipliers since I was ten (CCDs were fancy new stuff back then).

          • by drkim ( 1559875 )

            Nope. There's not much more sensitivity anything can have compared to a human eye...

            (Shhhhh. Nobody tell him about night vision optics.)

            • by tibit ( 1762298 )

              An order of magnitude better is not much, in the grander scheme of things. It's basically the minimum improvement needed, in terms of light sensitivity, to even be worth talking about if we're to talk about fundamental improvements. Night vision optics mainly cope with a more mundane sort of problem: you are around light sources and your vision can't ever fully adapt. Also, you haven't got the hour or two sometimes needed to fully adapt. Those mundane sorts of issues reduce the effective sensitivity of your reti

        • Show me a picture in a dark room with a shutter speed of less than 1/20th of a second that is more than just noise and I'll believe you. If I can see something and a camera can't without a noticeably long shutter speed and no flash, then my eyes are more sensitive than the camera.

          • I don't have a CCD camera with a human visual cortex attached to it to process the inputs. How would you go about comparing the two imaging sensors when the raw data gets treated, and later perceived, in such vastly different ways?
      • Re: (Score:1, Informative)

        by femtobyte ( 710429 )

        No, this is not 1000x better than CMOS/CCD; it's 1000x better than previous graphene detectors --- which are far worse in the visible range than CMOS/CCD, but can sense out to the 10um mid-infrared band, which other sensors can't.

        • FTFA

          In tests, it has proved to be 1,000 times more sensitive to light than existing complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) camera sensors in addition to operating at much lower voltages, consequently using 10 times less energy.

          Do you know something the article doesn't?

          • by Anonymous Coward

            Read the original, Really Fine Nature Article, not what the journalist understood of it.

            • by femtobyte ( 710429 ) on Sunday June 02, 2013 @03:12PM (#43890487)

              ^^^ This. The Nature Communications article is very clear, right from the abstract, that this sensor is 1000x more sensitive than previous *graphene* sensors. *Nowhere* in the journal article is the performance compared directly against CCD/CMOS sensors, but it's trivial to tell (from the numbers given) that this sensor isn't remotely "competitive" in the visible light region. Fortunately, that's not the interesting use of the sensor --- the journal article does compare and cite advantages against other infrared sensing technologies. The researchers might have meant to say that these graphene sensors could be useful for cheap, low-power (but not high sensitivity) visible light applications --- not what the journalists have twisted this into.

          • Current silicon devices are pretty close to responding to each incident photon (say about 90%). Improving that by 1000 times is simply impossible; the limit is responding to each photon.
      • by tibit ( 1762298 )

        A darkness-adapted human eye can IIRC detect one in ten photons; that's a pretty tall order for any room-temperature image sensor AFAIK. Please correct me if I mis-remember the figures, though.

    • by OzPeter ( 195038 )

      I imagine this would have great applications in the health industry, e.g. passive health assessment. Or another use might be a better lie detector :)

      So science is going to discover the human "Aura"???

      • by tmosley ( 996283 )
        God. Damn. It.
      • Already did. http://science.slashdot.org/story/09/07/23/1819215/people-emit-visible-light [slashdot.org] lol

        Though the aura in that study is not specifically called an aura, and it's different from the other auras.

        We do also emit other frequencies of radiation, though none that are proven to be detectable by other humans.

      • by HiThere ( 15173 )

        "Aura" is insufficiently well defined to be definitively matched against any possible scientific discovery. It's also so poorly defined that many people match it against "Kirilian Phtography"...but they don't and can't convince others who think it's something else (or that it doesn't exist).

        FWIW, it's so poorly defined that the use of the term "aura" to describe portions of the experience of migraine headaches or epileptic seizures as "aura" isn't definitively a different usage.

  • Real world graphene? (Score:4, Interesting)

    by phizi0n ( 1237812 ) on Sunday June 02, 2013 @03:35AM (#43887461)

    Are there any readily available consumer products, or even industrial products, that use graphene? If not, then how long do we have to keep hearing about how great graphene is before we can actually use it?

  • 1000 times better? (Score:4, Informative)

    by thesupraman ( 179040 ) on Sunday June 02, 2013 @03:44AM (#43887477)

    They claim 1000 times better sensitivity than CMOS, which people seem to be swallowing hook, line, and sinker. However, since there are plenty of current CMOS sensors with a quantum efficiency (QE) of 60% to 80% for visible light, how exactly will they convert 1000 times more efficiently than that? 1000 times less loss would take them from 80% to 99.98%, and that's only actually about 20 percentage points better...

    I would imagine they are measuring at an extreme wavelength that existing CMOS sensors do not target, which is hardly an advantage for the applications being discussed in the article (normal cameras).

    Even quite boring consumer cameras have a QE of 20% to 40%.
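
    To make that arithmetic concrete, here's a minimal sketch, assuming "1000 times more sensitive" is read as "1000 times less photon loss" (the most charitable reading, and an assumption, not something the article states):

```python
# Reading "1000x more sensitive" as "1000x less photon loss".
qe = 0.80                    # decent current CMOS sensor
loss = 1.0 - qe              # 20% of photons missed
new_qe = 1.0 - loss / 1000   # loss cut by 1000x

print(f"QE: {qe:.0%} -> {new_qe:.2%}")                # 80% -> 99.98%
print(f"Extra photons detected: {new_qe / qe:.2f}x")  # only ~1.25x
```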

    • by imgod2u ( 812837 ) on Sunday June 02, 2013 @03:58AM (#43887517) Homepage

      Exposure is exponential as well. So a camera with 2x exposure goes from 80% QE to 90% QE for example. The next 2x will get you to 95.

      That may not seem like much, but keep in mind that vision itself is logarithmic. So going from 98% to 99% QE gets you dramatically better results than going from, say, 40% to 41%.

      • by Bender_ ( 179208 ) on Sunday June 02, 2013 @04:24AM (#43887553) Journal

        Some people do not seem to understand the term "quantum efficiency" (QE).

        The quantum efficiency measures the fraction of photons that are actually detected by the camera.
        An external quantum efficiency of 50% means that 50% of all incident photons are converted into electron-hole pairs and can be detected.
        There are, however, loss mechanisms that prevent all e-h pairs from being collected. But this is not off by a factor of 1000x from the theoretical limit.

        As already stated by the original poster, this figure is probably for some other wavelengths, like far infrared, where silicon is "blind" due to its band gap.
        Since humans are blind to these wavelengths as well, the relevance for cameras is questionable.

        • astrophotography and radio telescopes.
        • by thegarbz ( 1787294 ) on Sunday June 02, 2013 @06:28AM (#43887843)

          This figure is probably for some other wavelengths, like far infrared, where silicon is "blind" due to its band gap.
          Since humans are blind to these wavelengths as well, the relevance for cameras is questionable.

          From TFA: "The new sensor is able to detect broad spectrum light, from the visible to mid-infrared, with great sensitivity. This will make it ideal for use in all types of cameras, including infrared cameras, traffic safety cameras, satellite imaging, and more."

          Certainly doesn't sound too different from CMOS-based applications, though they do mention mid-IR, and most CMOS sensors drop off towards the end of the near-IR spectrum.

        • I would say that since humans are blind to these wavelengths, it would be fascinating. It's a weird transformation of what we can't see into a pattern of colored pixels we can see. And I thought light photography in the dark was fun...

      • by ssam ( 2723487 )

        I don't think that is right.

        Going from 80% QE to 90% QE (assuming that charge collection is near perfect, and does not change) means you detect about 12% more photons, so you can use a 12% faster shutter speed, or work with 12% less light.

        Exposure is logarithmic, so a 12% improvement is pretty much negligible. You need to double your sensitivity to claim a "1 stop" improvement.
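
        A quick sketch of that stop arithmetic (photographic stops are base-2 logarithmic in sensitivity):

```python
import math

def stops_gained(old_qe: float, new_qe: float) -> float:
    """Photographic stops gained from a QE improvement (1 stop = 2x light)."""
    return math.log2(new_qe / old_qe)

print(f"{stops_gained(0.80, 0.90):.2f} stops")  # ~0.17: pretty much negligible
print(f"{stops_gained(0.50, 1.00):.2f} stops")  # 1.00: doubling = one stop
```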

    • by Anonymous Coward

      Ah, thanks to your comment; I couldn't figure out how they define a 1000-fold improvement. Although I'm not sure that's how they define the improvement. Actually, normal CCDs might have a QE of a few tens of percent, but the rest of the electronics must still be able to detect that electron with a signal-to-noise ratio of more than one. In practice, the readout noise is such that you need several tens of electrons. In single-photon CCDs this is done by multiplying the number of carriers using impact ionisation in the CCD. Thes

    • by asvravi ( 1236558 ) on Sunday June 02, 2013 @06:25AM (#43887839)
    First off, if we cut through the usual dismal quality of scientific reporting, what they made is a photodetector, not an image sensor. It detects single events rather than capturing an image. The sensitivity of the detector is not the same as quantum efficiency: the sensitivity they mention here includes a "photogain", by virtue of the detector operating more or less as a light-controlled amplifier. It takes electrical input energy and simply amplifies it based on incident light. That can create a flow of many more electrons than incident photons. The same thing could possibly be done by introducing a gain in conventional image sensor electronics too, but having this photogain right inside the sensor should theoretically lead to better noise performance. So we would expect the paper to quantify the noise characteristics, but it is woefully sparse on the noise details, which leads me to suspect this is yet another "invention" that is never going to see the light of day.
    • Kind sir, but really, we are talking about graphene here. That miracle of all miracles, that elixir for the gods. If anything could pull 1000 times better out of a 20% increase, it would be graphene. Actually, I had some in my breakfast this morning and it was quite tasty as well.

    • There are two issues. You point out correctly that quantum efficiency is one. The other is noise level. Most cameras do not have single-electron noise on the readout; ~20 electrons of readout noise is more typical (I think). So if graphene has higher gain (more electrons per photon), then at VERY low light levels it would give a better signal. At high light levels (once the shot noise on the sensor is above the electronic noise) it would not help. This is why image intensifier cameras are used in some scient

    • Yes, you are absolutely right: QE is already high even in "boring" cameras. It is also true that the readout noise of the modern boring sensors used in cellphone cameras is of the order of a single electron (each photoelectron counts), so you cannot significantly increase low-light sensitivity for the same pixel size and same exposure time. You can make larger pixels (or use "binning" of smaller ones), and that is a well-known part of existing technology. Other parameter important for the low-ligh

    • by suutar ( 1860506 )
      Journalist error. Someone else pointed out in a different thread that the original Nature article is claiming a 1000x sensitivity boost over older graphene detectors; no CMOS mentioned.
  • by stenvar ( 2789879 ) on Sunday June 02, 2013 @03:46AM (#43887483)

    As I recall, quantum efficiency of current sensors is around 50%. I don't see how you can get "1000 times more sensitive".

    • Re: (Score:3, Informative)

      by BronsCon ( 927697 )
      That's 50% of visible light, as in 50% of the minimum level of light in the visible spectrum required to be seen by the naked eye. If this sensor can "see" light that is 1/500th the intensity required to be seen by the naked eye, whereas current sensors can only "see" light that is 2x the intensity required to be seen by the naked eye, then the new sensor is 1000x more sensitive. It's not rocket science; hell, it's not even physics or optical science, just plain ol' algebra.
      • by Osgeld ( 1900440 )

        and thanks to the human eye not a single difference will be noticed

        • by mcgrew ( 92797 ) *

          Your eye/brain combination is far, far better than the best camera ever made. Open a window on a bright sunny day, you can clearly see everything outside and inside. Now take a picture of it. Either nothing inside will show or nothing outside will. It seems that the graphene sensors would greatly improve this.

          • The relative abilities of eye/brain versus camera depend upon the particular quality being measured and the particular camera being used. The quality you mentioned, dynamic range, is a win for the eye, but the results you describe apply to P&S cameras with 8-bit converters, not 14-bit semipro models.
            Cameras are expected to be fairly sharp edge-to-edge. Try to read small text 45 degrees off where your vision is directed and you'll find yourself defeated: not only won't you be able to focus on the lette

      • Quantum efficiency is the percentage of photons that are actually registered by an electronic device, and it has nothing to do with eyes, naked or otherwise. And for photographic applications, all that matters is "visible light". You can't make a sensor that registers more than 100% of all incoming photons.

        In other words, your response is complete nonsense.

        • You can't make a sensor that registers more than 100% of all incoming photons.

          Of course you can. The additional detection events are known as noise.

    • Re: (Score:2, Informative)

      by thegarbz ( 1787294 )

      With some basic maths, that's how. Doubling the sensitivity of a 50% QE sensor means that half of the photons that previously weren't converted now are, i.e. 75% QE. Quadruple the sensitivity and a quarter of the previously missed photons are now converted, i.e. 87.5% QE.

      So from that, if you make the sensor 1000 times more sensitive you go from a QE of 50% to a QE of 99.95%.
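
      Spelled out, that interpretation (cutting the missed fraction by the sensitivity factor, which is one reading of "1000x", not anything the article itself defines) looks like this:

```python
def qe_after_loss_cut(qe: float, k: float) -> float:
    """QE if the fraction of missed photons is reduced by a factor of k."""
    return 1.0 - (1.0 - qe) / k

print(f"{qe_after_loss_cut(0.50, 2):.3%}")     # 75.000%
print(f"{qe_after_loss_cut(0.50, 4):.3%}")     # 87.500%
print(f"{qe_after_loss_cut(0.50, 1000):.3%}")  # 99.950%
```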

      • Yeah, I know basic math too, but you're just guessing. If that's what they mean, it wouldn't make a big difference in low-light photography.

        So the question still is: do they mean something different, or is their work just uninteresting?

        • The way it's written, it doesn't mean much at all. Getting QE up only really allows you to capture more photons, which in itself would be unexciting. What is really critical is instead getting the noise floor down, and the article doesn't supply much information in that regard. THAT would be good. It would be nice to be able to use a CCD that doesn't need to be cooled to low temperatures and read out really slowly, line by line, in the name of reducing noise.

          • The actual Nature Communications article [nature.com] does talk about the noise floor, which is on the order of 1 nanowatt of illumination. That corresponds to ~10^9 visible light photons per second --- easily 10^6 times worse than what your ordinary camera pixels are capable of. Oh, and you need cryogenic cooling to do that well. This graphene sensor is not great for visible light sensing --- what it can do (potentially) better than alternate technologies is sense light all the way from visible to 10um mid-infrared.
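
            The photon-rate conversion behind that estimate, as a minimal sketch assuming ~550 nm (green) light; the 1 nW figure is the noise floor quoted above:

```python
# Convert a ~1 nW noise floor into a photon rate at ~550 nm.
h = 6.626e-34            # Planck constant, J*s
c = 3.0e8                # speed of light, m/s
wavelength = 550e-9      # metres, green light

photon_energy = h * c / wavelength   # ~3.6e-19 J per photon
power = 1e-9                         # 1 nW
print(f"{power / photon_energy:.1e} photons/s")  # ~2.8e9, i.e. ~10^9 per second
```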

            • by suutar ( 1860506 )
              The actual Nature article also isn't comparing to CMOS... it's saying the new graphene sensor is 1000x better than older graphene sensors.
    • All the reporting framing this as a sensor for "1000x better" low-visible-light photography is simply crap from lazy tech journalists who can't be bothered to read the actual journal article. This sensor is fairly lousy in the visible light region --- claimed sensitivity down to the nanowatt level, which is more light than usually falls on your camera pixels (unless you're pointing the lens directly at the sun).
      The 1000x improvement is relative to previous *graphene* detectors, and is a 1000x increase in the amp

  • Here's the actual paper. It's paywalled, though: http://www.nature.com/ncomms/journal/v4/n5/full/ncomms2830.html
  • by Mr. Chow ( 2860963 ) on Sunday June 02, 2013 @04:44AM (#43887609)
    According to the paper, "Through this scheme, we have demonstrated a high photoresponsivity of 8.61A/W, which are about three orders of magnitude higher than those in previous reports from pure monolayer graphene photodetectors.". So it is 1000x better than previous iterations of a particular variety of detector, not the detectors we actually use.
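
    For scale, here's a hedged back-of-the-envelope reading of that 8.61 A/W figure, assuming ~550 nm light (a gain-free detector with perfect QE tops out around 0.44 A/W there):

```python
# What internal gain does 8.61 A/W imply for ~550 nm light?
q = 1.602e-19              # electron charge, C
photon_energy = 3.61e-19   # J per photon at ~550 nm

ideal = q / photon_energy  # ~0.44 A/W: one electron out per photon in
measured = 8.61            # A/W, from the paper quoted above

print(f"Implied gain: ~{measured / ideal:.0f} electrons per photon")  # ~19
```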
  • How do you get a decent image if you have to peer through the sticky tape used to grab the graphene?
  • Graphene, is there anything it can't do?
  • by excelsior_gr ( 969383 ) on Sunday June 02, 2013 @06:52AM (#43887901)

    Amateur photographer here. Does this mean that the camera will just be able to photograph at higher ISOs without noise (or rather, that you could use a lower ISO in darker situations), or that the sensor will be able to record a picture with a wider stop range? Digital cameras have a range of about 6-7 stops, whereas our eyes have a 16-stop range (according to Bryan Peterson). HDR can be used to remedy this, but, more often than not, the pictures seem much too blown-out, saturated and unnatural. Sony has an in-camera HDR function that can be tweaked to keep the color explosion at bay, but it is not exactly it. Being able to take photos in bad light is sweet and all, but it would be much more interesting creatively to have a camera that can picture what I see, without having to set up a whole flash array for lighting up all the dark areas (and having to imagine and troubleshoot, if I have the time, the combination of a flash + natural light exposure).

    So photo-gurus, will this sensor cut it? Are there any products in the market that address the issue described above?

    • by femtobyte ( 710429 ) on Sunday June 02, 2013 @10:35AM (#43888645)

      Despite the poorly written article, this sensor tech is very *insensitive* compared to what you currently have for visible light technology. It's a 1000x improvement compared to previous wide-band graphene detectors, which can sense light from the visible out to 10um mid-infrared (your camera can't do that). So no, this won't help your camera photograph at higher ISOs. And current camera sensors are within spitting distance of the theoretical physical limits on low-light performance: while they've improved tremendously over the past couple of decades, the noisiness of low-light pictures with the best current-generation sensors is close to what you'll always be stuck with --- it's the result of there being a finite number of photons, with sqrt(N) counting-statistics fluctuations, available for even a "perfect" camera to see.
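
      The sqrt(N) counting-statistics limit mentioned above, sketched numerically:

```python
import math

# Shot noise: N collected photons fluctuate by ~sqrt(N), so even a
# "perfect" sensor is limited to SNR = N / sqrt(N) = sqrt(N).
for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,} photons -> best possible SNR = {math.sqrt(n):,.0f}")
# 100 photons cap you at SNR 10, however good the sensor: a grainy image.
```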

    • An ISO level corresponds to a specific light sensitivity level, so no, it won't mean you can use a low ISO in the dark. It would (if the article were true) mean that you could use higher ISOs without getting as much noise.

  • Next thing you know, there's going to be an announcement that the kitchen sink is going to be made out of graphene.
  • Here's the actual Nature Communications article [nature.com], not a mangling by some incompetent tech journalist.

  • What can this do to help us extend the length of fiber optic links, or lower the transmitter power on long runs like overseas cables?

  • Aren't we talking about the Kirlian effect? That's been known for many, and I mean many, years. Some people who believe in psychics would say it is our aura. Anyway, it's nice to see that the realm of the metaphysical is about to cross into "explained" phenomena, hopefully with some actual science to explain what we are really made of.
    • Aren't we talking about the Kirlian effect? That's been known for many, and I mean many, years.

      If by "known" you mean "had a lot of bollocks talked about", then yes.

  • 10 times less than what? Unless you're already comparing two things, that's not really a useful measurement.
    Now if you said A uses 120W, B uses 110W, but C uses 10x less, one could assume that C uses around 20W.

    If we only know about A and C, we could say that C uses only about 17% of the energy of A.
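
    Spelling out the arithmetic in that example (the 20 W figure for C is the comment's own assumption, not derived from anything):

```python
# The ambiguity of "10x less", using the comment's numbers.
a, b, c = 120.0, 110.0, 20.0   # watts; C's 20 W is assumed, not derived

print(f"C uses {c / a:.0%} of A's energy")  # ~17%
print(f"'10x less' than A would be {a / 10:.0f} W; than B, {b / 10:.0f} W")
# Neither is 20 W, which is exactly why "10x less" alone isn't useful.
```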
