
Samsung Wants To Rival the Human Eye With 600MP Camera Sensors (androidcentral.com)

Babu Mohan writes via Android Central: In an editorial published on the company's website, Yongin Park, who heads the Sensor Business Team at Samsung's LSI division, has revealed that his team is working on a camera sensor that will be able to capture more detail than the human eye. As noted in the article, the human eye is said to match a resolution of around 500 megapixels. Samsung, however, is working on bringing a 600MP camera sensor to the market, which could be used in various fields such as smartphones, autonomous vehicles, drones, and IoT. As you would expect, however, it will take a long time for the company to actually launch a camera sensor with such a high resolution.

A 600MP sensor would be massive in size, making it nearly impossible to fit inside a modern smartphone. In order to shrink the sensor, Samsung will have to reduce the pixel size, which would require the use of pixel binning tech to ensure the smaller pixels don't result in dull pictures. Samsung's 108MP ISOCELL Bright HM1 sensor uses its proprietary 'Nonacell technology,' which boasts a 3x3 pixel structure. This allows nine 0.8um pixels to function as one large 2.4um pixel to deliver impressive low-light performance.
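Roughly, the resolution math of 3x3 binning looks like this. A minimal numpy sketch, assuming plain averaging; the real Nonacell readout is proprietary, happens in the analog domain, and works around the color filter array:

import numpy as np

# Average each 3x3 block of photosites into one output pixel.
def bin_3x3(raw):
    h, w = raw.shape
    assert h % 3 == 0 and w % 3 == 0
    return raw.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

# Scaled-down stand-in for a 12000x9000 (108 MP) mosaic; Poisson noise
# mimics photon shot noise. Averaging nine samples cuts the noise by 3x,
# which is where the low-light benefit comes from.
raw = np.random.poisson(lam=5.0, size=(1200, 900)).astype(np.float64)
binned = bin_3x3(raw)
print(raw.size / 1e6, "MP ->", binned.size / 1e6, "MP")  # 1.08 MP -> 0.12 MP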

  • Gut reaction is, "you can't beat physics! It's all about light gathering and sensor size!!"

    But the fact is, cellphone cameras have improved by an amazing amount over the last 10 or 15 years.

    • Gut reaction is, "you can't beat physics! It's all about light gathering and sensor size!!"

      But the fact is, cellphone cameras have improved by an amazing amount over the last 10 or 15 years.

      Yes, cell phone cameras have improved quite a bit, but the biggest advance has been photo processing. Don't let anyone fool you: it's still about light gathering and sensor size, because physics sets the limits, not marketing. There is only so much detail you can get out of a small sensor; beyond that you get a lot of noise and/or extrapolated data.

    • This is undoubtedly true. You can accomplish a great deal by simply using finer pixels and more sensitive detectors. It will not _match_ the behavior of human vision, which has excellent edge detection that is key to its features, and to many optical illusions. The very effective color detection in different lighting and different backgrounds amazes me, especially as I get older and have lost some visual acuity.

    • Yeah, but only because they've gone from crap to slightly less crap. Admittedly I'm biased as a DSLR user, but for shooting human subjects I can immediately spot a cellphone sensor shot (at 100% crop), and it's painful to see a carefully set-up subject captured in such a bad-quality image. I'd say the best cellphone sensors today are where cheapish digital cameras were about 15 years ago in terms of the image quality they produce.
  • by Waffle Iron ( 339739 ) on Tuesday April 21, 2020 @09:43PM (#59974648)

    The human eye cheats on the specs. That "500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy. For example, right now I can't even decipher the third word to the left of my cursor if I keep looking at the cursor. The brain compensates for this by constantly shifting the gaze to the "most important" part of the current view.

    Since the camera can't do that, it's got to record everything just in case. So it needs lots more "pixels" than people get by with.

    • So, just random up an algorithm that determines the most important pixels in a field of view, and you're all set.

      • So, just random up an algorithm that determines the most important pixels in a field of view, and you're all set.

        For the typical Instagram "star" you don't even need an algorithm; there are only two areas you need to focus on.

      • by Kjella ( 173770 )

        The human eye bounces around a lot when we're studying something. Even in video you're better off using focus and motion blur rather than dynamic resolution.

    • "500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy.

      So you're saying that 640K (pixels) ought to be enough for everyone?

      Joke, it's a joke. That being said, there's a good animal documentary that visualizes what different animals see -- it's not the "dot-focus" that we see, or perfect vision everywhere all of the time. What a surprise: they tend to have better vision and motion detection in the general area of where their food appears. Outside that area, not so much. (That might explain why I can find the fridge so easily without my glasses.)

    • That's a good point. Also, that number is based on detecting the presence of a flashing light, not distinguishing detail.

      Think of how close you have to get to your screen in order to see the individual red, blue, and green sub-pixels. Your vision is filled with only a few thousand pixels by the time you can distinguish one from another, not even 1MP.

      When distinguishing a pattern of black and white lines vs an even gray color, 20/20 vision is something like 30 lines per degree in the high-res region, so 6,0
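      A rough back-of-envelope in that spirit (the field-of-view figures below are my own assumptions, not from the comment):

      # ~30 line pairs per degree for 20/20 acuity means ~60 samples per
      # degree by Nyquist. Apply that (unrealistically) across an assumed
      # 120 x 90 degree visual field:
      samples_per_deg = 60
      pixels = (samples_per_deg * 120) * (samples_per_deg * 90)
      print(pixels / 1e6, "MP")  # ~38.9 MP -- nowhere near 500 MP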

      • I'd counter by saying that the human eye is great at detecting distant details, better than any camera except huge zoom cameras, which mitigate that problem at the cost of a very narrow FoV.

        Example: if you look at the full moon with your naked eye, provided you have good vision in the first place, you will see its features (darker spots, whiter spots). If you take your phone out and try to take a picture of said full moon, you will get shit, even at maximum zoom.
        Same with birds. Look at a bird

        • Most camera manufacturers will be only too glad to sell you a camera with the focusing mechanisms, and the supertelephoto lens, necessary to capture birds in flight. It's a very expensive hobby.

        • by pjt33 ( 739471 )

          Do phones actually do zoom, or just crop? And even if they do do zoom, there's just not the physical space there for a focal length of more than a few millimetres, so of course they have a wider FoV than the eye.

          I think that what the moon example really illustrates is that the eye has good dynamic range. Sensors are getting better at that.

        • by ceoyoyo ( 59147 )

          As a previous commenter said, your eye cheats. It works a bit like a dual lens system, with a telephoto (high resolution / low FOV) lens and wide angle (low resolution / high FOV) lens acting together. Your brain sweeps that high resolution spot around and just makes up the rest, giving you the illusion of high resolution vision.

          • I am not sure how my eyes can "make up" moon spots which I then compare to images of the moon, and the same spots show up.

            • by ceoyoyo ( 59147 )

              That's not surprising. The illusions your brain creates are very convincing. You realize you have a big blind spot almost in the middle of your visual field, right? And yet you've never seen it, and can only detect it through tricks.

              Try an experiment. Look at the moon and sketch it. After you're happy with your sketch, wait a day or two, THEN compare it to an actual photo. How similar is it? All those little details true to life? Now try sketching the star field around the moon, without taking your focus off

      • by Bengie ( 1121981 )
        We don't see in pixels, and we don't see uniformly; our vision amplifies abnormalities and mutes the mundane. Then someone comes along and puts a few lines down, asks people to distinguish them, and calls it a day. Probably the same person who figured 24 Hz is all we can see, yet professional gamers notice stuttering when their GPU drops from 244 fps to 230 fps on a 244 Hz monitor with G-Sync.

        I was reading a GPU blog, I think from AMD, and they were talking about VR. They said that the interesting thing about vision
        • They also notice stuttering when you "drop it down" from 200 to 244, as long as you tell them you reduced it. Just saying.

    • by gTsiros ( 205624 )

      this should be a fucking GOLDMINE for 3d computer graphics

      imagine you only had to render a tiny portion of the scene at nominal resolution and the rest at 1/10 resolution

      holy shit this changes fucking EVERYTHING. Only problem is we need a very fast way to determine where my eyeballs are actually looking

      • It's in the works for VR: https://www.cnet.com/news/eye-... [cnet.com]

        There's also a truly practical reason for eye tracking: it helps intelligently reduce graphics processing to only your fovea (called foveated rendering), creating an experience that looks the same, but taxes the processor a lot less. That means only the parts of the image you're looking directly at get the fullest resolution and image quality, everything else can save GPU cycles.
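        As a toy sketch of the idea (gaze coordinates, window size, and the 10x downscale factor here are made up, not from any real headset SDK):

        import numpy as np

        # Toy foveated "render": full resolution only in a window around the
        # gaze point; everything else at 1/10 scale, then upsampled. A real
        # renderer uses variable-rate shading, not post-hoc resizing.
        def foveate(frame, gaze_xy, radius=128):
            h, w = frame.shape
            out = frame[::10, ::10].repeat(10, axis=0).repeat(10, axis=1)[:h, :w]
            x, y = gaze_xy
            y0, y1 = max(0, y - radius), min(h, y + radius)
            x0, x1 = max(0, x - radius), min(w, x + radius)
            out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]  # paste the sharp fovea
            return out

        frame = np.random.rand(1080, 1920)
        print(foveate(frame, gaze_xy=(960, 540)).shape)  # (1080, 1920)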

        Further down the road, there's the possibility of resolving the "vergence-accommodation conflict". VR headsets currently have a fixed focal depth, which conflicts with the stereoscopic depth effect that VR uses. This is one major reason for the headaches people get from VR. If you can see what each eye is looking at separately, you can work out the distance they are focusing at, and adjust the focal depth of the headset accordingly. https://www.foun [foundry.com]

    • The human eye cheats on the specs. That "500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy. For example, right now I can't even decipher the third word to the left of my cursor if I keep looking at the cursor. The brain compensates for this by constantly shifting the gaze to the "most important" part of the current view.

      Since the camera can't do that, it's got to record everything just in case. So it needs lots more "pixels" than people get by with.

      It's worse than that. The actual "resolution" of the eye isn't 500 megapixels at all, but rather a significantly lower resolution combined with a very powerful AI (well, minus the A) that takes the realtime video stream and infers a high-resolution image from the result, which is stored in temporary memory.

      Our eyes aren't that good. We just remember the awesome heavily processed image as being that good.

      Ironically this would have been Samsung's opportunity to use the marketing term "AI" for something relevant.

      • by ceoyoyo ( 59147 )

        It's even worse than that. The moderate resolution of the very small FOV fovea and the low resolution of the rest of the retina are processed within an inch of their life, then squeezed through the low-bandwidth optic nerve. Then it's processed well beyond an inch of its life, with the brain basically making up most of what you think you see.

        The bandwidth of the optic nerve is estimated at between 6 and 10 megabits per second.
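        For scale, a back-of-envelope of my own: even a modest uncompressed video stream dwarfs that.

        # Uncompressed 1 MP video at 30 fps, 24 bits per pixel:
        bps = 1_000_000 * 30 * 24
        print(bps / 1e6, "Mbit/s")  # 720 Mbit/s, vs ~6-10 Mbit/s down the optic nerve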

    • It is not just pixel counts. It's the dynamic range too: the minimum light needed to trigger a minimal response on the sensor, up to the level at which the output saturates. That range is six orders of magnitude for the retina and about three orders for a CCD. Till they tweak that somehow, it is going to be difficult. HDR software helps some, but still needs improvement. Some of the night shots from the lowly cellphone camera are stunning, though, and there is progress here. But 600 MP = retina is an oversimplification.
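      In photographic stops (factors of two), those rough figures work out to:

      import math

      # ~10^6 usable contrast ratio for the retina vs ~10^3 for a sensor:
      for name, ratio in [("retina", 1e6), ("CCD", 1e3)]:
          print(name, round(math.log2(ratio), 1), "stops")  # ~19.9 vs ~10.0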
    • ...the fovea, which is about 0.01% of the total field of view.

      No, you're off by at least a factor of 100.

      • The fovea is about 1 degree wide, out of about 100 degrees total. That's 1/100 in each linear direction; squaring that gives 1/10,000 total view area, or 0.01%.

  • by Anonymous Coward
    Seems that /. editors lost the ability to edit long, long ago...

    Samsung wants to rival the human eye with 600MP camera sensors [androidcentral.com]

  • And then those sensors can be used in the digital SLR's that take the pictures for cell phone companies like Huawei to use in their ads!

  • Making it smaller will only mean FAKE pixels. Pixels that technically exist but practically are too small to catch a notable amount of photons.

    Nobody's got a problem with large surface area. The eye isn't pinkie fingernail sized either.

    But hey, I know: nowadays it is always *imperative* to choose the dumbest, most appearance-focused option physically possible, and to shun and laugh at the sane option.
    Case in point: real keyboards versus touchscreen virtual "keyboards" with comically too-small "keys" with

  • Maetel: Look well Tetsuro, look well. The next time you see the Earth, it will be with machine eyes.

  • It is Samsung's in-house software thing, like Google Play Store, and Samsung's Galaxy Store looks like a kid's candy shop. It's like the entire purpose of Samsung is to market to kids and teenage girls; I can't take Samsung seriously anymore. Half the time you open Galaxy Store it has popups for video games. To me a phone is a tool, not a toy, but if Samsung wants to turn everyone's phones into toys then I am switching brands. Samsung builds great hardware, but the software side is sickening and pathetic. I wou
    • to me a phone is a tool, not a toy

      A phone is a general purpose computer. I've been using computers as tools and toys for almost as long as I've had access to them. Phones are no exception, and I don't really see why they should be.

      I would rather have just a stock barebones Android with only operating system updates coming from Samsung, and let me decide which applications to run on my phone. I wish somebody would step up and build smartphones for people that see their phones as an essential tool and not

    • a phone is a tool, not a toy

      I have lots of actual tools, for cutting metal and shaping wood. They're fun to use.

      More seriously, it's not that these things are marketed as toys. Phones have become tied into youth fashion. Part of what sells phones is how young people feel their peers will perceive them. Which I think is why grumpy grown ass men don't understand why phones aren't just phones.

      if samsung wants to turn everyone's phones into toys then i am switching brands

      If phones are not useful to you, and you are not entertained by them, then yes you shouldn't buy one. It seems clear that Samsung isn't targeting y

  • If it's so huge that it's nearly impossible to fit it into a phone, then I'd say it's pretty small: https://www.bing.com/images/se... [bing.com]
  • Human eyes aren't actually that good, and a large part of our visual perception is due to the heavy processing of the realtime feed they give to the brain.

    How Samsung's marketing department missed the opportunity to talk about an "AI assisted" sensor is beyond me, given this is one of the few times such marketing actually makes sense.

  • Megapixels, yay!

    What about dynamic range? Try capturing the full moon with the clouds it lights... no can do with any standard camera, at least not if you don't use tricks like HDR.

    What about noise? Sensitivity? Etc. There are many factors to a camera sensor other than pixels.

  • Biggest sensor ... really ... nope :

    https://icdn7.digitaltrends.co... [digitaltrends.com]

  • "600mp"

    This is stupid given it's put through compression and filters to remove wrinkles and fine details. Who cares how accurate the outline of a human body is when the skin is replaced by a flesh-covered smooth balloon?

    Dilbert: "Yes, but it's theoretically nice."

  • "Pixel binning" is a term for smartphone cameras, where multiple pixels are used to create a single pixel in the actual image file. Even if a sensor has "108 megapixels" on paper, it combines 33 samples to make one actual pixel in the final image. A "108 MP" camera produces 12 megapixel images! And each of those "pixels" may even not be a full RGB pixel but only a monochrome pixel behind a Bayer filter [wikipedia.org], sampling only one component of Red, Green or Blue.

    It is as if you would measure the size of your penis by

    • If the industry all uses the same metric and we understand what that means, then I don't see the problem. It's sort of like arguing that Fahrenheit inflates the numbers versus Celsius because the numbers are bigger. Even though you can convert between them with a bit of arithmetic as you managed with your camera megapixels example.

      It is as if you would measure the size of your penis by coiling the measuring tape in a spiral around it: technically you would be measuring from root to tip, but the result wouldn't be exactly comparable, would it!?

      It is more like you're measuring in centimetres and me assuming you're reporting in inches.

      It is just as when smartphone processors are touted as "octa-core" when the arrangement is 2+6, and the two fast cores can't run at the same time as any of the six slow ones.

      SMP(symmetric multiprocessing) wasn't the original multiprocessor configuration for comput

  • by Pyramid ( 57001 ) on Wednesday April 22, 2020 @03:16PM (#59977340)

    You need optics that can take advantage of the sensor's resolution. No cellphone is ever going to have that.
