Samsung Wants To Rival the Human Eye With 600MP Camera Sensors (androidcentral.com)
Babu Mohan writes via Android Central: In an editorial published on the company's website, Yongin Park, who heads the Sensor Business Team at Samsung's LSI division, has revealed that his team is working on a camera sensor that will be able to capture more detail than the human eye. As noted in the article, the human eye is said to match a resolution of around 500 megapixels. Samsung, however, is working on bringing a 600MP camera sensor to the market, which could be used in various fields such as smartphones, autonomous vehicles, drones, and IoT. As you would expect, however, it will take a long time for the company to actually launch a camera sensor with such a high resolution.
A 600MP sensor would be massive in size, making it nearly impossible to fit inside a modern smartphone. In order to shrink the sensor, Samsung will have to reduce the pixel size, which would require the use of pixel binning tech to ensure the smaller pixels don't result in dull pictures. Samsung's 108MP ISOCELL Bright HM1 sensor uses its proprietary 'Nonacell technology,' which boasts a 3x3 pixel structure. This allows nine 0.8um pixels to function as one large 2.4um pixel to deliver impressive low-light performance.
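For the curious, here is a minimal, hypothetical sketch of what 3x3 binning amounts to numerically. Real Nonacell binning happens in hardware on the color filter array; the NumPy averaging below is only a toy model of the idea.

```python
import numpy as np

def bin_pixels(raw, factor=3):
    """Average factor x factor blocks of sensor pixels into one output pixel.

    Toy model only: real sensors combine charge/readout values per color
    channel on the Bayer/Nonacell grid, not a plain grayscale mean.
    """
    h, w = raw.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of the bin size
    blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

frame = np.random.rand(1200, 900)   # stand-in for a 12000x9000 (108 MP) readout
binned = bin_pixels(frame)
print(binned.shape)                 # (400, 300); at full scale, 4000x3000 = 12 MP
```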
Re: (Score:3)
We've had that for a long while, but it is called "zoom-and-enhance", not just "zoom".
https://www.youtube.com/watch?v=Vxq9yj2pVWk
Re: (Score:2)
We've had it since 1880; take a look at an original print from a 16x20 glass negative. Large format photography has always had the "far better than real life" effect.
Re:Yea! More megapixels (Score:5, Interesting)
The end-game goal is to match the resolution needed for holography [wikipedia.org]: roughly 2,000 lines per mm, which for 600 MP would be about a 12x12 mm sensor. When sensors reach that density, you can start to capture interference patterns instead of light projected from lenses, meaning you'll be capturing 3D holograms instead of photos. (Obviously a lot of other things need to happen to capture a hologram, but these are the sensor requirements.)
So this drive to higher and higher sensor resolutions is not pointless.
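Back-of-the-envelope check of that figure, assuming a square sensor and treating the ~2,000 lines/mm as pixels per mm, as the parent does:

```python
import math

pixels = 600e6            # 600 MP
lines_per_mm = 2000       # resolution commonly quoted for holographic media

side_px = math.sqrt(pixels)          # ~24,500 pixels per side for a square sensor
side_mm = side_px / lines_per_mm     # ~12.2 mm, i.e. roughly a 12x12 mm sensor
print(f"{side_px:.0f} px per side -> {side_mm:.1f} mm per side")
```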
Re: (Score:1)
Re: (Score:2)
12x12 mm sensor is fucking tiny and would produce images that look like shit.
Modern smartphones with sensors a fraction of the size disagree with you.
Re: (Score:2)
When sensors reach that density, you can start to capture interference patterns instead of light projected from lenses.
You can only capture the interference pattern if you create it first, meaning illuminate the scene with coherent light, then let the reflected light interfere with part of the beam (the reference).
Taking a holographic picture in daylight wouldn't work, you need laser light illumination and an extremely vibration-free setup, meaning the whole thing would be impractical.
It's probably more interesting to try to capture the phase of the light directly, or less difficult, the light field (see Lytro for example).
Re: (Score:1)
Back to a periscope zoom mechanism in a smartphone?
Re: (Score:2)
could be used in various fields such as smartphones, autonomous vehicles, drones, and IoT
Note the words "could be used." Not really useful for any of those fields, but it could be, it could be.
Of that random shopping list, the only one where you could make a vague argument for it is drones, for some forms of surveillance, but even then you want a long lens to capture the item of interest, not a massive image of the entire world from which you then need to pick out the 500 pixels you want to see.
Re: (Score:2)
Samsung and Huawei both have 105 MP cameras on their latest phones and they do produce noticeably sharper images even when downscaled to 16 MP. Other phones have to artificially sharpen the image a little but on the 105 MP ones they kind of go the other way; rather than adding artificial sharpness they select pixels when downscaling that maintain sharp edges.
Anyway it looks really good.
Re: (Score:2)
Re: (Score:2)
"Because the problem with today's cameras is definitely that the don't have enough megapixels. "
Exactly. Because of the masks we have to wear for the foreseeable future, face recognition doesn't work, so we need eye recognition.
We'll see (Score:2)
But the fact is, cellphone cameras have improved by an amazing amount over the last 10 or 15 years.
Re: (Score:3)
Gut reaction is, "you can't beat physics! It's all about light gathering and sensor size!!"
But the fact is, cellphone cameras have improved by an amazing amount over the last 10 or 15 years.
Yes, cell phone cameras have improved quite a bit, but the biggest advance has been photo processing. Don't let anyone fool you, it's still about light gathering and sensor size as the physics sets the limits, not marketing. There is only so much detail that you can get out of small sensors, after that you get a lot of noise and/or extrapolated data.
Re: (Score:2)
This is undoubtedly true. You can accomplish a great deal by simply using finer pixels and more sensitive detectors. It will not _match_ the behavior of human vision, which has excellent edge detection that is key to its features, and to many optical illusions. The very effective color perception in different lighting and against different backgrounds amazes me, especially as I get older and have lost some visual acuity.
Re: (Score:2)
Overstating things a bit (Score:5, Insightful)
The human eye cheats on the specs. That "500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy. For example, right now I can't even decipher the third word to the left of my cursor if I keep looking at the cursor. The brain compensates for this by constantly shifting the gaze to the "most important" part of the current view.
Since the camera can't do that, it's got to record everything just in case. So it needs lots more "pixels" than people get by with.
Re: (Score:1)
So, just random up an algorithm that determines the most important pixels in a field of view, and you're all set.
Re: (Score:2)
So, just random up an algorithm that determines the most important pixels in a field of view, and you're all set.
For the typical Instagram "star" you don't even need an algorithm, there's only two areas you need to focus on.
Re: (Score:2)
The human eye bounces around a lot when we're studying something. Even in video you're better off using focus and motion blur rather than dynamic resolution.
Re: (Score:2)
"500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy.
So you're saying that 640K (pixels) ought to be enough for everyone?
Joke, it's a joke. That being said, there's a good animal documentary that visualizes what different animals see -- it's not the "dot-focus" that we see, or perfect vision everywhere all of the time. What a surprise, they tend to have better vision and motion detection in the general area where their food appears. Outside that area, not so much. (That might explain why I can find the fridge so easily without my glasses.)
And a lot more cheating than that (Score:2)
That's a good point. Also, that number is based on detecting the presence of a flashing light, not distinguishing detail.
Think of how close you have to get to your screen in order to see the individual red, blue, and green sub-pixels. Your vision is filled with only a few thousand pixels by the time you can distinguish one from another, not even 1MP.
When distinguishing a pattern of black and white lines vs an even gray color, 20/20 vision is something like 30 lines per degree in the high-res region, so 6,0
Re: (Score:2)
I'd counter by saying that the human eye is great at detecting distant details, better than any camera, except huge zoom cameras which mitigate that problem, with the downside of having a very narrow FoV.
Example: if you look at the full moon with your naked eye, provided you have good vision in the first place, you will see its features (darker spots, whiter spots). If you take your phone out and try to take a picture of said full moon, you will get shit, even at maximum zoom.
Same with birds. Look at a bird
Re: (Score:3)
Most camera manufacturers will be only too glad to sell you a camera with the focusing mechanisms, and the supertelephoto lens, necessary to capture birds in flight. It's a very expensive hobby.
Re: (Score:2)
Do phones actually do zoom, or just crop? And even if they do do zoom, there's just not the physical space there for a focal length of more than a few millimetres, so of course they have a wider FoV than the eye.
I think that what the moon example really illustrates is that the eye has good dynamic range. Sensors are getting better at that.
Re: (Score:2)
As a previous commenter said, your eye cheats. It works a bit like a dual lens system, with a telephoto (high resolution / low FOV) lens and wide angle (low resolution / high FOV) lens acting together. Your brain sweeps that high resolution spot around and just makes up the rest, giving you the illusion of high resolution vision.
Re: (Score:2)
I am not sure how my eyes can "make up" moon spots which I then compare to images of the moon, and the same spots show up.
Re: (Score:2)
That's not surprising. The illusions your brain creates are very convincing. You realize you have a big blind spot almost in the middle of your visual field right? And yet you've never seen it, and can only detect it through tricks.
Try an experiment. Look at the moon and sketch it. After you're happy with your sketch, wait a day or two, THEN compare it to an actual photo. How similar is it? All those little details true to life? Now try sketching the star field around the moon, without taking your focus off
Re: (Score:2)
I was reading a GPU blog, I think from AMD, and they were talking about VR. They said that the interesting thing about vision
Re: (Score:2)
They also notice stuttering when you "drop it down" 200 to 244, as long as you tell them you reduced it. Just saying.
Re: (Score:2)
this should be a fucking GOLDMINE for 3d computer graphics
imagine you only had to render a tiny portion of the scene at nominal resolution and the rest at 1/10 resolution
holy shit this changes fucking EVERYTHING. Only problem is we need a very fast way to determine where my eyeballs are actually looking
Re: (Score:3)
It's in the works for VR: https://www.cnet.com/news/eye-... [cnet.com]
There's also a truly practical reason for eye tracking: it helps intelligently reduce graphics processing to only your fovea (called foveated rendering), creating an experience that looks the same, but taxes the processor a lot less. That means only the parts of the image you're looking directly at get the fullest resolution and image quality, everything else can save GPU cycles.
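A rough illustration of that foveated idea, assuming a finished frame and a known gaze point. Real engines render the periphery at low resolution in the first place rather than post-processing a full frame; the names and the single fovea radius below are made up for the sketch.

```python
import numpy as np

def foveate(full_res, gaze_xy, fovea_radius=128, peripheral_scale=4):
    """Toy foveated compositing: keep full resolution near the gaze point,
    replace the periphery with a block-downsampled (then repeated) copy."""
    h, w = full_res.shape[:2]
    s = peripheral_scale
    # Cheap nearest-neighbour style downsample/upsample of the whole frame.
    low = full_res[::s, ::s]
    periphery = np.repeat(np.repeat(low, s, axis=0), s, axis=1)[:h, :w]

    out = periphery.copy()
    gx, gy = gaze_xy
    y0, y1 = max(0, gy - fovea_radius), min(h, gy + fovea_radius)
    x0, x1 = max(0, gx - fovea_radius), min(w, gx + fovea_radius)
    out[y0:y1, x0:x1] = full_res[y0:y1, x0:x1]   # full detail only where you look
    return out

frame = np.random.rand(1080, 1920, 3)            # stand-in for a rendered frame
composited = foveate(frame, gaze_xy=(960, 540))  # gaze at screen center
```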
Further down the road, there's the possibility of resolving the "vergence-accommodation conflict". VR headsets currently have a fixed focal depth, which conflicts with the stereoscopic depth effect that VR uses. This is one major reason for the headaches people get from VR. If you can see what each eye is looking at separately, you can work out the distance they are focusing at, and adjust the focal depth of the headset accordingly. https://www.foun [foundry.com]
Re: (Score:2)
The human eye cheats on the specs. That "500 megapixel" resolution only applies to the fovea, which is about 0.01% of the total field of view. Everything else is more or less fuzzy. For example, right now I can't even decipher the third word to the left of my cursor if I keep looking at the cursor. The brain compensates for this by constantly shifting the gaze to the "most important" part of the current view.
Since the camera can't do that, it's got to record everything just in case. So it needs lots more "pixels" than people get by with.
It's worse than that. The actual "resolution" of the eye isn't 500 megapixels at all, but rather a significantly lower resolution combined with a very powerful AI (well, minus the A) that takes a real-time video stream and infers a high-resolution image from it, which is stored in temporary memory.
Our eyes aren't that good. We just remember the awesome heavily processed image as being that good.
Ironically this would have been Samsung's opportunity to use the marketing term "AI" for something relevant.
Re: (Score:2)
It's even worse than that. The moderate resolution of the very small FOV fovea and low resolution of the rest of the retina is processed within an inch of its life, then squeezed through the low-bandwidth optic nerve. Then it's processed well beyond an inch of its life, with the brain basically making up most of what you think you see.
The bandwidth of the optic nerve is estimated at between 6 and 10 megabits per second.
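Rough arithmetic on why so much has to be "made up" downstream. The 1 MP / 30 fps / 8-bit figures below are illustrative assumptions for comparison, not measurements of the retina:

```python
# Even a modest video stream far exceeds the quoted optic-nerve bandwidth.
raw_bits_per_s = 1_000_000 * 30 * 8              # 1 MP, 30 fps, 8 bits per pixel
optic_nerve_bits_per_s = 10_000_000              # upper end of the 6-10 Mbit/s estimate

print(raw_bits_per_s / 1e6)                      # 240 Mbit/s of "raw" data
print(raw_bits_per_s / optic_nerve_bits_per_s)   # ~24x more than the nerve carries
```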
Re: (Score:2)
Re: (Score:2)
...the fovea, which is about 0.01% of the total field of view.
No, you're off by at least a factor of 100.
Re: (Score:2)
The fovea is about 1 degree wide, out of about 100 degrees total. That's 1/100 in each linear direction; squaring that gives 1/10,000 total view area, or 0.01%.
Re: (Score:2)
Perhaps I was oversimplifying things when I used the term "fovea". The very center of the fovea, which is smaller than the entire thing, has the highest resolution. That central area has about 1 degree field of view.
The real androidcentral link (Score:1)
Samsung wants to rival the human eye with 600MP camera sensors [androidcentral.com]
Sure! (Score:2)
And then those sensors can be used in the digital SLRs that take the pictures for cell phone companies like Huawei to use in their ads!
Then keep it large! (Score:2)
Making it smaller will only mean FAKE pixels. Pixels that technically exist but practically are too small to catch a notable amount of photons.
Nobody's got a problem with large surface area. The eye isn't pinkie fingernail sized either.
But hey, I know: nowadays it is always *imperative* to choose the dumbest, most appearance-focused option physically possible, and to shun and laugh at the sane option.
Case in point: Real keyboards versus touchscreen virtual "keyboards" with comically too small "keys" with
Human eyes are garbage (Score:1)
Maetel: Look well Tetsuro, look well. The next time you see the Earth, it will be with machine eyes.
ever open the "Galaxy Store"? (Score:2)
Re: (Score:2)
to me a phone is a tool, not a toy
A phone is a general purpose computer. I've been using computers as tools and toys for almost as long as I've had access to them. Phones are no exception, and I don't really see why they should be.
i would rather have just a stock barebones android with only operating system updates coming from samsung, and let me decide which applications to run on my phone, i wished somebody would step up and build smartphones for people that see their phones as an essential tool and not
Re: (Score:1)
a phone is a tool, not a toy
I have lots of actual tools, for cutting metal and shaping wood. They're fun to use.
More seriously, it's not that these things are marketed as toys. Phones have become tied into youth fashion. Part of what sells phones is how young people feel their peers will perceive them. Which I think is why grumpy grown ass men don't understand why phones aren't just phones.
if samsung wants to turn everyone's phones into toys then i am switching brands
If phones are not useful to you, and you are not entertained by them, then yes you shouldn't buy one. It seems clear that Samsung isn't targeting y
Definition of huge (Score:2)
What a missed opportunity (Score:2)
The human eye isn't actually that good, and a large part of our visual perception is due to the heavy processing of the real-time feed it gives to the brain.
How Samsung's marketing department missed the opportunity to talk about an "AI assisted" sensor is beyond me, given this is one of the few times such marketing actually makes sense.
dynamic range? (Score:2)
Megapixels, yay!
What about dynamic range? Try capturing the full moon with the clouds it lights... no can do with any standard camera, at least not if you don't use tricks like HDR.
What about noise? Sensitivity? Etc. There are many factors in a camera sensor other than pixel count.
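One of the "tricks like HDR" mentioned above is exposure fusion: bracket several shots and blend the best-exposed parts of each. A minimal sketch using OpenCV's Mertens fusion; the file names are placeholders, not real assets:

```python
import cv2
import numpy as np

# Three bracketed shots of the same scene (placeholder file names):
# one exposed for the moon, one mid, one exposed for the moonlit clouds.
exposures = [cv2.imread(p) for p in ("moon_dark.jpg", "moon_mid.jpg", "moon_bright.jpg")]

# Mertens exposure fusion blends the well-exposed regions of each frame;
# unlike a true HDR radiance merge, it needs no exposure-time metadata.
fusion = cv2.createMergeMertens()
fused = fusion.process(exposures)                # float image in roughly [0, 1]
cv2.imwrite("moon_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```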
Re: (Score:2)
What about diffraction limits [wikipedia.org]? That's one of the primary reasons that actual cameras (not the junk they stuff in phones) have big glass.
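Back-of-the-envelope version of that diffraction argument, assuming green light and a typical bright phone-lens aperture:

```python
wavelength_um = 0.55      # green light, ~550 nm
f_number = 1.8            # typical bright phone-camera lens

# Diameter of the Airy disk (first minimum), the smallest spot the lens can form.
airy_diameter_um = 2.44 * wavelength_um * f_number
print(f"Airy disk ~ {airy_diameter_um:.2f} um")   # ~2.4 um

# A 0.8 um pixel (as in the 108 MP sensor above) is already several times
# smaller than that spot, so ever-finer pixels mostly oversample the lens blur.
```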
Tiny Sensor ... (Score:2)
Biggest sensor ... really ... nope :
https://icdn7.digitaltrends.co... [digitaltrends.com]
Puts the ball in balloon (Score:2)
"600mp"
This is stupid given it's put through compression and filters to remove wrinkles and fine details. Who cares how accurate the outline of a human body is when the skin is replaced by a flesh-covered smooth balloon?
Dilbert: "Yes, but it's theoretically nice."
Pixel binning = inflated numbers (Score:2)
"Pixel binning" is a term for smartphone cameras, where multiple pixels are used to create a single pixel in the actual image file. Even if a sensor has "108 megapixels" on paper, it combines 33 samples to make one actual pixel in the final image. A "108 MP" camera produces 12 megapixel images! And each of those "pixels" may even not be a full RGB pixel but only a monochrome pixel behind a Bayer filter [wikipedia.org], sampling only one component of Red, Green or Blue.
It is as if you would measure the size of your penis by
Re: (Score:2)
If the industry all uses the same metric and we understand what that means, then I don't see the problem. It's sort of like arguing that Fahrenheit inflates the numbers versus Celsius because the numbers are bigger. Even though you can convert between them with a bit of arithmetic as you managed with your camera megapixels example.
It is as if you would measure the size of your penis by coiling the measuring tape in a spiral around it: technically you would be measuring from root to tip, but the result wouldn't be exactly comparable, would it!?
It is more like your measuring in centimetres and me assuming you're reporting in inches.
It is just as when smartphone processors are touted as "octa-core", when the arrangement is 2+6 and the two fast cores can't run at the same time as any of the six slow ones.
SMP(symmetric multiprocessing) wasn't the original multiprocessor configuration for comput
What's the point? (Score:3)
You need optics that can take advantage of the sensor's resolution. No cellphone is ever going to have that.