
Google Earth's New Satellites

Rambo Tribble writes "The BBC provides some insight into the next-generation satellites being built for Google by contractor DigitalGlobe in Colorado. The resolution of these satellites' cameras is sufficient to resolve objects just 25cm across. Unfortunately, the public will be allowed only half that resolution; the sharpest images are reserved for the U.S. military. 'The light comes in through a barrel structure, pointed at the Earth, and is bounced around by a series of mirrors, before being focused onto a CCD sensor. The big difference – apart from the size – between this and a typical handheld digital camera, is that the spacecraft will not just take snapshots but continuous images along thin strips of land or sea.'"
  • by Anonymous Coward on Wednesday February 12, 2014 @05:26PM (#46232473)

    ITAR applying to satellites and space probes is a right pain in the ass for anyone actually trying to get useful work done with international assistance.

  • by schneidafunk ( 795759 ) on Wednesday February 12, 2014 @05:35PM (#46232561)

    After RTFA, it's clear the satellite is owned not by Google but by DigitalGlobe. Check out this tidbit: "The satellite will be able to point to particular areas of interest and is capable of seeing objects just 25cm (10 inches) across. However, DigitalGlobe can only sell these highest-resolution images to customers in the US government."

  • by RocketSW ( 1447313 ) on Wednesday February 12, 2014 @05:54PM (#46232747)

    DigitalGlobe is not in the business of building satellites. Ball Aerospace is building the satellite for DigitalGlobe, which will operate it. DigitalGlobe then sells/leases the imagery to Google.

  • Re:Resolution (Score:4, Informative)

    by maeka ( 518272 ) on Wednesday February 12, 2014 @06:12PM (#46232933) Journal

    Seems to me like the current pics have pixels thinner than 0.5 meters... I feel like I am missing something?

    In many (most?) developed Western areas the images come from planes, not satellites. There is a great deal of high-resolution aerial photography on the open market, and Google has used much of it.

    The development being discussed in the article will benefit outlying areas and places where having temporal density is useful.

  • by thomst ( 1640045 ) on Wednesday February 12, 2014 @06:29PM (#46233085) Homepage

    icebike conjectured:

    But it probably gets Google the sats it needs for free.

    If google can build it, but only the military can use the full resolution, it sounds like google is probably getting huge piles of money from the US Military.

    The summary is completely wrong (surprise!)

    Google is NOT building the satellite (note the singular) in question. It will merely be a customer of DigitalGlobe - one of many, including the US government.

    Not that the US government needs DigitalGlobe's images. After all, the NRO operates a fleet of its own satellites with far better image resolution than the DigitalGlobe effort.

    Slushdot: come for the misleading summaries, stay for the uninformed commentary!

  • Re:Continuous Image (Score:5, Informative)

    by LoRdTAW ( 99712 ) on Wednesday February 12, 2014 @07:23PM (#46233563)

    Or better yet: a flatbed scanner.
    In a scanner you have a one-dimensional array of sensors defining the pixel width. You then move the sensor along an axis, repeatedly recording its data at regular intervals (of distance or time). That motion comes from a little rubber timing belt around two pulleys, one of which is driven by a stepper motor, which drags the 1D sensor across the photo or page being scanned. The result is a 2D array of pixels, which is, drum roll please: a picture we can see.

    If you've ever used a scanner, you'll have noticed that high-resolution scans take much longer. This is because the sensor has to be moved more slowly, so the scanner can process the larger amount of data coming off the sensor and send it to the computer without needing large amounts of memory in the scanner.

    Let's do some math: a hypothetical scanner has a sensor with 300 pixels per inch, is 8.5 inches wide (for letter-sized paper), and captures 24 bits of RGB color. That gives (300*8.5*24)/8 = 7650 bytes per sample. If you sample at 300 evenly spaced points per inch and your page is 11 inches long (again, letter size), then you have 7650*300*11 = 25,245,000 bytes, or about 25.25 MB, for a 300x300 DPI 24-bit color scan.
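
    That arithmetic is easy to sanity-check in code. Here's a minimal Python check, using the same hypothetical numbers as the example above:

        # Back-of-the-envelope size check for the hypothetical 300 DPI letter scan.
        PPI = 300            # sensor resolution, pixels per inch
        WIDTH_IN = 8.5       # scan width in inches (letter paper)
        LENGTH_IN = 11       # scan length in inches
        BITS_PER_PIXEL = 24  # 8 bits each for R, G, B

        bytes_per_line = PPI * WIDTH_IN * BITS_PER_PIXEL / 8
        total_bytes = bytes_per_line * PPI * LENGTH_IN

        print(f"bytes per scan line: {bytes_per_line:,.0f}")   # 7,650
        print(f"total scan size:     {total_bytes:,.0f} bytes "
              f"(~{total_bytes / 1e6:.2f} MB)")                # ~25.25 MB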

    The same technology is used in slit cameras for industrial automation systems on conveyor lines and in other machine-vision applications. The conveyor or other linear movement plays the role of the little belt in the flatbed scanner, moving the object past the 1D sensor array. The cameras contain a 1D pixel array, and using an encoder on the conveyor (or simple timing), a computer can determine the rate at which to sample the array and write each line of data into a 2D array; voila, a picture appears. You can treat the image as a stream of pixel lines and write them to a file, akin to a scrolling image. The interesting part is that an image from that stream isn't a single instant in time (a freeze frame) like a photo from a 2D sensor; time elapses from one row of pixels to the next. It's a picture of elapsed time, rather like an oscilloscope trace, except you get a 2D array of pixels versus time instead of signal amplitude versus time.
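
    To make the row-by-row assembly concrete, here's a minimal Python/NumPy sketch of a line-scan capture loop. read_line() is a hypothetical stand-in for the real sensor read; everything else is just stacking rows:

        import numpy as np

        WIDTH = 1024  # pixels in the 1D sensor line

        def read_line(t):
            # Hypothetical stand-in for one exposure of the 1D line sensor.
            # A real driver would return the pixel row captured at instant t;
            # here we just synthesize a moving pattern.
            x = np.arange(WIDTH)
            return ((np.sin(x / 37.0 + t / 5.0) + 1.0) * 127.5).astype(np.uint8)

        def line_scan(n_rows):
            # Stack successive 1D reads into a 2D image. Row i was captured
            # at sample instant i, so the vertical axis is really a time axis.
            image = np.empty((n_rows, WIDTH), dtype=np.uint8)
            for t in range(n_rows):
                image[t] = read_line(t)
            return image

        img = line_scan(480)
        print(img.shape)  # (480, 1024): 480 sample instants x 1024 pixels each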

    But why a 1D array when we have 2D arrays in cameras already? The answer is twofold:
    - You can more practically build a very wide pixel array, consisting of millions of pixels, without the need to capture a large, data-intensive frame. You simply stream the 1D array and buffer it, which also simplifies the imaging process: you stream the sensor data to disk (or wherever) instead of freezing, writing a whole frame buffer to disk, and then getting ready to snap again.
    - Pixels next to each other on a 2D sensor pick up noise from each other. Ever zoom in on a picture taken with a cheap, high-megapixel camera? It looks like grey, fuzzy, blurry snow. That is the noise. A 1D array has less of it, since it's a single row of pixels.

    The new DigitalGlobe satellite uses the same technique, and the benefits are enormous.

    And one more tidbit: those persistence-of-vision displays that use a 1D array of spinning LEDs to create images or text work the opposite way from a slit camera. Instead of reading a sensor array, the display writes to an array of LEDs at regular intervals (say, every degree of rotation at a constant speed) to produce an image. It does this so fast that your eyes don't notice the LEDs switching on and off.
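
    For fun, a toy Python sketch of that write-side loop (all names and numbers hypothetical): at each angular step it pushes one column of a stored frame out to the LED array, and a fast enough spin fuses the columns into a picture.

        import time

        N_LEDS = 16    # LEDs in the spinning 1D array
        STEPS = 360    # angular positions per revolution (one per degree)
        RPM = 1800     # assumed constant rotation speed

        # Target image: STEPS columns of N_LEDS on/off states. A real display
        # would rasterize text or graphics; this is just a diagonal pattern.
        frame = [[(col + row) % 8 == 0 for row in range(N_LEDS)]
                 for col in range(STEPS)]

        def write_leds(states):
            # Hypothetical stand-in for latching a column into the LED driver.
            pass

        step_period = 60.0 / RPM / STEPS  # seconds per 1-degree step (~93 us)

        for col in range(STEPS):      # one full revolution
            write_leds(frame[col])    # show this column's on/off pattern
            time.sleep(step_period)   # real hardware syncs to rotation instead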

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...