Planet Labs Tests AI-Powered Object Detection On Satellite
BrianFagioli writes: Artificial intelligence has now run directly on a satellite in orbit. A spacecraft about 500km above Earth captured an image of an airport and then immediately ran an onboard AI model to detect airplanes in the photo. Instead of acting like a simple camera in space that sends raw data back to Earth for later analysis, the satellite performed the computation itself while still in orbit.
The system used an NVIDIA Jetson Orin module to run the object detection model moments after the image was taken. Traditionally, Earth observation satellites capture images and transmit large datasets to ground stations where computers process them hours later. Running AI directly on the satellite could reduce that delay dramatically, allowing spacecraft to analyze events like disasters, infrastructure changes, or aircraft activity almost immediately. "This success is a glimpse into the future of what we call Planetary Intelligence at scale," said Kiruthika Devaraj, VP of Avionics & Spacecraft Technology. "By running AI at the edge on the NVIDIA Jetson platform, we can help reduce the time between 'seeing' a change on Earth and a customer 'acting' on it, while simultaneously minimizing downlink latency and cost. This shift toward integrated AI at the edge is a technological leap that can help differentiate solutions like Planet's Global Monitoring Service (GMS), providing valuable insights for our customers and enabling rapid response times when it matters most."
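The article describes the pipeline only at a high level: capture, detect onboard, downlink a small result instead of raw pixels. A minimal sketch of that idea follows; the detector function, its output fields, and all numbers are hypothetical stand-ins, since Planet has not published its onboard model or API.

```python
import json

# Hypothetical stand-in for the onboard detector; a real system would run a
# trained neural network on the Jetson. Coordinates and scores are made up.
def detect_airplanes(image_pixels):
    return [
        {"lat": 37.615, "lon": -122.392, "conf": 0.94},
        {"lat": 37.618, "lon": -122.389, "conf": 0.81},
    ]

def onboard_pipeline(image_pixels):
    """Run detection in orbit; downlink only a compact report."""
    detections = detect_airplanes(image_pixels)
    return json.dumps({"scene": "airport", "detections": detections})

raw_image = bytes(1024)  # placeholder for a captured frame (real scenes are GBs)
report = onboard_pipeline(raw_image)
print(len(raw_image), "bytes captured ->", len(report), "bytes downlinked")
```

The point of the pattern is the last line: the downlinked report is orders of magnitude smaller than the raw capture, which is what makes the latency and cost claims in the quote plausible.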
Re: (Score:2)
What could go wrong.
We'll never find out because people like you won't let us try!
Re: (Score:2)
The problem with Reagan's Star Wars: no droids.
bent pipe (Score:3, Insightful)
for fecks sake
there is a reason why you don't do compute in space: it's dumb, and however much you think there is power etc. you still have to launch that weight up there
the best option is to do all of this on Earth, with the raw data transmitted down
the ultimate is a passive system like a bent pipe
get over it
Re: (Score:2)
Local processing - whether on your smartphone, smartwatch, or a satellite in orbit - has utility in that:
(1) Data is private.
(2) Time/bandwidth not needed to transmit data.
With local processing, one might get an answer faster because the transmitted result is such a smaller piece of data.
Re:bent pipe (Score:4)
With local processing, one might get an answer faster because the transmitted result is such a smaller piece of data.
Not only because the volume of data is smaller, but because the compute is local. Geosync RTT is 240–280 milliseconds. Earth–Moon RTT is about 2.6 seconds. Earth–Mars RTT is 6–44 *minutes*.
This insane take that compute shouldn't happen in space is only viable for LEO or latency-insensitive use cases. We are going to put enormous amounts of compute in space, because we're going to need real-time compute far beyond LEO. johnjones is just an idiot spouting off on the interwebs and may be safely ignored.
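The latency figures above are just light travel time and can be checked with a one-line formula, RTT = 2d/c. Distances below are standard values (GEO altitude, mean Earth–Moon distance, Mars at closest and farthest approach); this ignores processing and queuing delays, so these are lower bounds.

```python
# Minimum round-trip light time for various distances.
C = 299_792_458  # speed of light, m/s

def rtt_seconds(one_way_distance_m):
    """Lower-bound round trip: signal out and back at the speed of light."""
    return 2 * one_way_distance_m / C

leo = rtt_seconds(500e3)        # ~500 km low Earth orbit
geo = rtt_seconds(35_786e3)     # geostationary altitude
moon = rtt_seconds(384_400e3)   # mean Earth-Moon distance
mars_min = rtt_seconds(54.6e9)  # Mars at closest approach
mars_max = rtt_seconds(401e9)   # Mars at farthest

print(f"LEO  RTT: {leo * 1000:.1f} ms")
print(f"GEO  RTT: {geo * 1000:.0f} ms")
print(f"Moon RTT: {moon:.2f} s")
print(f"Mars RTT: {mars_min / 60:.1f}-{mars_max / 60:.1f} min")
```

The GEO number comes out around 240 ms before any ground-segment overhead, which is why real-world GEO links are usually quoted in the 240–280 ms range.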
Re: (Score:2)
What's the differential latency of running a strong model for several turns (or the equivalent) on a spacecraft's power budget compared to a data center's power budget, especially once you factor in redundancy to manage single-event upsets in the huge RAM array needed for that model?
I use Claude rather than a local model because I don't want to wait all afternoon for the quality of results I can fit into 128 GB RAM.
Re: (Score:2)
Provide actual requirements. Object detection doesn't require a data center or a rack of GPUs. Other things might.
Re: (Score:2)
But then you have to transmit potentially massive amounts of data back to Earth.
Say you want to detect aircraft entering airspace. They are difficult to detect with radar, so you want to do it optically. You need decent resolution to capture small drone sized ones, and you need multiple images to help with camouflage, false positives, and determining flight path.
That's a lot of data. The data rate is likely to be the limiting factor on what resolution and how frequently you can image an area. Being able to do the detection on the satellite, and only send reports or images that suggest further investigation is worthwhile, is going to be very useful.
It's the data (Score:2)
You're absolutely right: this exists to reduce the amount of data that has to be sent back to a ground station. Streaming endless gigabytes of data is slow and subject to interference.
Performing the analysis in situ and sending back only the results is hugely appealing.
Re: (Score:2)
Say you want to detect aircraft entering airspace. They are difficult to detect with radar, so you want to do it optically. You need decent resolution to capture small drone sized ones, and you need multiple images to help with camouflage, false positives, and determining flight path.
That's a lot of data. The data rate is likely to be the limiting factor on what resolution and how frequently you can image an area. Being able to do the detection on the satellite, and only send reports or images that suggest further investigation is worthwhile, is going to be very useful.
That is unsafe engineering. A failure condition where the presumption is "All Clear" for aircraft entering an airspace.
Neither civilians nor the military can afford to assume that a space is clear unless told otherwise; the presumption must always be not-safe until confirmed clear. The risk from a false negative is too high.
Re: (Score:2)
With such a system you can tolerate a lot of false positives, to ensure you don't get false negatives. All that happens is the false positive image gets sent to the ground for verification.
And remember that the other option is having nothing at all.
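The triage policy described here (tolerate false positives, downlink ambiguous cases for human verification) can be sketched as a two-threshold filter. The threshold values and detection fields below are illustrative assumptions, not from any deployed system.

```python
# Two-threshold triage: a low review threshold biases the system toward
# false positives, so borderline detections go to the ground for analysts
# instead of being silently dropped. All values are illustrative.
def triage(detections, review_threshold=0.2, report_threshold=0.8):
    confirmed, needs_review = [], []
    for det in detections:
        if det["conf"] >= report_threshold:
            confirmed.append(det)      # high confidence: report immediately
        elif det["conf"] >= review_threshold:
            needs_review.append(det)   # ambiguous: downlink for verification
        # below review_threshold: discard as noise
    return confirmed, needs_review

dets = [{"id": 1, "conf": 0.95}, {"id": 2, "conf": 0.45}, {"id": 3, "conf": 0.05}]
confirmed, review = triage(dets)
print(len(confirmed), "confirmed,", len(review), "sent down for verification")
```

Lowering `review_threshold` trades downlink bandwidth for recall, which is exactly the knob the safety argument above is about.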
Re: (Score:2)
No, it's going to be the amount of processing power you've got. In the future you could put more up there, but that has problems. You could also put more lasers on the satellite and have more bandwidth.
What you can't do is make light go faster. On-satellite analysis could be critical if you wanted to run a fairly simple algorithm on a fairly limited amount of data and detect a fairly obvious feature very quickly, and do something automated with that information.
Re: (Score:2)
for fecks sake
there is a reason why you don't do compute in space: it's dumb, and however much you think there is power etc. you still have to launch that weight up there
the best option is to do all of this on Earth, with the raw data transmitted down
the ultimate is a passive system like a bent pipe
get over it
We're not talking GB200s. Orins are SoCs, with the top power draw of the biggest version at 75W. This is not a heavy or power-hungry system.
Not impressive, a Pre-ML 1990s PC doable problem (Score:3)
Instead of acting like a simple camera in space that sends raw data back to Earth for later analysis, the satellite performed the computation itself while still in orbit.
Detecting aircraft in a satellite image is something that human-coded algorithmic computer vision, not machine learning, could do in the 1990s on a desktop PC. That a Jetson can run a model recognizing aircraft is not surprising at all. It is a rather simple problem. Again, pre-ML, 1990s-PC doable.
Smart phones and smart watches are already pioneering local ML processing. The machine learning models that can be run on the CPU inside an Apple Watch are impressive. Amazing onboard voice analysis.
It's cool that satellites are doing this too, but there is nothing new or surprising here.
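For what it's worth, the 1990s-style approach the comment alludes to needs no ML at all: threshold bright objects against dark tarmac, then count connected components. A toy sketch on a synthetic grid (a real detector would add shape and size filters on top):

```python
# Toy classical detector: threshold + connected-component counting.
# The "image" is a tiny synthetic brightness grid, purely illustrative.
IMAGE = [
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 8],
    [0, 0, 0, 0, 8, 8],
]

def count_blobs(img, threshold=5):
    h, w = len(img), len(img[0])
    seen = set()
    blobs = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and (y, x) not in seen:
                blobs += 1
                stack = [(y, x)]  # flood-fill one connected component
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen or not (0 <= cy < h and 0 <= cx < w):
                        continue
                    if img[cy][cx] <= threshold:
                        continue
                    seen.add((cy, cx))
                    stack += [(cy + 1, cx), (cy - 1, cx),
                              (cy, cx + 1), (cy, cx - 1)]
    return blobs

print(count_blobs(IMAGE))  # two bright objects in the grid
```

This is roughly 1990s-desktop territory: no training data, no GPU, just thresholds and a flood fill.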
Re: (Score:2)
You're conflating concerns. Most government systems are required to log the hell out of their inputs and outputs. Making decisions to destroy something based on ephemeral data could happen just as easily on the ground as it could in orbit -- it has nothing to do with what kind of system (large neural network, traditional ML, human decision, or something else) makes the decision or where the decision happens.
Re: (Score:2)
And with the raw data in space, or nonexistent, there is no way anyone on Earth can double-check what was automatically targeted and destroyed.
Not necessarily. Perhaps the data is not needed during the decision-making cycle. Getting the result, the answer, from processing that data may be all that is needed.
The data can still be sent later, when time is not critical. If nothing else, it's a good test case for testing the next release of the software.
Re: (Score:3)
Didn't they try to do that kind of image recognition in the 90s and find it unreliable? IIRC they tested it with tanks and found that rather than detecting tanks it was detecting sunny days, and once they eliminated the weather variations it couldn't do anything useful.
Today Tesla's vision system is notoriously unreliable, and you would assume that in military applications the aircraft are going to be camouflaged.
Re: (Score:2)
Didn't they try to do that kind of image recognition in the 90s and find it unreliable?
No. At least not on the desktop PCs I referred to. Now, in an embedded system onboard a missile or something, that could have been too much in the 1990s.
IIRC they tested it with tanks and found that rather that detecting tanks ...
Airplanes at airports is a much easier problem. Airplanes are also more distinctive than tanks.
Today Tesla's vision system is notoriously unreliable, and you would assume that in military applications the aircraft are going to be camouflaged.
Real-time is a very different thing. It's not what was demoed.
Re: (Score:2)
Could the point of the brag be that it's newer hardware? My understanding is that successive generations of earthling chips are more vulnerable to malfunction from cosmic rays, etc due to their much higher density.
But I've totally not kept up. Is this still a problem?
Tech demo, RAD hardened CPU not needed (Score:2)
Could the point of the brag be that it's newer hardware? My understanding is that successive generations of earthling chips are more vulnerable to malfunction from cosmic rays, etc due to their much higher density. But I've totally not kept up. Is this still a problem?
This was a tech demo. What makes you think a tech demo needed a rad-hardened CPU? I believe they added a lot of shielding around an off-the-shelf CPU, as could be done with other recent hardware.
Missile, not satellite, probably more desired goal (Score:2)
Re: (Score:2)
Re: (Score:2)
Might be a proof of concept project, the real goal is getting the local ML processing onboard a missile.
Uh, that's not even hard.
Or a rod from god.
The way harder part is doing any meaningful sensing through the atmospheric disturbance at those speeds.
Re: (Score:2)
Might be a proof of concept project, the real goal is getting the local ML processing onboard a missile.
Uh, that's not even hard.
Other than the electronics surviving G's and vibration. Which is probably what the satellite tech demo is largely testing. Does it survive launch?
and it almost worked! (Score:1)
What's the point? (Score:2, Informative)
Re: (Score:2)
EXACTLY! They have so much of a bandwidth issue that they need old, slow GPUs in space to compress data that much?? Sounds to me like it might make sense if you were going to replace the thing often with newer hardware, or if you are worried about signal jamming and getting data quickly to some portable, underpowered point on the ground... It might give you fractions of a second, but the real-world physical actions are not as time-critical as people tend to think.
So your missile response might fail to b
Re: (Score:1)
It's a systems engineering trade. It's been a while since I've worked on a space system, but...
You might bother because transmitting all that data down and waiting for a response takes more time than you have if you want to respond on the current orbit.
You might bother because the power cost of transmitting the data is on par with using stored energy on the platform to perform the computation locally.
You might bother because you want to be more efficient in your use of your communication link regarding the
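The time part of that trade can be sketched with back-of-envelope numbers. Everything below is an illustrative assumption (scene size, report size, link rate, inference time), not a figure from Planet or any real mission, and it ignores ground-station visibility windows, which often dominate in practice.

```python
# Back-of-envelope trade: downlink a raw scene vs. compute onboard and
# downlink only a report. All figures are illustrative assumptions.
SCENE_BYTES = 2e9          # assumed 2 GB raw multi-frame capture
REPORT_BYTES = 10e3        # assumed 10 kB detection report
DOWNLINK_BPS = 200e6 / 8   # assumed 200 Mbit/s link, in bytes/s
INFERENCE_S = 2.0          # assumed onboard inference time

raw_path = SCENE_BYTES / DOWNLINK_BPS                   # ship raw data down
edge_path = INFERENCE_S + REPORT_BYTES / DOWNLINK_BPS   # compute, ship report

print(f"raw downlink: {raw_path:.0f} s, edge pipeline: {edge_path:.2f} s")
```

Under these assumptions the edge path wins by well over an order of magnitude on transfer time alone, before counting the wait for a ground-station pass.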
Re: (Score:2)
AI is typically more efficient than the old hand-coded algorithms.
The reason you want to do this in space is that you want it to be autonomous. What might you want an autonomous satellite to do?
https://www.wearethemighty.com... [wearethemighty.com]
Bad idea (Score:2)
Re: (Score:2)
Sincerely,
Someone who actually read the summary.
Oh great! (Score:2)
License plate readers and red light cameras aren't enough, now we'll be surveilled from space!