Satellite Loaded With AI For Self-Diagnosis

TheReckoning writes "NASA has loaded its EO-1 satellite with Artificial Intelligence to diagnose on-board failures. The software 'works by comparing a computerized model of how the spacecraft's systems and software should perform against actual performance. If the spacecraft's behavior differs from the model, then the ... "reasoner" looks for the root cause of this difference and gives flight controllers several suggestions of what might have gone wrong.' Another NASA probe loaded with AI was Deep Space 1."
  • by EvilSS ( 557649 ) on Sunday October 10, 2004 @06:59PM (#10488519)
    ..in all seriousness, what happens if the AI system malfunctions?
    • by fireman sam ( 662213 ) on Sunday October 10, 2004 @06:59PM (#10488522) Homepage Journal
      Don't switch me off Dave
    • Sorry Dave, I can't tell you.
    • Have you ever heard of something called Skynet?
    • Reminds me of Mostly Harmless by Douglas Adams. An asteroid hits the ship, leaving a big hole and a cracked computer core; the robots fetch the backup core to replace it, then fall through the unnoticed hole into space while carrying it.

      Note that this probably could have been said better, but nothing can do proper justice to Douglas Adams but himself.
    • by Quasar1999 ( 520073 ) on Sunday October 10, 2004 @07:05PM (#10488562) Journal
      What happens if the AI malfunctions? Then mission control will get a bunch of useless error reports... It's a stupid self-diagnosis test... not AI... an overhyped buzzword... It simply runs a simulation, tests its results against the actual ones, and generates a report... that's not AI... Of course, if it is, that would classify a lot of humans at the DMV as intelligent as well... ;)
      • I think the proper name for this would be something like an automated expert system. Not quite AI, but its ability to answer questions lumps it into that category(?)
      • by Christopher Thomas ( 11717 ) on Sunday October 10, 2004 @07:58PM (#10488857)
        It's a stupid self-diagnosis test... not AI... an overhyped buzzword... It simply runs a simulation, tests its results against the actual ones, and generates a report... that's not AI...

        It's an expert system, which is indeed AI. You're probably thinking of "strong AI", which is AI that can function as powerfully and flexibly as a human [and if that definition is vague, it's because nobody's nailed down something more solid that everyone agrees on].

        Expert system AIs have been around for a long time in a wide variety of fields. They are designed to handle a narrow range of tasks (like fault diagnosis, medical diagnosis, or playing chess) better or more quickly than a human could.
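        For flavor, here's a minimal sketch of the rule-based style of expert system (the rules and symptom names are hypothetical, nothing from the EO-1 code):

        # Toy rule-based expert system: each rule maps a set of observed
        # symptoms to a candidate diagnosis. Real systems chain rules
        # and weigh evidence instead of doing one flat pass.
        RULES = [
            ({"no_telemetry", "battery_low"}, "power bus failure"),
            ({"no_telemetry"}, "transmitter fault"),
        ]

        def diagnose(symptoms):
            # Return every diagnosis whose trigger set is fully matched.
            return [fault for trigger, fault in RULES if trigger <= symptoms]

        print(diagnose({"no_telemetry", "battery_low"}))
        # -> ['power bus failure', 'transmitter fault']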
      • I agree. When I read all these articles about "AI", the description of the system doesn't even contain the essential word "learning". If all the system does is compare two models, it is not learning. If it is not learning, it is not intelligent. It doesn't even fall into the category of weak AI [wikipedia.org].
      • How the hell does an AI malfunction? Is it mechanical hardware? That's like asking what happens if an algorithm malfunctions. Some things don't just malfunction: the code may contain a logic error, and the hardware may fail, but software doesn't malfunction; it always does what it is told to do, the way it is told to do it.

        • In space, software may malfunction because of a cosmic ray hitting a CPU or memory cell in just the wrong location and flipping a bit. This is why there are usually three flight computers in a spacecraft: to detect and recover from these transient errors.
          • I've heard this cited as the reason that none of the newer small-process-node chips are certified for space flight. Last I heard they were using 386- and 486-class processors, which were designed with die features that are much larger and more resistant to this kind of interference.
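            A minimal sketch of the majority-vote idea behind those three flight computers (Python as illustration, obviously not flight code):

            from collections import Counter

            def tmr_vote(a, b, c):
                # Triple modular redundancy: take the majority answer so a
                # single bit-flipped result gets outvoted. If all three
                # disagree, flag it instead of guessing.
                value, count = Counter([a, b, c]).most_common(1)[0]
                if count >= 2:
                    return value
                raise RuntimeError("no majority: possible multiple upsets")

            print(tmr_vote(42, 42, 43))  # -> 42, the flipped unit loses the vote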
      • by pz ( 113803 ) on Monday October 11, 2004 @01:09AM (#10490335) Journal
        what happens if the AI malfunctions? then mission control will get a bunch of useless error reports...

        [sarcasm] Yep, I'm positive that the hundreds if not thousands of PhD-level man-hours that went into this part of the project didn't consider that. Yep, it took that young whippersnapper Quasar1999 thinking about it for a few minutes to evaluate and assess the entire effort and proclaim, "it's a stupid self diagnosis test." [/sarcasm]

        If one actually reads the referenced article, it sounds like LV2 is, in fact, something far more advanced than a "stupid self diagnosis test." Self-diagnosis tests are pretty straightforward and highly tuned to a specific architecture. I've written something like that to evaluate an experimental compiler, with statements like,

        define a=1;
        if (a+a eq 2) then print 'simple addition works'

        But LV2 is very different from that. Into LV2 (which, despite the hype in the article, does not need to be on-board) is built a generic model of satellite functionality customized to the particular device in question. When unexpected results are found, the diagnostic software can experiment on the model, asking questions like, "if, in the model, valve G34 is stuck open, does the model behavior match the current anomalous condition?" I'm sure it wouldn't be hard to write up a test script that could iteratively simulate a fault in one or more parts of the system until it found a handful of likely candidates. Given that there are thousands of components in a satellite, this surely can be done faster by a machine than by a human. Then, were we really trying to do something advanced, we might come up with a way of caching these results to guide future diagnoses and build up a set of experiences. Collect these experiences from different projects (since, if LV2 and its descendant software is widely adopted, the data are presumably in common form), and you can guide designs of future satellites to avoid common failure modes, or identify problematic components.
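        A toy version of that iterate-and-compare loop (the model and component names here are invented for illustration):

        # Simulate single faults against a model of the spacecraft and keep
        # the ones that reproduce the observed anomalous readings.
        def simulate(faults):
            # Stand-in for the real model: map a set of faults to the
            # sensor readings the model predicts.
            tank_pressure = 100
            if "valve_G34_stuck_open" in faults:
                tank_pressure = 40
            if "pressure_sensor_dead" in faults:
                tank_pressure = 0
            return {"tank_pressure": tank_pressure}

        COMPONENTS = ["valve_G34_stuck_open", "pressure_sensor_dead"]

        def candidate_faults(observed):
            return [f for f in COMPONENTS if simulate({f}) == observed]

        print(candidate_faults({"tank_pressure": 40}))
        # -> ['valve_G34_stuck_open']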

        Now, is that AI? Does it think? You probably wouldn't say so. Could it be an aid to ground-based support? You betcha. Is there a reason to disrespect the fine engineers at NASA by demeaning their efforts without giving fair due? I fail to see one.
    • by Black Parrot ( 19622 ) on Sunday October 10, 2004 @07:06PM (#10488570)


      > ..in all seriousness, what happens if the AI system malfunctions?

      There's another AI to monitor the sanity of the AI. And another AI to monitor the sanity of the AI that monitors the sanity of the AI. And another AI to monitor the sanity of the AI that monitors the sanity of the AI that monitors the sanity of the AI. And another AI...

    • Someone call up Will Smith, post haste!
    • This [tvtome.com] is what happens.
    • by Brett Buck ( 811747 ) on Sunday October 10, 2004 @07:42PM (#10488778)
      Bingo - you have hit the nail on the head. These sorts of systems in aerospace applications are absolutely notorious for detecting proper (but off-nominal) operation as a failure, and then going off and reconfiguring a bunch of stuff unnecessarily. Or diagnosing real problems incorrectly, and either not helping or making things worse, or much worse.

      Even more importantly, the testing associated with these systems is very expensive and time-consuming - which means they don't really test it very well at all.

      I've seen similar systems in action in real space flights - and for the most part, it just makes things worse. If you were to limit yourself to simple things you really could detect, it would work out fine for the most part. But the tendency is to make it try to be a magic fixit device for any problem that comes up.

      In one case, I saw such a system deploy an appendage in conditions that resulted in the spacecraft structure being severely damaged. In another, it reconfigured every spacecraft system to the redundant unit in response to a trivial problem - when all that would have been required would have been to wait 20 minutes, then correct the trivial problem.

      Brett
    • by jfonseca ( 203760 ) on Sunday October 10, 2004 @08:09PM (#10488921)
      Misleading title?

      Title says :

      Software enables satellite self-service in space

      Paragraph 6 says :

      If the EO-1 does not respond properly to ASE control, then LV2 detects the error, makes a diagnosis and radios its analysis to mission control at Goddard.

      Conclusion: It's not self-fixing. It beams an analysis down to mission control, the crew can then take measures based on this analysis.
      • The software is supposed to handle "reconfiguration". If a component fails, the software can direct the system to work around the failed component so that the vehicle is still operational. I'm pretty sure that the Livingstone model has to be written to know how to work around a failed component. But with a proper model, it should be quite capable of determining when a component is broken, working around it, and continuing the mission without human intervention.

        Not so important for a satellite, but very important for deep-space missions, where help from the ground is a long light-lag away.
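        A toy sketch of that kind of fallback reconfiguration (hypothetical unit names):

        # When a component is flagged broken, fall back to the next unit
        # that provides the same function.
        UNITS = {"attitude": ["gyro_A", "gyro_B", "star_tracker"]}
        broken = {"gyro_A"}

        def select_unit(function):
            for unit in UNITS[function]:
                if unit not in broken:
                    return unit
            raise RuntimeError("no working unit for " + function)

        print(select_unit("attitude"))  # -> gyro_B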
    • ..in all seriousness, what happens if the AI system malfunctions?


      The same thing that always happens, EvilSS... the satellite will turn evil and try to destroy humanity.

    • ..in all seriousness, what happens if the AI system malfunctions? something like

      while true { kill all humans }

    • ".in all seriousness, what happens if the AI system malfunctions?"

      Then the system will report that the damage report machine has been damaged.
    • Hopefully they've programmed it with a sufficient understanding of the difference between Metric and Standard measurements....
  • by NarrMaster ( 760073 ) <dfordyce@OPENBSDmix.wvu.edu minus bsd> on Sunday October 10, 2004 @07:01PM (#10488534)
    "Just a moment...... Just a moment.....
    I've just picked up a fault in the AE-35 Unit.
    It's going to go 100 percent failure within 72 hours."
  • Yay! (Score:2, Funny)

    by IronMagnus ( 777535 )
    Skynet is online!
  • Obligatory (Score:4, Funny)

    by ShallowThroat ( 667311 ) on Sunday October 10, 2004 @07:03PM (#10488550)
    Luke: What's Wrong, R2!?
    R2-D2: Bleep bloop bloop bleep!
  • by mind21_98 ( 18647 ) on Sunday October 10, 2004 @07:06PM (#10488569) Homepage Journal
    Wouldn't obvious failures (like the failure of a sensor) be detected by Mission Control without LV2? Or is LV2 more along the lines of a troubleshooter application for your computer, where you specify the problem and it gives you advice?
    • There is much sense to what you say here. After all, it is easier to write/modify/debug/model a bunch of things on planet Earth than it is to do this with a little itty-bitty piece of software running in a probe.

      However, to do such modelling etc. on the ground typically means pumping a hell of a lot of diagnostic trace stuff to Earth for analysis. Likely more than can be accommodated on the link. For this reason, some AI stuff could help to identify the problems faster and allow further in-depth debugging on the ground.

    • Why bother, when Clippy will do? :)

      "Looks like you're trying to detect what's wrong. Would you like some help with this?"

      [ ] Yes
      [ ] No
      [ ] Animate!

  • by mveloso ( 325617 ) on Sunday October 10, 2004 @07:08PM (#10488586)
    When something goes wrong, the internal dialogue will go something like this:

    Management: why didn't the AI inform us of the problem?
    Contractors: oh, looks like we forgot to turn it on before launch. Sorry about that.
    Management: doh! Here's more money, don't do that again.
    Contractors: OK. We'll do something else wrong next time.
  • I am V-GER. (Score:3, Insightful)

    by philovivero ( 321158 ) on Sunday October 10, 2004 @07:09PM (#10488598) Homepage Journal
    Just wait until this thing collides with that other space probe that is programmed to remove organic entities from the outside of spaceships, and they combine their AI into a super-probe that is out to exterminate all life.

    Ooh! Scary!
    • Heh. You're thinking of Nomad?

      An Earth exploration probe that collides and combines with an alien planetary survey probe programmed to sterilize soil samples; the program mutates into "sterilize imperfection" (i.e., all life and ultimately itself). Somewhere along the way it acquires a 'perpetual' power source and becomes incredibly powerful.

      Yes, I'm afraid that I've seen that episode of Star Trek many times :)
  • by danwiz ( 538108 ) on Sunday October 10, 2004 @07:10PM (#10488605)
    It just seems to me that it would be better to install more sensors, data-gathering, and reporting capabilities and then leave the troubleshooting to the people on the ground. Payload is expensive, so why put the diagnostic end in orbit?

    Given the same data and placed groundside, it could then be tuned and upgraded more easily.

    I recall how the Mars lander had problems and the ground team worked out a novel solution. I'll bet they would have liked to have extra information to work with, instead of an onboard AI.

    BTW - I can understand this approach better for a long-range craft, just not an orbital satellite.
    • BTW - I can understand this approach better for a long-range craft, just not an orbital satellite.

      Seems like testing it on something in orbit would be a good idea before sending it off on a long-range mission with much less tolerance for failure. Just my opinion, though.
    • I would guess that this particular instance is just one step on the path to self treatment of faults. Naturally, accurate diagnosis is a first step in effective treatment.

      As for the value of this particular system, someone may have decided that the data needed to run the simulation would take up too much bandwidth to transmit back to Earth. Don't forget about blackout periods for craft orbiting the Moon, Mars, etc., and the long lags in comms. This increases the value of having it on board.

      -R

    • >It just seems to me that it would be better to install more sensors,
      >data-gathering, and reporting capabilities and then leave the
      >troubleshooting to the people on the ground. Payload is
      >expensive, so why put the diagnostic end in orbit?
      Uh... because this is software control, and as you put it yourself,
      payload (sensors, etc.) is expensive.
      And what if a problem occurs that prevents sending diagnostics
      to the ground?
      • "gives flight controllers several suggestions of what might have gone wrong"

        The current design only offers suggestions and doesn't implement any solutions to the problem. The "fix" is still made by the people on the ground.

        >> And what about if a problem occurs that prevents sending diagnostics to ground ?

        Since the AI isn't autonomous, they'd (still) be screwed. I'm sure there are many functions that are handled onboard first, such as the simple statement if (batteryFull) then turnOffCharging().
    • they're short on bandwidth.
  • Hoo boy (Score:3, Funny)

    by DrMrLordX ( 559371 ) on Sunday October 10, 2004 @07:11PM (#10488610)
    Let's just hope they don't call the AI "SHODAN". If so . . . uh, anyone here good with a lead pipe?
  • My god... (Score:3, Funny)

    by BluRBD!E ( 627484 ) on Sunday October 10, 2004 @07:14PM (#10488620)
    14 replies so far, and over 50% include gratuitous Skynet/HAL references... all we need now are some hot grits, Soviet Russia, Natalie Portman, and the steps to profit!
  • by ravenspear ( 756059 ) on Sunday October 10, 2004 @07:14PM (#10488623)
    Will it see dead people?
  • by aicrules ( 819392 ) on Sunday October 10, 2004 @07:17PM (#10488632)
    Sure, any automated response can be called AI, but this doesn't impress me.

    If all an AI module can do is make objective suggestions, it's nothing more than a list of conditional statements. Whoopideedoo!

    I can run similar "AI" on my TI-85. And I could write it all from scratch in the time it takes for a launch vehicle to reach the stratosphere.

    The web servers of 10 years ago could "suggest" that an "Object may have moved", so is that artificial intelligence? I guess it's really, really dumb AI....
    • Actually, it appears to do more than analyze using conditional statements. Finding an error, calculating the likelihood of a problem, and things like that smell like fuzzy logic to me, slightly more intelligent than a decision tree.
      From the article, also:
      LV2 will decide the best way future missions with subsystem failures can continue and still achieve goals.
      That, especially, seems a little more intelligent. Though I do agree that calling this AI is jumping the gun - it'd be nothing more than a basic neural net.
      • I worked on a project very similar to this at JPL a little over a year ago. You specify the entire system in a language. We then converted the specification into a large conjunctive normal form (CNF) boolean statement. The CNF form was converted into something called decomposable negation normal form (DNNF), which lets you find satisfiable interpretations in linear time with respect to the DNNF structure. You assign sensor readings and expected states to the terminals of the DNNF tree and then do SAT to find the diagnoses consistent with what you observe.
        • DNNF for diagnosis - that's Adnan Darwiche. What was the biggest number of components you could diagnose in reasonable(?) time? If you could compile your model, I reckon you used strong fault models? What was this project, BTW?
          • We were using Adnan Darwiche's papers. I don't have any figures on component count vs diagnosis time. I did some porting from Lisp to C for DNNF structure evaluation (for some reason C is more trusted than Lisp on autonomous spacecraft), and worked on building models for interferometry telescopes.

            I could probably ping my manager and get some data on the model sizes if you are interested.
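            For anyone curious, here's a brute-force toy of the consistency-based idea; the real pipeline compiles the model to DNNF instead of enumerating assignments, and these component names are invented:

            from itertools import product

            # Each component is healthy (True) or faulty (False). The model
            # says which sensor readings are consistent with an assignment.
            COMPONENTS = ["power_bus", "transmitter"]

            def consistent(health, observed):
                # Telemetry arrives only if both units are healthy.
                expect_telemetry = health["power_bus"] and health["transmitter"]
                return observed["telemetry"] == expect_telemetry

            def diagnoses(observed):
                results = []
                for bits in product([True, False], repeat=len(COMPONENTS)):
                    health = dict(zip(COMPONENTS, bits))
                    if consistent(health, observed):
                        results.append([c for c, ok in health.items() if not ok])
                # Prefer minimal diagnoses: fewest broken components first.
                return sorted(results, key=len)

            print(diagnoses({"telemetry": False}))
            # -> [['transmitter'], ['power_bus'], ['power_bus', 'transmitter']]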
    • by Tablizer ( 95088 ) on Sunday October 10, 2004 @08:34PM (#10489031) Journal
      If all an AI module can do is make objective suggestions, it's nothing more than a list of conditional statements. Whoopideedoo!

      During the "AI bubble" of the 80's, somebody complained that "any product with IF statements these days is claiming to be AI" (paraphrased). The definition of AI still has no consensus. Who knows, maybe human thinking could be modeled via a database full of IF statements.
      • Who knows, maybe human thinking could be modeled via a database full of IF statements

        Considering that's mainly the method of logic everyone thinks by... yes... yes, it can.

        Even for emotion.

        You can also think of it physically. Since our entire thought (let's not get into metaphysics) relies on the neuron connections in our brain, they all can be modeled as "IF" statements. "If this neuron fires, fire these ones as well."

        Basically it would be the ultimate spaghetti code.
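        In that spirit, a threshold neuron really is just one big IF (toy weights, not neuroscience):

        def neuron(inputs, weights, threshold):
            # Fire when the weighted input crosses the threshold,
            # otherwise stay quiet.
            if sum(i * w for i, w in zip(inputs, weights)) >= threshold:
                return 1
            return 0

        # "If these two neurons fire, fire this one as well."
        print(neuron([1, 1, 0], [0.6, 0.6, 0.6], 1.0))  # -> 1
        print(neuron([1, 0, 0], [0.6, 0.6, 0.6], 1.0))  # -> 0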

        I guess the complainers will only
      • Who knows, maybe human thinking could be modeled via a database full of IF statements.

        It is my opinion that, for many people, the "database" is nowhere near as full as it should be.
    • I can run similar "AI" on my TI-85. And I could write it all from scratch in the time it takes for a launch vehicle to reach the stratosphere.

      Go on then.
  • by AvantLegion ( 595806 ) on Sunday October 10, 2004 @07:40PM (#10488766) Journal
    Caboose: "A-I. What's the A stand for?"

    Church: "Artificial."

    Caboose: "....... what's the..."

    Church: "Intelligence."

    Caboose: "Ooooohhhh what was the A again?"

    OK, so RvB hasn't been obligatory, but come on, Star Wars and Simpsons quotes are getting freaking old. Let's move on

    • The thing is, that was copied from the Simpsons:

      Homer: "VIP? What's the I stand for?"
      "Important"
      Homer: "Oh, ok. What's the P stand for?"
      "Person"
      Homer: "... what's the..."
      "Very"

  • by Performaman ( 735106 ) <Peterjones@@@gmail...com> on Sunday October 10, 2004 @07:42PM (#10488775)
    Mission Control: Okay, now tell the probe to begin taking pictures.
    Probe: Bite my shiny, metal ass!
    Mission Control: Damn!
  • by geremy ( 18495 ) on Sunday October 10, 2004 @07:43PM (#10488779)
    NASA has plenty of systems like this, though this is the first I've heard of that runs onboard.

    They have similar programs for the Space Shuttle main engines that run on the ground. They were going to run them in the loop on the shuttle with a new box in the payload bay, but they decided against it. The box was going to have the capability to change certain engine parameters, but they figured it was too costly.
  • NASA log (Score:5, Funny)

    by neonstz ( 79215 ) * on Sunday October 10, 2004 @07:44PM (#10488787) Homepage
    NASA: Rotate 10 degrees
    EO-1: I'm sorry, Houston, I'm afraid I can't do that.
  • And in a few hundred years a group of bounty hunters aboard the Bebop will discover this satellite controlling other satellites in the sky and carving patterns in the desert.
  • Well...if that's already done, I might live to see Deep Space Nine completed!
  • satellite: Hello NASA, communication module is not working.
    NASA:
    satellite: Hello NASA, communication module is not working.
    NASA:
    satellite: Hello NASA, communication module is not working.
    NASA: No error reports from the satellite, everything must be working.

  • It's not AI (Score:5, Informative)

    by photon317 ( 208409 ) on Sunday October 10, 2004 @08:52PM (#10489102)

    I wish the whole world would stop misusing the term. Just because AI researchers have failed for decades to make any significant progress towards true artificial intelligence does not give them or the rest of the world license to water the term down and redefine the goals until it means virtually nothing.
  • Yes, this is interesting but can it sing Daisy?
  • It seems to me that there are two major problems.

    1. We are relying on NASA to come up with a model of how the thing is supposed to run. This is just giving NASA another chance to screw up their math or something... Smart move...
    2. Why have this software onboard? If all it's going to do is suggest fixes, run it from the ground. If it can only fix software, then why does it have to be on the actual craft? It seems to me that NASA can press the nuke Russia button, and then if the satellite doesn't nuke Russia, NASA knows there is a problem.
    • I am only a senior in high school.

      Aha!! *That's* why you sound so stupid.

      It seems to me that NASA can press the nuke Russia button, and then if the satellite doesn't nuke Russia, NASA knows there is a problem

      And who's going to be around to fix the problem if Russia bombs the %&@! out of you? Do you wait for your car's engine to lock up before changing the oil?

      Why have this software onboard? If all it's going to do is suggest fixes, run it from the ground.

      If you cared to read anything:

      "E0-1, laun

  • thick wit much? (Score:5, Informative)

    by MOMOCROME ( 207697 ) <momocrome&gmail,com> on Sunday October 10, 2004 @09:53PM (#10489468)
    The significance of this is quite substantial, despite the negative tone in the comments. This is a nuts-and-bolts implementation of Minsky-style strong A.I. and one of the first such systems to be put into production. Regardless of the limited domain, it is more sophisticated than the mere self-diagnostic routines it is being compared to by the oh-so-knowledgeable slashbot mindshare.

    This system both models the external world for consideration, just like our sense of imagination, and processes that information for purposes of survival, just like our sense of self-awareness.

    The great part of this is that it is being done by NASA, who are known for their lavish spending and attention to the entire system, particularly low-level details like the particulars of chip logic optimization, the shielding and structural stability, the operating environment, etc. This isn't meant to be a joke about bureaucracy and budget cuts, either: they have the top talent engineers in their stable despite all the politicking we hear about.

    From the decidedly negative tone in the comments, you'd think the tech-happy slashbots were actually opposed to such efforts. I think the real deal is that you guys are raised on sci-fi instead of science, and fail to grasp just how important this is. So what if it's not HAL9000 or Skynet? It might be a baby step, but it's a hell of a lot more than any of you are doing.

    • I still hold firm that this is not "AI", and shouldn't be called "AI". I personally think the definitive layman-readable works on the problems of AI are Hofstadter's infamous GEB and MMT books. And (again in my personal opinion, but of course I think I'm right) I will never consider any piece of software to be "AI" until it can prove at least some rudimentary capability to overcome the challenges noted in those books. For examples of some of these human-intelligence things: Always being able to quickly p
      • What you are trying to describe is some sort of consciousness, or more particularly sapience, which is our (human) flavor of consciousness. There are many other types of intelligence, artificial or not. Even those books you mention take pains to describe this, especially in the description of the ant colony problem from GEB.

        It would be a simple thing to crash an ant colony with recursion using only an eye-dropper full of the right pheromone(s). Does this mean the ant colony has no intelligence? Not at all. F
        • They just have a bug in the code, and your haughty dismissal of intelligent systems for lacking this capability smacks of some pretty cruel callousness.

          No, it smacks of not having thought through everything properly before making a comment, which is a common-enough problem here.

        • Without the benefits of what you're calling "sapience", AI is no more useful than a very complex algorithm. I challenge you to find any AI that exists today that cannot be replicated in functionality and coverage by a human-programmed software algorithm that doesn't pretend to be "AI".

          --Brandon
          • I'm of the opinion that even the much-vaunted consciousness of human beings can " ...be replicated in functionality and coverage by a human-programmed software algorithm", which sort of sidetracks the nature of your challenge.

            After all, your neurons don't just serve you all willy-nilly, just mashed together in a random heap. Each one of those brain cells is in a specific relationship with all their neighbors, and that relationship is the very essence of software.

            If the mindless forces of evolution can reac
  • Download (Score:5, Informative)

    by electricdream ( 413007 ) <altjeringa@gmail ... m minus language> on Sunday October 10, 2004 @10:15PM (#10489603) Homepage
    A few things went unmentioned. For those of you fighting about whether the system is AI or not, you can download the software yourself and argue about something more than conjecture...

    http://opensource.arc.nasa.gov/project.jsp?id=6

    Or if the code is too much to read (and there's a lot of it), you can always go to the Livingstone website.

    http://ic.arc.nasa.gov/projects/L2/doc/

    And yet another story on the same subject.

    http://ic.arc.nasa.gov/story.php?sid=193

    enjoy.
  • There have been a few comments about how this should be run on the ground, and it's a waste to be put on a satellite.

    Well, similar systems run on the ground now; we know how to do that. Why, in science, should we be content to continue doing things we know how to do? That may sound like a joke, but we need to do this in order to progress.

    This stuff, and future versions, will be essential for long range human missions, but it has to be tested now. The bugs should be worked out by running it close by.

    Th
  • ... we just need the AI from Pong. That other paddle was one smart mofo. That should cover a satellite pretty well. Just think, if a meteor comes towards it, it will instantly move to intercept it and protect the Earth... unless the satellite is destroyed. Then we're just screwed. Player 0 : Computer 1
    • ... we just need the AI from Pong. That other paddle was one smart mofo. That should cover a satellite pretty well. Just think, if a meteor comes towards it, it will instantly move to intercept it and protect the Earth... unless the satellite is destroyed. Then we're just screwed. Player 0 :

      IF (ball_x > paddle_x)
      then paddle_x++
      elif (ball_x < paddle_x)
      then paddle_x--
      else become_self_aware()
  • Wouldn't the tax dollars be better spent if the AI were on a ground computer?
  • If the AI doesn't use its ability to actually execute decisions, and just makes suggestions, then it's pretty damn pointless for it to be on the ship vs. on the ground, isn't it? The only possible benefit would be at times when it's not possible to receive the whole history of what's going on, but if you don't trust the brain to make the right decision, then what good is it to ask what it thinks if you can't verify its data?
  • by naoursla ( 99850 ) on Monday October 11, 2004 @12:54AM (#10490269) Homepage Journal
    I worked on a project similar to this for NASA's interferometry telescopes at JPL a summer ago.
