
Safety Panel Has 'Great Concern' About NASA Plans To Test Moon Mission Software (arstechnica.com) 42

An anonymous reader quotes a report from Ars Technica: An independent panel that assesses the safety of NASA activities has raised serious questions about the space agency's plan to test flight software for its Moon missions. During a Thursday meeting of the Aerospace Safety Advisory Panel, one of its members, former NASA Flight Director Paul Hill, outlined the panel's concerns after speaking with managers for NASA's first three Artemis missions. This includes a test flight of the Space Launch System rocket and Orion spacecraft for Artemis I, and then human flights on the Artemis II and III missions. Hill said the safety panel was apprehensive about the lack of "end-to-end" testing of the software and hardware used during these missions, from launch through landing. Such comprehensive testing ensures that the flight software is compatible across different vehicles and in a number of different environments, including the turbulence of launch and maneuvers in space.

"The panel has great concern about the end-to-end integrated test capability plans, especially for flight software," Hill said. "There is no end-to-end integrated avionics and software test capability. Instead, multiple and separate labs, emulators, and simulations are being used to test subsets of the software." The safety panel also was struggling to understand why, apparently, NASA had not learned its lessons from the recent failed test flight of Boeing's Starliner spacecraft, Hill said. (Boeing is also the primary contractor for the Space Launch System rocket's core stage). Prior to a test flight of the Starliner crew capsule in December 2019, Boeing did not run integrated, end-to-end tests for the mission that was supposed to dock with the International Space Station. Instead of running a software test that encompassed the roughly 48-hour period from launch through docking to the station, Boeing broke the test into chunks. As a result, the spacecraft was nearly lost on two occasions and did not complete its primary objective of reaching the orbiting laboratory.

  • What about hardware testing? We don't need an Apollo 13 or Challenger to happen again.

    • No testing was needed to predict the Challenger explosion, only a quick read of the specs.

      • by gweihir ( 88907 )

        No testing was needed to predict the Challenger explosion, only a quick read of the specs.

        Indeed. That those seals were _not_ able to withstand the cold was completely clear. And what would happen on a burn-through on the wrong side was just as clear. If they had had a decision-making process back then that was not completely broken, that catastrophe would never have happened. Of course, such a process also needs somebody actually capable of making decisions, not a bunch of MBAs who have no clue how things work and do not even understand the statistics they base everything on.

        • The Challenger decision making process worked fine for many hours as Morton Thiokol and other SRB engineers said it was unsafe to launch based on the rules. It was corporate managers and politicians that broke the process by overriding the rules.

          • by gweihir ( 88907 )

            The Challenger decision making process worked fine for many hours as Morton Thiokol and other SRB engineers said it was unsafe to launch based on the rules. It was corporate managers and politicians that broke the process by overriding the rules.

            The quality of a decision making process is determined by its outcome. If it starts fine and then goes off the rails, it is fundamentally broken. And actually, this one was broken long before the managers and politicians got their hands in. The engineers never really explained what the actual risk was. A simple statement along the lines of "Probability of BOOM!: 30%" would have done wonders, but they never even managed to get close to it. One comment called it "Death by PowerPoint" back then, because the tec

        • by Agripa ( 139780 )

          Indeed. That those seals were _not_ able to withstand the cold was completely clear.

          No.

          https://www.nasa.gov/pdf/38204... [nasa.gov]

  • bought into something like "LabView will eliminate the need for programmers" and "we can do it in simulation"
    • by gweihir ( 88907 )

      bought into something like "LabView will eliminate the need for programmers" and "we can do it in simulation"

      Well, if the end product does not need to work, that may be an option. In anything that needs to work, or worse, will control large flying bombs, that is the very last thing anybody sane would do.

      • All sorts of people bubble up into leadership positions. There's actually a bias in favor of people whose mouths write checks their asses can't cash.

        I once had the misfortune of working with an MBA type who also held a doctorate in chemistry. Her solution to the low energy conversion efficiency of solar panels that was making something or other unviable (irrigation pumps maybe, it was a long time ago) was to "put pressure on the engineers" to make it happen.

        Because engineers are lazy and complacent, and
        • by gweihir ( 88907 )

          Indeed. Concentrated stupid at work. I bet she had no good engineers left after a while.

  • Here's the deal. Big project means lots of different software design teams. Not easy to design integrated testing across all software teams and mission elements. Lots and lots of politics involved. Chunking up "end to end" testing placates those pesky folks in middle management positions. They get to maintain control over their personal, little fiefdoms.

    Not saying that it's impossible to create effective, modular integrated testing plans. Just really, really hard to do it right. How do you account for the unknown unknowns?

    • I understand what you're saying, but Boeing has really flubbed their testing lately. The problems with Starliner were pretty blah in nature. They grabbed the wrong timestamp from a date field and used that as the mission timer and in flight they discovered that their thrusters were mapped to the wrong ports. Basically the craft was going to head the wrong direction if they had fired it up as the software was written upon launch. They silently fixed that one on orbit.

      These SLS launches are going to be $2 billion a pop. A few unit tests won't break the budget.
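      A hypothetical sketch of that mission-timer failure mode (illustrative values and toy code, not Boeing's actual software):

```python
from datetime import datetime, timedelta

def mission_elapsed_time(now: datetime, epoch: datetime) -> timedelta:
    """Mission Elapsed Time is just wall-clock time minus the liftoff epoch."""
    return now - epoch

# Hypothetical illustration: the capsule polls the booster for its epoch too
# early and latches the start of the terminal count instead of liftoff, so
# every time-tagged event downstream fires hours off schedule.
terminal_count_start = datetime(2019, 12, 20, 0, 30)   # illustrative values
liftoff              = datetime(2019, 12, 20, 11, 36)
now                  = liftoff + timedelta(minutes=31)

correct_met = mission_elapsed_time(now, liftoff)
buggy_met   = mission_elapsed_time(now, terminal_count_start)

print(correct_met)               # prints 0:31:00
print(buggy_met - correct_met)   # prints 11:06:00, the skew from one bad epoch
```

      The point is that a single wrong epoch read silently skews every downstream timed event, which is exactly the kind of thing only an integrated run catches.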

      • I'm shocked at this point that NASA is trusting Boeing to pull this off at all. I mean, I get that there are huge budgetary ties and massive pork going down, but at some point there should be a line in the sand that says, "uh, nope. That's enough fail, thanks."

        I miss being a little kid and being able to day-dream about space exploration without having to realize how much political bullshit goes down in our pursuit of it.

      • A few unit tests won't break the budget.

        I'm not sure if you're trying to be funny here (in which case I'm clearly missing the punchline), but "a few unit tests" are not what we're talking about. That's what you write for individual functions or small modules. It's literally the opposite of that: extremely large-scale integration tests.

        I agree that the system should have been designed with this in mind from the start, as part of the core specs and operational requirements. But let's not trivialize the complexity and expense of these sorts of fu
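        For concreteness, the difference sketched with hypothetical toy components:

```python
# A unit test exercises one function in isolation.
def clamp(value, lo, hi):
    """Limit a value to the range [lo, hi]."""
    return max(lo, min(hi, value))

def test_clamp_unit():
    assert clamp(150, 0, 100) == 100

# An integration test exercises components wired together; here a toy
# sensor-to-controller pipeline (hypothetical stand-ins, not flight software).
class Sensor:
    def read(self):
        return 150          # a raw, out-of-range reading

class Controller:
    def __init__(self, sensor):
        self.sensor = sensor
    def command(self):
        # The bug class integration tests catch: does the *combination*
        # of components behave, not just each piece on its own?
        return clamp(self.sensor.read(), 0, 100)

def test_pipeline_integration():
    assert Controller(Sensor()).command() == 100

test_clamp_unit()
test_pipeline_integration()
print("ok")
```

        Scaling the second kind of test up to a full launch vehicle, with real timing and real hardware in the loop, is where the expense lives.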

      • by Agripa ( 139780 )

        These SLS launches are going to be $2 billion a pop. A few unit tests won't break the budget.

        A few unit tests *did* break the budget, so management removed them.

    • Not saying that it's impossible to create effective, modular integrated testing plans. Just really, really hard to do it right. How do you account for the unknown unknowns?

      No, it's not impossible. But there's no substitute for fully integrated end-to-end testing, especially when lives are on the line. If you're talking a desktop application or some cloud deployment then sure, chunk it and keep your fingers crossed. But for a manned space launch? Hell no - test it properly, and test it multiple times.

    • by gtall ( 79522 )

      Schmoe: Doctor, what is you plan for my brain surgery?

      Doctor: My dedicated team has worked in their specialties for years and each is the best at what they do.

      Schmoe: Have they ever worked together on a patient?

      Doctor: Not necessary. Besides, we'd then have to hire specialists in end-to-end brain surgery. We find that expensive and do not feel it is necessary.

      Schmoe: I feel it is necessary.

      Doctor: I'm sorry, who is the patient here?

  • by RobinH ( 124750 ) on Tuesday October 06, 2020 @09:50AM (#60577186) Homepage
    The engineers have been playing too much Kerbal Space Program... if it doesn't work just hit "Revert to Launch" and try again. :)
  • Have these morons learned absolutely _nothing_? Well, maybe they will produce some nice fireworks at least.

    • Have these morons learned absolutely _nothing_?

      For a long time I thought NASA was the only part of the Federal government that worked. And then NASA killed seven astronauts.

      And then NASA killed seven more astronauts.

      I no longer believe NASA is the only part of the Federal government that works.
  • by bobbied ( 2522392 ) on Tuesday October 06, 2020 @10:59AM (#60577370)

    This is a discussion about risk management for the whole system, not software testing.

    The problem here is that it is hugely expensive to do a full end-to-end test, where you've hooked up all the hardware in flight-ready condition, can emulate every sensor, event, user input, and possible in-flight configuration as parts of the vehicle drop off, and have a test specification that encompasses all the possible fault conditions and contingencies needed to actually verify the software does what's necessary. You have to design, build, and verify this test stand, then build test scenarios to exercise the system and prove it works. This test stand would be many times more complex and more expensive than the flight vehicle. (Yes, I know how this works; I've designed, built, and maintained avionics test equipment for fighter aircraft.) Eventually, the cost and schedule impact of such a test stand exceeds the cost of a few test flights, especially for the transitory parts of the system, such as the booster. How much risk can you buy down doing this? How much risk are you willing to accept? Certainly it's not zero risk, so the debate rages over which risks are acceptable, and which are not.

    The problem here is that the "test" conditions are many and varied and the goal here is for the software to not do anything "stupid" or "ill-advised" (i.e. have any undesired behavior) in the face of weird or unexpected input conditions. I don't think you necessarily need a full vehicle test bed to prove this. I also don't believe that a full vehicle test bed would necessarily get you better software, or even better assurance that the software is good to go.

    But risk management is not an easy task, and testing life-critical software is a complex business. When a hard task meets a complex one, it gets very difficult to arbitrate. The vagaries of what constitutes a "good" software test strategy don't help here; the need for near-absolute assurance of the best safety we can get, consistent with the design safety margins, makes this a judgment call. But in my view, a full vehicle test bed may be a very bad idea, and testing subassemblies may be perfectly acceptable, especially for the transitory parts of the vehicle. I'd likely press for a test bed for the parts of the vehicle that spend large amounts of mission time coupled together (i.e. the command and service modules), but emulating the booster components seems reasonable to test launch software. Likewise with the booster, just split off and emulate the crewed portion. Then you'd produce a tree of test stands that get more and more specific to the module they are targeted to test, with emulation of the rest of the vehicle. That way, you build software, test at the lowest, smallest assembly possible, and migrate it up the test stand tree until you have a vehicle load that's fully tested and ready. No end-to-end test is necessary if you do your simulators right, with sufficient detail in the scenarios the simulators support, and a full end-to-end test may not buy down your risks any further.
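    The "tree of test stands" idea (same flight code, progressively more of the vehicle emulated) could be sketched like this, with hypothetical interfaces:

```python
from abc import ABC, abstractmethod

class Booster(ABC):
    """Interface the flight software talks to; behind it sits either real
    hardware on a stand or a pure software emulator."""
    @abstractmethod
    def thrust(self) -> float: ...

class EmulatedBooster(Booster):
    """Stand-in used on the crew-module test stand: replays a canned
    thrust profile instead of firing engines."""
    def __init__(self, profile):
        self.profile = iter(profile)
    def thrust(self) -> float:
        return next(self.profile)

def ascent_guidance(booster: Booster, steps: int) -> list:
    """Toy 'flight software' that only sees the Booster interface, so the
    identical code runs against hardware-in-the-loop or pure emulation."""
    readings = []
    for _ in range(steps):
        readings.append(booster.thrust())
    return readings

# On the test stand: feed the guidance code an emulated booster.
print(ascent_guidance(EmulatedBooster([100.0, 98.5, 97.0]), 3))
```

    The whole argument above then reduces to whether the emulators behind those interfaces are faithful enough, which is exactly the judgment call being debated.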

    I wonder what the formal software test specification requires here? Does anybody know?

    • How can software testing be too expensive for a project expected to cost $70,000,000,000? A few billion dollars is a drop in the bucket and ought to cover a complex test stand. If making sure the seventy-billion-dollar investment actually works isn't one of the higher priorities of the project, especially after Boeing demonstrated that it probably won't work if you don't test it properly, then the priorities are wrong.

      • Let me rephrase this. Too high means higher than other ways you could test the systems.

        If the cost of producing the software test environment exceeds some threshold, say the cost of a couple of launches, then you might find it would be cheaper to actually prove your software by flying it unmanned a couple of times. My point is that with a significantly complex hardware/software system, it's not going to take very long for it to become more cost effective to just fly a couple of missions. If you are se

    • the costs and schedule impact of such a test stand exceeds the cost of a few test flights

      Problem here is that there are hardly any "test flights" to speak of. On only its second launch, this rocket is planned to take people on a Lunar flyby. This is stupid.

  • Is not the word that I would use with Boeing.

  • by RogueWarrior65 ( 678876 ) on Tuesday October 06, 2020 @11:06AM (#60577400)

    Clearly, there is something to be learned by comparing how SpaceX does things and how pretty much everyone else does things. SpaceX might not have beaten Blue Origin to the first reusable landing, but since then, SpaceX seems to be way ahead of everyone else, particularly the stodgy old aerospace companies with a "proven track record".

    • by ytene ( 4376651 )
      You raise an interesting point about the clear difference between the approach SpaceX takes [which is "Achieve mission objective first, but then perform any testing we can as well"] and Blue Origin's approach of "Test, test, test until it works".

      But these aren't fair [equivalent] comparisons. When Jeff Bezos sent Elon Musk that "congratulatory" Tweet on December 22, 2015 ("Congrats @SpaceX on landing Falcon's sub-orbital booster stage. Welcome to the club!") he conveniently forgot that SpaceX had jus
    • Clearly, there is something to be learned by comparing how SpaceX does things

      Throw it on the launchpad and see if it explodes?

    • by Agripa ( 139780 )

      Clearly, there is something to be learned by comparing how SpaceX does things and how pretty much everyone else does things.

      There is no need to look to SpaceX. Instead look to how the Shuttle software was maintained; Feynman had great praise for that software development process.

      Oh, wait, the people who managed that were let go as non-essential and replaced with more important people.

  • Testing software is hard, and SLS is never going to fly anyway, so why bother? Even Charlie Bolden has admitted that - https://arstechnica.com/scienc... [arstechnica.com] From the start SLS was a pork project to give make-work jobs to former Shuttle contractors. So this doesn't matter.
    • Yeah, it sure seems that SLS is not intended to fly. It made a reasonable back-up to commercial crew, but now that pathway has worked out, with Falcon Heavy to boot. So SLS is the (more expensive) second option that turned out not to be needed. Time to drop it.
  • NASA had not learned its lessons from the recent failed test flight of Boeing's Starliner spacecraft,

    Just stop reading at 'Boeing'. It's all you need to know.

  • A moon landing in 2024 is an extremely aggressive goal. Why was it picked?

    In December 2017, President Donald Trump signed Space Policy Directive 1, authorizing the lunar campaign. [wikipedia.org]

    Trump wants a moon landing while he is president. It's that simple. NASA accepted the challenge because they will take any funding they can get under any circumstances.

    These safety concerns are a classic symptom of overcommitment to project timeline goals. The Boeing Starliner software failure is a glaring example. The fact tha

  • Remember the first flight of the Ariane 5? It blew up. Reason: they recycled sensors and software from the Ariane 4 but didn't test for the increased dynamics of the Ariane 5. Result: the value from a sensor overflowed, and things went south from there.

    End-to-end tests are there for a reason, else you'll get very expensive failures.
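    The failure was an unchecked conversion of a 64-bit float into a 16-bit signed integer. A rough Python sketch of logic originally written in Ada (values illustrative):

```python
def to_int16(value: float) -> int:
    """Narrowing conversion as in the Ariane alignment code: the result
    must fit a signed 16-bit register, or the conversion traps."""
    n = int(value)
    if not -32768 <= n <= 32767:
        # On Ariane 5 flight 501, this trap shut down both inertial
        # reference units, and the vehicle lost attitude data.
        raise OverflowError(f"{value} does not fit in 16 bits")
    return n

# Ariane 4's horizontal-velocity term always fit in 16 bits...
print(to_int16(20_000.0))        # prints 20000
# ...but Ariane 5's steeper, faster trajectory produced larger values.
try:
    to_int16(64_000.0)
except OverflowError as e:
    print("trap:", e)
```

    An end-to-end run with Ariane 5 trajectory data would have tripped this path on the ground instead of in flight.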

    • by Megane ( 129182 )
      And I remember Hubble. They also decided not to fully test to confirm that the mirror was up to spec. Fortunately it was precisely out of spec, and it could be "repaired" by giving up one of its four experiment module slots for corrective optics.
  • Margaret Hamilton facepalm
