
Boeing To Make Key Change in 737 MAX Cockpit Software (wsj.com) 211

Boeing is making an extensive change to the flight-control system in the 737 MAX aircraft implicated in October's Lion Air crash in Indonesia, going beyond what many industry officials familiar with the discussions had anticipated. From a report: The change was in the works before a second plane of the same make crashed in Africa last weekend -- and comes as world-wide unease about the 737 MAX's safety grows. The change would mark a major shift from how Boeing originally designed a stall-prevention feature in the aircraft, which were first delivered to airlines in 2017. U.S. aviation regulators are expected to mandate the change by the end of April.

Boeing publicly released details about the planned 737 MAX software update late Monday [Editor's note: the link may be paywalled; alternative source]. A company spokesman confirmed the update would use multiple sensors, or data feeds, in the MAX's stall-prevention system -- instead of the current reliance on a single sensor. The change was prompted by preliminary results from the Indonesian crash investigation indicating that erroneous data from a single sensor, which measures the angle of the plane's nose, caused the stall-prevention system to misfire, setting off a series of events that put the aircraft into a dangerous dive.



  • by ZorinLynx ( 31751 ) on Tuesday March 12, 2019 @03:35PM (#58262832) Homepage

    Why the hell wasn't this the case before?

    Aren't flight control systems supposed to be triple-redundant anyway? Everything I've read about them says they are: three systems, and if one produces incorrect data, it uses the two that agree.
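    Something like the classic 2-out-of-3 voter -- here's a toy sketch in C (my own illustration with made-up numbers, not anything from a real avionics system):

    #include <stdio.h>

    /* Hypothetical 2-out-of-3 voter: with three independent sensors,
     * the median value is unaffected by any single faulty reading. */
    static double vote_median3(double a, double b, double c)
    {
        if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
        if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
        return c;
    }

    int main(void)
    {
        /* Sensor 1 is stuck high; the voted value still tracks reality. */
        double voted = vote_median3(74.9, 5.2, 5.1);
        printf("voted angle-of-attack: %.1f degrees\n", voted);
        return 0;
    }

    One stuck or wildly-off sensor then can't drag the output anywhere on its own.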

    • by lgw ( 121541 ) on Tuesday March 12, 2019 @03:40PM (#58262888) Journal

      I assume they're talking about the sensor behind the pitot tube here. Making that the only sensor, and non-redundant, is particularly questionable. It's well known that pitot tubes are very easily thrown off: an insect building a nest inside one (or ice forming, etc.) will throw off the sensor enough to crash a plane, if it's all you rely on.

      • by geekmux ( 1040042 ) on Tuesday March 12, 2019 @03:44PM (#58262950)

        I assume they're talking about the sensor behind the pitot tube here. Making that the only sensor, and non-redundant, is particularly questionable. It's well known that pitot tubes are very easily thrown off: an insect building a nest inside one (or ice forming, etc.) will throw off the sensor enough to crash a plane, if it's all you rely on.

        I would assume you're correct here, but it still begs the question as to why this sensor was non-redundant, and how that SPOF design ultimately got approved.

        • Why the hell wasn't this the case before?

          ...

          I assume they're talking about the sensor behind the pitot tube here. Making that the only sensor, and non-redundant, is particularly questionable.

          I would assume you're correct here, but it still begs the question as to why this sensor was non-redundant, and how that SPOF design ultimately got approved.

          I am baffled as to why, if the problem had been identified, the planes weren't grounded until the software fix was implemented.

          Alternate source:
          https://www.morningstar.com/ne... [morningstar.com]

            • Do we actually know for certain that this second crash has the same cause as the first? I have not seen anything official. However... they weren't grounded the first time because the first crash was the result of a hardware failure and the pilots not properly responding to the failure. So, two failures. Note that I don't say it was the fault of the pilot. The issue was the documentation and training provided by Boeing was insufficient to enable the pilots to identify and respond to the original failure in a
            • They changed the system acronym (much has been made of this), but the response to 'runaway trim' remained throwing the same two switches. That's been on the troubleshooting checklist of Boeing airplanes forever.

              The second crash was reportedly trailing fire before impact. Eyewitness, so take it with a huge grain of salt.

              Also the copilot is reported to have 200 hours? 200 hours total and flying a multi-engine commercial jet? I think it has to be 200 hours in the type.

          • by uncqual ( 836337 ) on Tuesday March 12, 2019 @06:38PM (#58264306)

            My lay person's understanding...

            In order to increase fuel efficiency on the 737 MAX, the engine fan diameter was increased. These "underwing" engines would have been too close to the ground if mounted as on other 737 models, so the engineers moved the engines forward and upward to achieve the necessary ground clearance. This, along with some other changes, moved the center of thrust forward, which made the plane more prone to lift its nose too high and stall.

            To guard against this, Boeing introduced the Maneuvering Characteristics Augmentation System (MCAS), which activates automatically when the autopilot is off under certain conditions, including when the angle of attack (AOA) is too high. When needed, MCAS attempts to prevent a stall by trimming the horizontal stabilizer to push the nose down, and will do this for, I believe, about 10 seconds or until the pilot overrides it or the angle of attack is back within limits. If the pilot activates the trim control switch on the yoke, MCAS will be disabled -- but, five seconds after the switch is released, MCAS will reengage if the conditions call for it (esp. AOA). When MCAS is altering the trim, the manual trim wheels on each side of the center "console" will be spinning away and, if a pilot looks down, they will see that motion, as there is a white stripe extending outward from the center to make the movement obvious.
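            For anyone trying to picture the timing, here's a rough sketch in C of the cycle as I understand it (entirely my own illustration of the description above; the names, thresholds, and times are placeholders, not anything from Boeing):

            #include <stdbool.h>

            /* Rough illustration of the described MCAS cycle -- NOT actual Boeing logic.
             * Thresholds and times are placeholders taken from the description above. */
            #define AOA_LIMIT_DEG        15.0   /* placeholder stall-risk threshold */
            #define MCAS_RUN_SECONDS     10.0   /* "about 10 seconds" per the description */
            #define REENGAGE_DELAY_SECS   5.0   /* re-arms 5 s after trim switch release */

            struct mcas_state {
                double run_timer;      /* seconds left in current nose-down trim command */
                double reengage_timer; /* countdown after pilot releases the trim switch */
            };

            /* Called periodically with current inputs; returns true while the system
             * would be commanding nose-down stabilizer trim. */
            bool mcas_step(struct mcas_state *s, double dt, double aoa_deg,
                           bool autopilot_on, bool flaps_up, bool pilot_trim_switch)
            {
                if (pilot_trim_switch) {              /* pilot trim input pauses the system */
                    s->run_timer = 0.0;
                    s->reengage_timer = REENGAGE_DELAY_SECS;
                    return false;
                }
                if (s->reengage_timer > 0.0) {        /* wait out the 5 s after release */
                    s->reengage_timer -= dt;
                    return false;
                }
                if (!autopilot_on && flaps_up && aoa_deg > AOA_LIMIT_DEG && s->run_timer <= 0.0)
                    s->run_timer = MCAS_RUN_SECONDS;  /* start another nose-down trim run */

                if (s->run_timer > 0.0) {
                    s->run_timer -= dt;
                    return true;                      /* commanding nose-down trim */
                }
                return false;
            }

            Note how a stuck-high AOA value keeps restarting the nose-down runs every time the pilot's trim input times out -- which is the vicious loop described further down.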

            The best speculation I've heard about the Lion Air crash was that there was a problem with one of the AOA sensors. There are two such sensors -- one on each side of the 737 MAX.

            As in most crashes, due to the redundancy of systems and procedures, it's rarely one thing that causes a crash but rather a cascade of events.

            There had been problems with at least one of the AOAs on previous flights but maintenance attempts appear not to have solved the problem. So, first there was a failure of maintenance, but of course AOA sensors will fail from time to time, so one can't blame the crash on that failure.

            I've not heard how MCAS handled conflicting AOA sensor readings, but I suspect this is one of the big areas of change that they will push in the April "patch". It's likely that the failing AOA sensor caused MCAS to activate when it shouldn't have and push the nose down by adjusting the trim -- pushing the plane's nose down too far. When the pilots tried to correct, they ended up disabling MCAS (although perhaps not explicitly aware that they were doing so) only to have it start undoing what they had accomplished five seconds after they released the trim control on the yoke -- a vicious loop.

            Had the pilots recognized what was happening, they simply would have run the "runaway trim" procedure (which would have disabled MCAS and some other automatic trim controls completely via a switch on the center "console") and flown the plane manually with no problems. Unfortunately, the pilots likely didn't figure out what was causing the problem and failed to execute the necessary procedure. So, that was a pilot error (and that's probably what will be determined to be the main problem here, with contributing factors).

            There is much debate on why the Lion Air pilots may have failed to recognize what was going on. Many pilots and their union claim that they were not told about the existence of MCAS. Boeing hasn't been talking a lot, but they seem to assert that there was no need to train the pilots on MCAS beyond what the manuals/training did as it was a classic "runaway trim" scenario and the training was sufficient to cause the pilots to detect that case and initiate the proper procedure. Boeing did, however, issue documentation updates to operators worldwide soon after the Lion Air crash.

            After Boeing issued the documentation updates, every 737 MAX pilot should have been fully aware of MCAS and what to do if it was doing the wrong thing. This, coupled with the witness reports that the Ethiopian Airlines 737 MAX that crashed was spewing smoke and fire from the back of the plane a

            • by dgatwood ( 11270 )

              The best speculation I've heard about the Lion Air crash was that there was a problem with one of the AOA sensors. There are two such sensors - one on both side of the 737 Max.

              One problem is that, if I understand correctly, not all of the 737 aircraft have even so much as an indicator light when the two AOA sensors disagree. At least one airline (Southwest) insisted on an explicit AOA indicator so you can see both AOA sensors' data and see how much they disagree. But if you don't have that and don't have the indicator light, all you know is that the aircraft keeps trimming the nose down every few seconds. One might still arguably call it pilot error to not recognize the sympt

            • Boeing added the MCAS mechanism to helpfully (Clippy-style) push the nose down when a sensor detected a risk of stalling. The mechanism used a single sensor (per side), was not obvious to disable, and kept re-engaging against the pilot. And marketing claimed costly retraining wasn't needed? Sounds like a major fuckup.
              This article from last year suggests the pilots were already pretty pissed off about the last incident: https://christinenegroni.com/7... [christinenegroni.com]

            • by Strider- ( 39683 )

              So, that was a pilot error (and, that's probably what will be determined to be the main problem here, with contributing factors).

              As someone who's a technical trainer (in a different transportation field, but still mission critical), this sounds to me like a design failure compounded by insufficient training, rather than pilot error. Training is incredibly important, but it also shouldn't be making up for poor design choices.

              • by uncqual ( 836337 )

                Boeing's contention seems to be "this looked like runaway trim; detect that and follow the runaway-trim procedure". If that bears out, then I think it's primarily pilot error with, likely, significant contributing factors of design and/or training deficiencies.

                Obviously the design could have been "better" (else, why update the software - except for political reasons), but that doesn't mean it's necessarily a "failure".

            • I'll throw in a side issue here. The changes made to the 737 came within a gnat's testicle of requiring Boeing to declare the 737 MAX a new model of plane. Keeping the old model number helped avoid that. As such, they haven't had to do full testing to have it declared "flight ready", but rather use grandfathered flight approvals and just minor testing.

              The 737 MAX should have been a new model.

              As I said before, it would have required new testing. Add to this new documentation and new training for pilots. It simply would not have been possible

            • A plane that can turn an expert pilot into an "idiot" is not one I wish to fly in.

          • I am baffled as to why, if the problem had been identified, the planes weren't grounded until the software fix was implemented.

            On my keyboard, you press shift and 4 together.

        • Re: (Score:3, Informative)

          It raises the question, does not beg the question.
      • I was under the impression that each of the 3 redundant systems had its own set of sensors. Sounds like this particular system is not redundant and that they are now going to derive data from other sensors to compensate. I would feel a lot better if MCAS, or any other system that takes control of flight, were also triple redundant.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        This is what happens when you stop expecting your companies to compete on the free market and instead protect them with a combination of de facto and real-terms state aid, such as trying to destroy competitors like Bombardier with illegal trade actions.

        As soon as you let your companies stop competing and instead give them a position of immunity, deem them too big to fail, and no longer require them to compete on the free market, then they'll get lazy, they'll get incompetent, and shit like this will

    • Yes, this is absolutely bananas. Even the accelerator pedal position sensor on cars with throttle-by-wire is a pair of pots, not just one. If one sweeps smoothly and the other doesn't, the PCM throws a code and only listens to the smooth input.
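      The plausibility check itself is dead simple -- something like this (a hypothetical sketch with a made-up tolerance; real PCM code obviously does more, e.g. handling an inverted or half-scale second track):

      #include <math.h>
      #include <stdbool.h>

      /* Hypothetical dual-potentiometer pedal check; both tracks assumed 0..1. */
      #define MAX_DISAGREE 0.05   /* placeholder plausibility limit */

      /* Returns true (so a trouble code can be set elsewhere) when the two tracks
       * disagree by more than the limit; otherwise reports the averaged position. */
      bool pedal_fault(double pot_a, double pot_b, double *position_out)
      {
          if (fabs(pot_a - pot_b) > MAX_DISAGREE)
              return true;                      /* implausible -- limp home, set code */
          *position_out = 0.5 * (pot_a + pot_b);
          return false;
      }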

      • by Shotgun ( 30919 ) on Tuesday March 12, 2019 @04:34PM (#58263342)

        The question here is why is the computer listening to a sensor instead of the pilot. A plane can be flown just fine without any instrumentation other than the front window. Why does that sensor get to override the pilot?

        • The question here is why is the computer listening to a sensor instead of the pilot. A plane can be flown just fine without any instrumentation other than the front window. Why does that sensor get to override the pilot?

          A car can be driven just fine with no information but the window view and the butt dyno, but the [mandatory] ESP system will still start fucking with your brakes if the accelerometer says that you're yawing in a way that isn't called for by the steering angle sensor. The answer to the question of why is the same in both cases: assistive technologies. When everything is working correctly, the vehicle is much better than you are at figuring out what is happening. Normally, as has been pointed out several t

        • Only a tiny general aviation aircraft can be flown that way. And even then it can only be flown that way at low altitude and full visibility because the vestibular system doesn't work correctly during flight. An airliner must be flown using instruments.

          https://en.m.wikipedia.org/wik... [wikipedia.org]

          And don't delude yourselves that you are special and would be able to feel your position correctly.

          • by Strider- ( 39683 )

            Well, airliners should be flown with instruments. But sometimes you can't, such as what happened with the Gimli Glider (a 767 that ran out of fuel at altitude). The only instruments they had left after the fuel ran out were the pneumatic airspeed indicators and the barometric altimeter, both of which are purely mechanical devices. The pilot landed it safely, and the aircraft spent roughly another 25 years in revenue service after being refueled and given minor repairs (due to a collapsed nose wheel).

            • There had to be more backup instruments powered by the ram air turbine or the batteries than just two. At the very least a compass, an artificial horizon and a bank and turn indicator.

      • by uncqual ( 836337 )

        There are two AOA sensors on the 737 MAX - one on each side. The erroneous one may give a rational, yet wrong, signal. However, I suspect that the Boeing "patch" will add cross checking and perhaps more explicit alerts to the pilots when something seems "off".
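        Such a cross-check could be as simple as this (purely illustrative, with an invented threshold; I have no idea what Boeing's actual patch will do):

        #include <math.h>
        #include <stdbool.h>

        #define AOA_DISAGREE_LIMIT_DEG 5.5   /* placeholder disagreement threshold */

        struct aoa_result {
            bool   disagree;      /* drive an "AOA disagree" annunciator */
            bool   inhibit;       /* stand the automatic trim function down */
            double best_estimate;
        };

        /* Cross-check the left and right vanes; on disagreement, alert the crew
         * and stop trusting either value for automatic trim. */
        struct aoa_result aoa_crosscheck(double left_deg, double right_deg)
        {
            struct aoa_result r;
            r.disagree      = fabs(left_deg - right_deg) > AOA_DISAGREE_LIMIT_DEG;
            r.inhibit       = r.disagree;
            r.best_estimate = 0.5 * (left_deg + right_deg);
            return r;
        }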

    • by bobbied ( 2522392 ) on Tuesday March 12, 2019 @04:21PM (#58263234)

      Why the hell wasn't this the case before?

      Aren't flight control systems supposed to be triple-redundant anyway? Everything I've read about them says they are; three systems and if there is incorrect data it uses the two that agree.

      Well.. I believe the way the system works, the control inputs of the pilots are able to overcome anything the system does. It's basically like an autopilot, where the pilot can override the system by applying pressure to the controls. This system is designed to apply back pressure as the aircraft approaches a stall, making it harder for the pilot to continue to increase the angle of attack and hopefully avoiding the stall. So you can still stall the aircraft; just pull harder and keep increasing the AOA...

      The problem, though, is that pilots are conditioned to change the trim to deal with unusual pressures for the desired pitch angle. So if the system believes the sensor and it's saying "STALL" when you are actually not stalling, the system applies pressure to lower the nose, which the pilots will be conditioned to trim out. If the stall indication doesn't go away, the system keeps the pressure there, and unless the pilots realize what's going on they will keep trimming nose up. Eventually, the process ends up with an aircraft that's severely out of pitch trim, which will be very confusing to the pilots, with really high control pressures required to do anything to the pitch. Thus "control problems" seems to describe exactly what I imagine was going on. It was a vicious cycle that makes the aircraft really hard to control.

      So, I understand the engineering of using one AOA sensor. Kind of makes sense... Hey, the pilots can just override this anyway; we aren't stopping them from actually stalling the aircraft, just making it harder to do. We've done this before in fighter aircraft and other fly-by-wire systems without any problems. But I think there wasn't enough thought given to what happens when that sensor fails. If they can implement some cross-checks between airspeed, rate of climb, and rate of turn, they might be able to more gracefully fail the system and disable it, or at least not get into the vicious cycle that leads to a pitch trim issue.
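      Something along these lines (my own sketch with invented numbers, not any certified logic; just the idea of distrusting the vane when the rest of the air data disagrees):

      #include <stdbool.h>

      /* Illustrative sanity cross-check of a "stall" AOA reading against other
       * air data -- invented thresholds, not anything from a real system. */
      #define STALL_AOA_DEG     15.0
      #define SAFE_MARGIN_KNOTS 30.0   /* comfortably above computed stall speed */

      /* If the vane says "stall" while airspeed is well above stall speed and the
       * airplane is still climbing, distrust the vane and fail gracefully
       * (disable the automatic trim function, annunciate to the crew). */
      bool aoa_reading_suspect(double aoa_deg, double ias_knots,
                               double stall_speed_knots, double vertical_speed_fpm)
      {
          bool says_stall     = aoa_deg > STALL_AOA_DEG;
          bool fast_enough    = ias_knots > stall_speed_knots + SAFE_MARGIN_KNOTS;
          bool still_climbing = vertical_speed_fpm > 0.0;
          return says_stall && fast_enough && still_climbing;
      }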

      • by caseih ( 160668 ) on Tuesday March 12, 2019 @08:27PM (#58264894)

        The MCAS spins the same trim wheels that the pilot spins. So the pilot can trim the nose back up after MCAS spins it down. They might fight each other, but ultimately they are both adjusting (and potentially undoing) the same thing. I'm sure it's initially confusing to pilots, especially because older planes would cancel the automatic trim when the stick was pulled back on, but apparently this is not the case with MCAS. If it turns out that MCAS contributed to the Indonesian crash, then it was a matter of training. But Boeing screwed up the design.

        • I think that pulling the stick back disables it - but only temporarily. It waits till the pilots think the problem has gone away, then starts shoving the nose down again.

          Why anyone could think this is better is beyond me.

    • by ceoyoyo ( 59147 )

      Technically this system is supposed to assist the pilots in avoiding or recovering from a stall, compensating for poorer aerodynamics on the MAX. It might have gotten approved without the normal redundancies because it only assists the pilot. I know a few other industries where that excuse flies....

    • by Solandri ( 704621 ) on Tuesday March 12, 2019 @04:34PM (#58263336)
      Usually there are 3+ pitot tubes. Looks like the 737 has 5 [stackexchange.com], with 3 of them dedicated to measuring airspeed. It's incredibly rare that a single fault causes a crash. Reporters just like to write up their stories that way to give them more impact, even if it twists the truth.

      This isn't the first time faulty airspeed readings fed to a flight computer [wikipedia.org] have led to a crash. It isn't even the second time [wikipedia.org]. In all previous cases, the plane was flyable. It was the confusion as the pilots tried to diagnose the problem, based on the bizarre behavior of the plane and the flight control software and alarms, which doomed the flights. It requires a deep and thorough understanding of when different flight protection modes in the software are triggered and kick in, to work backwards from the behavior you're seeing to what problem(s) could be triggering those modes. If you've debugged software, you've encountered this. Unlike natural laws like physics, software can be designed arbitrarily. So your intuitive feel for how things should work becomes useless for tracking down the problem. You're totally dependent on how thoroughly you understand the software's arbitrary design.

      Bear in mind that the stall warning is pretty much a "you're gonna die if you ignore me" warning. So it takes quite a bit of convincing before pilots will decide it's the warning that's faulty, not something else that they're doing wrong. That may be the cause of the reluctance of pilots to simply shut it off and fly the plane "by the seat of their pants" based on the throttle settings, altitude, and attitude. So while theoretically the stall warning triggering incorrectly is a recoverable problem, it may take pilots a long time to diagnose and clear up the problem. Long enough for the plane to crash.
      • Okay, let's suppose that some or all of the stall sensors are malfunctioning. There's another sensor that the computer can look at, and that's the altitude. If the ALTITUDE is rapidly falling, of course the plane might think, "See, I was right about this stall!" But there's one more thing. Namely, if the pilots pulled the stick back and the altitude stops falling, the plane should now have enough information to figure out that pushing the stick forward is not the right thing to do.

        So it seems like the plane shou

        • by BostonPilot ( 671668 ) on Tuesday March 12, 2019 @05:36PM (#58263904)

          No, you're trying to grossly oversimplify the problem, and it's causing you to say things that are silly.

          Having worked as a vendor to the avionics group at Boeing, and having had a student who wrote test code for the 777, I can tell you that the testing / verification process for their software is mind boggling. They've had decades to fine tune their processes for creating reliable computer software. Believe me, you sound idiotic second guessing them, and it doesn't sound like you're a pilot either...

          The one thing I will agree with you about is that the system should trust the crew. However, I must say that some of my airline captain buddies would strongly disagree with that. Just look at Air France Flight 447 as a perfect example of why trusting the crew can go wrong. However, I still lean towards this... if you don't trust the crew then it's like the old joke about the perfect crew:

          The ideal flight crew is a pilot and a dog.

          The pilot is there to feed the dog, and the dog is there to bite the pilot if he touches anything.

          Seriously, if the automation is so complicated and opaque that the crew can't tell what it's doing and why... that's a problem. The move towards more automation seems to be to make up for an inexperienced crew... I think more training / sim time is the right solution, not more automation. Still, both Airbus and Boeing seem to think more automation is the right way to go.

          I'll be interested to hear what they learn from the FDR...

          • by dgatwood ( 11270 )

            Okay, let's suppose that some or all of the stall sensors are malfunctioning. There's another sensor that the computer can look at, and that's the altitude. If the ALTITUDE is rapidly falling, of course the plane might think, "See, I was right about this stall!" But there's one more thing. Namely, if the pilots pulled the stick back and the altitude stops falling, the plane should now have enough information to figure out that pushing the stick forward is not the right thing to do.

            No, you're trying to grossly over

          • by K. S. Kyosuke ( 729550 ) on Tuesday March 12, 2019 @07:20PM (#58264536)

            I can tell you that the testing / verification process for their software is mind boggling. They've had decades to fine tune their processes for creating reliable computer software.

            Haven't we had ample evidence by now that it's all too easy to make computer software that very reliably and very accurately does exactly the wrong thing?

            • I would say that the evidence is that software is incredibly difficult to get 100% right, so that it will do the correct thing under all circumstances. Companies like Boeing are incredibly good at the job, and yet even they get stuff wrong.

              What I'd like the average slashdot reader to understand is that it's bogus to think that there are simple answers to a lot of the issues. An incredible amount of thought goes into the process. I worked on avionics software and I can tell you that the average software engi

          • No, I'm not oversimplifying, or at least you have not shown how. If the plane is dropping, the plane knows this. If every time the pilot pulls back on the stick and overrides the automatic dive the plane goes up, the plane knows that too. So the plane has the info it needs to make a better decision. Show me how I'm wrong.
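            To make it concrete, here's roughly what I mean (just a sketch of the idea with arbitrary numbers, ignoring all the certification questions it would raise):

            #include <stdbool.h>

            /* Illustrative "stop fighting the pilot" heuristic: if the pilot has
             * countered several consecutive automatic nose-down commands and the
             * altitude recovered each time, treat the triggering sensor as suspect
             * and back off. Numbers are arbitrary; this is the idea only, not real logic. */
            #define MAX_COUNTERED_COMMANDS 3

            struct fight_monitor { int countered; };

            bool automation_should_back_off(struct fight_monitor *m,
                                            bool pilot_countered_last_command,
                                            bool altitude_recovered)
            {
                if (pilot_countered_last_command && altitude_recovered)
                    m->countered++;
                else
                    m->countered = 0;
                return m->countered >= MAX_COUNTERED_COMMANDS;
            }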

        • Stall warning happens before the plane loses altitude. The idea is not to lose altitude.

          The angle of attack is NOT the orientation of the plane. It is the orientation of the plane measured relative to the airflow direction. Without knowing the airflow you cannot estimate it. That is why there is a sensor outside (basically a weather vane with instrumentation to measure its orientation with respect to the plane's axis).

          Again why they rely on a single sensor, why they did not make this critical sensor re

        • by twosat ( 1414337 )

          The opposite happened on XL Airways Flight 888, an acceptance flight for an Air New Zealand Airbus A320 going back to ANZ after a lease to XL Airways Germany. The Airbus A320's computers noticed the conflicting readings from the sensors and put the pitch trim into manual mode. The pilots didn't notice the warning on a screen and were relying on the flight computers to prevent the plane from stalling. https://www.youtube.com/watch?... [youtube.com]

    • by gweihir ( 88907 )

      Naa, that is the old thinking. The new thinking is that it must be as cheap as possible, profits must be maximized, and if it goes wrong, blame the young and inexperienced engineers who did not have the guts to give management a clear "no". Also, do not tell the pilots about the crap engineering you put in there; they may refuse to fly that thing otherwise.

  • msmash: that alternative link has even less useful information than the truncated wsj article.

  • Obvious (Score:5, Funny)

    by Anonymous Coward on Tuesday March 12, 2019 @03:43PM (#58262928)

    if ( goingToCrash ) {
            dontCrash();
    }

  • by jfdavis668 ( 1414919 ) on Tuesday March 12, 2019 @04:03PM (#58263124)
    That would have been prevented by the current system.
    • by gweihir ( 88907 )

      Probably. The whole thing is a mess, these engines have no business being on that plane. Add an apparently completely incompetent belief that software can fix anything and you get a lot of dead people, all for profit optimization.

  • Additional sources (Score:5, Informative)

    by Anubis IV ( 1279820 ) on Tuesday March 12, 2019 @04:43PM (#58263426)

    Since the alternative source link in the summary appears to link to an article about stock prices, here are some alternative alternative links that actually contain relevant information:
    - Boeing press release [mediaroom.com]
    - Gizmodo [gizmodo.com]
    - Washington Post [washingtonpost.com]

    • by nyet ( 19118 )

      The /. editors are trash. Betcha $100 they never fix the link in the summary. They're completely incompetent.

  • Are they going to enable pilots to stop MCAS from nose-diving the plane by pulling up on the yoke, too?
    • by Strider- ( 39683 )

      It does, but as soon as they let go, the MCAS kicks in again, because it's still active, so if the pilot doesn't catch what's going on, they wind up fighting the aircraft all the way into the ground.

  • For a system that can kill the aircraft? That sounds like criminal negligence to me. Somebody obviously wanted to do things on the cheap, ignoring all rules of the design of critical systems. In particular, you never, ever rely on a single sensor, and you make damn sure the operators (pilots) understand how things work. About 300 dead people later, Boeing seems to have remembered at least some of the basics.

  • Admittedly I have not researched it but was stalling a big issue with these planes prior to implementing this anti-stall feature?

    Just seems like a solution in search of a problem which often does not end well.
    • Stalling is a huge issue. In Air France Flight 447 [wikipedia.org], pilots stalled a large Airbus, because they were used to the automated anti-stall system. With the system in place, if you pull back on the stick the plane goes up. The pitot tubes plugged briefly. The system went to a manual mode (alternate law) that the pilots were unfamiliar with. The pilots pulled up, put the plane into a stall, and crashed the plane. They did not understand why they were not gaining altitude.

      On average, it uses less fuel and is

  • by hcs_$reboot ( 1536101 ) on Tuesday March 12, 2019 @06:45PM (#58264344)
    Something struck me regarding latitudes: the Lion Air crash was 6 degrees South (Jakarta), the Ethiopian crash was 9 degrees North (Addis Ababa); both flights were close to the Equator (roughly symmetrically). Could have something to do with sensor reliability.

"If it ain't broke, don't fix it." - Bert Lantz

Working...