I'd like you to inspect the issue and explain what happened and why (and start fixing it if that behavior isn't intended) rather than sharing what you think could have happened.
Unless you're not in a position to do that, in which case it doesn't matter that you're on the Copilot team (anyone can throw out hypotheses like that).
Please also don't tell me we're at the point where we can't tell why AI works in a particular way and we cannot debug issues like this :-(
* There were two unusual events in very short order
* Someone quickly noticed and gave the order to go to ground stop
* The problem was figured out quickly and a work-around was developed
* Flights resumed after the work-around was successfully deployed to the 'production process'
* A patch was quickly developed and deployed once the underlying bug was uncovered.
I think everyone (except perhaps the developers) comes out looking like a champ.
Now, do I think telling pilots what thrust they should use to take off is wise? I don't know - I'm not a pilot, and I'd want pilots' feelings on the matter before commenting further, but it feels to me like a loss of authority that would make me uncomfortable - particularly if I were held to account for undetected failures.
The other part I'd note here is that process is just as important as software. Could process alone have caught this failure without the tail strikes? Possibly, and it's something worth looking into further - but only if it doesn't add an unacceptable burden to pilots' workload that could otherwise compromise safety further.
Well, it seems that in this case the problem stopped immediately after the captain said "my aircraft". The problem, if I understand the report, is that before that he was applying inputs even though he wasn't supposed to be flying.
I think that, not being a pilot, I can't really understand, but thank you again for the perspective.
Edit: one last thing though - when the plane isn’t behaving as expected, why isn’t the first thing to check that all the control surfaces and the throttle are where you expect them to be?
I don't really follow. How are you going to recommend changes which help the pilots make correct decisions if you don't say that this event occurred because these pilots didn't make a correct decision?
I agree with you that the incident is clearly attributable to pilot error, maybe even incompetence. However when searching for details I came across this: https://flightsafety.org/asw-article/stop-stalling/
I am convinced that much of human error is due to bad user interface, especially in technical settings. I think it is an important lesson that both pilots failed to recognize (in a stressful situation):
1. the plane was in alternate law
2. their actions contradicted each other
I am not a pilot, but some very fundamental rules of flying a plane were broken here, and from my point of view I can't classify this as a simple error.
The crucial job of the crew planning/taking off is to
a) calculate the decision points (speeds, distances) at which they have to decide to take off or not
b) monitor speed and progress along the runway
c) abort if they can't take off safely before the decision point (a minimal sketch of this go/no-go check follows below)
What they did suggests to me a complete breakdown of discipline and process in the cockpit.
The autopilot settings have absolutely nothing to do with it.
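To make the a/b/c steps above concrete, here is a minimal sketch of the go/no-go check, assuming a single acceleration check point. All speeds and distances are invented for illustration; real takeoff performance calculations account for weight, temperature, runway slope and condition, and much more.

```python
# Hypothetical go/no-go check; every number here is made up.

def plan_decision_point(v1_kt: float, check_distance_m: float,
                        expected_speed_kt: float) -> dict:
    """Step (a): fix the numbers before the takeoff roll begins."""
    return {"v1_kt": v1_kt,
            "check_distance_m": check_distance_m,
            "expected_speed_kt": expected_speed_kt}

def monitor(plan: dict, distance_m: float, speed_kt: float) -> str:
    """Steps (b) and (c): compare actual progress against the plan."""
    if speed_kt >= plan["v1_kt"]:
        return "CONTINUE"        # at or past V1: committed to fly
    if (distance_m >= plan["check_distance_m"]
            and speed_kt < plan["expected_speed_kt"]):
        return "ABORT"           # too slow at the check point; stop now
    return "MONITORING"          # still accelerating toward the decision

plan = plan_decision_point(v1_kt=140, check_distance_m=1000,
                           expected_speed_kt=110)
print(monitor(plan, distance_m=1000, speed_kt=95))   # -> ABORT
```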
> I've seen very strange results during simulators when trying to solve emergencies, and all were very capable pilots. After all, airplanes are very complex systems working in very different scenarios. Corrected procedures and system fixes are constant even on models that have been flying for 20+ years.
The initial behaviour of the plane around the time the transponders went out seems very possible to explain in terms of a less-than-perfect response to a sudden emergency. But how could one square a) making a series of direction and altitude changes over a longish period of time, well after the initial incident, b) no attempt to communicate at the same time and c) at most, limited damage to the plane's systems with d) no foul play? It seems that at least one of those has to give...
> I imagine that most air accident investigations begin like this; confused, competing information from numerous sources of varying reliability.
Not to mention that when things go wrong on a plane, pilots are given confused, competing information from numerous sources. This is why I assumed it was an accident (like Air France 447) at first.
> Give it some time, let the investigators work and report their findings. I'd be very surprised if it's not a combination of system failure and human error in reacting to the failure.
That was my first impulse too. Google "IEEE Automation Paradox Air France 447." However, this is really hard to square with the engine information. So you have three possibilities: the plane flew an uncommanded course on autopilot for 5 hours following an accident, the flight data is wrong, or it is a hijacking.
In this case an investigation is hard because there is so little information. It took until the black boxes were recovered from AF447 to determine what happened there. Here? We don't even know where the plane is, much less the relevant recorders.
"On 5 November 2014, Lufthansa Flight 1829, an Airbus A321 was flying from Bilbao to Munich when the aircraft, while on autopilot, lowered the nose into a descent reaching 4000 fpm. The uncommanded pitch-down was caused by two angle of attack sensors that were jammed in their positions, causing the fly by wire protection to believe the aircraft entered a stall while it climbed through FL310. The Alpha Protection activated, forcing the aircraft to pitch down, which could not be corrected even by full stick input. The crew disconnected the related Air Data Units and were able to recover the aircraft."
I was making an unfair inference about the pilot's mental state, so I removed it. With that being said I still strongly question the relation of the drone to the crash physically and believe the effect of the drone on the pilot's decision-making is the interesting part of this story. Landing as an evasive maneuver doesn't make a lot of sense to me and the actual cause of the crash seems to be a situational awareness issue.
The pilots were not making an 'absurd error'. They did not forget to fly the plane.
It is that what they thought they were doing was not what they were actually doing. Their mental model broke away from reality. (They thought the plane was in 'normal mode' when it was actually in 'abnormal mode' -- an expression of which: one of them declaring "this cannot be happening!")
Telling them "just fly the airplane" is not (quite) the solution -- they thought they were flying it.
But the remedy is sort-of basically correct. It just seems better expressed as: when nothing makes sense (your mental model has suddenly utterly failed), fall back to a more basic backup mental model and system -- like primitive manual override.
(How clear can such separation of a basic mode be? How practical would it be? How reliable? It leads to a set of engineering/UI questions . . . were they well designed in this particular case?)
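One way to picture such a separation is a degradation ladder, loosely modeled on the Airbus normal/alternate/direct law hierarchy. The trigger conditions below are invented placeholders, not the real reversion logic:

```python
def select_control_law(adr_valid_count: int, flight_computers_ok: int) -> str:
    """Degrade to a simpler, more direct mode as confidence in data drops."""
    if flight_computers_ok == 0:
        return "MECHANICAL_BACKUP"   # most primitive: trim wheel and rudder
    if adr_valid_count < 2:
        return "DIRECT"              # stick maps straight to the surfaces
    if adr_valid_count < 3:
        return "ALTERNATE"           # some envelope protections lost
    return "NORMAL"                  # full protections active

# Losing two of three air-data references drops the crew into DIRECT law:
print(select_control_law(adr_valid_count=1, flight_computers_ok=2))
```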
> The fly-by-wire system malfunctioned and the pilots got confused.
The Wikipedia summary of the investigation report sounds quite a bit different.
It says there was an intermittent malfunction that could be cleared by following a procedure, which was done three times during the flight with no impact on flight safety (AIUI). The fourth time, instead of following the procedure, the pilot toggled the flight computer's circuit breaker, which he is not allowed to do in flight. That reset the flight computer completely, disabling various automated systems that the crew would then have had to restart, which they did not do. The plane then entered a stall, and due to communication issues the pilot and co-pilot gave contradictory control inputs, which resulted in no net control input to the plane.
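On the "contradictory control inputs" point: Airbus sidesticks are commonly described as summing both pilots' commands algebraically, clipped to full deflection, so equal-and-opposite inputs cancel. A sketch of that behavior, using normalized deflections (illustrative only, not the real implementation):

```python
def combined_pitch_command(captain: float, first_officer: float) -> float:
    """Sum both sidestick inputs and clip to one stick's full deflection."""
    total = captain + first_officer
    return max(-1.0, min(1.0, total))

# One pilot pushes fully forward (-1.0), the other pulls fully back (+1.0):
print(combined_pitch_command(-1.0, +1.0))   # -> 0.0, the inputs cancel
```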
> However, the pilot's mental model of the aircraft has been broken.
Exactly, and critically that "break" occurred during a very busy phase of flight, when they had a lot of stuff to think about even during "routine" operations. They didn't have a lot of attention to spare to notice the runaway trim condition.
Wasn't the one pilot pulling back fully on the stick and not communicating that he was doing that? And weren't the pilots aware that they were in a stall scenario? Which means that pulling back fully was the exact wrong thing to do?
My impression is basically that it was almost entirely the fault of the pilot that was pulling back the whole time. Of course, it sounds like Airbus' controls could provide _a lot_ more feedback to pilots, especially when there are divergent inputs.
You have to reconstruct the flight from before things went wrong in order to know when they went wrong. Maybe something you thought was right turns out to be an error given a different parameter. Kinda like debugging, I guess.
Thank you for your input.