
It's like with two AoA sensors: if they disagree, then you have a problem.



If you see two AoA sensors that aren't in agreement then you can safely assume either one sensor has failed or physics itself has.

> You can't resolve a disagreement of two sensors.

Two sensors are still useful when all you need to know is if there's a disagreement.
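
A minimal sketch of that disagreement check (the names and the 5-degree tolerance are made up for illustration, not taken from any real system):

  def readings_disagree(aoa_left_deg, aoa_right_deg, tolerance_deg=5.0):
      """Return True when two AoA readings differ by more than the tolerance.

      With only two sensors you can't tell which one is wrong, but you can
      still detect the fault and drop into a safe fallback, e.g. disable the
      automation that depends on the reading.
      """
      return abs(aoa_left_deg - aoa_right_deg) > tolerance_deg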


AoA sensors may well disagree even without a malfunction: local turbulence, flying in somebody's wake, etc.

I’m arguing against the idea that conflicting sensors are an issue. Clearly you can do no worse with sensors A and B than with just sensor A.

You might want to not have sensor B to lower the cost (as I suggested). Or you might want to have both sensors A and B because they provide some synergies (as gkop pointed out: better range of view, increased confidence, etc.). But it’s definitely not enough to just argue that the sensors may conflict.


> On technical level two sensors are clearly better than one even if you just pick one in case of disagreement

Can you explain this? If you always pick the same one in case of disagreement, what is the purpose of the other sensor? You're not getting any additional information when they agree.


That's the huge design flaw. They should have made the AoA sensors either critical and triply redundant, or not critical at all (as is typical).

> "more sensors good" is not always a good thing, when the sensors disagree with each other.

When they disagree, one of them is wrong. Would you rather have just the wrong sensor or both the wrong one and the good one as inputs to decide what to do?


The article also states that two sensors are not enough, because then you have two sensors that can disagree with no way to figure out which one is correct.

You can't avoid this problem by only having a single sensor modality, because you still have to decide if you trust your sensor data or not. In effect, you actually made the problem harder because you have no other information from any other sensor modalities to help understand the world around you.

In effect, they solved the disagreement problem by sticking their head in the sand and pretending their sensors are never wrong.


> but it seems to me that having more sensors should be superior.

What do you do when the sensors disagree?


Not having three AoA sensors seems either insane or criminally negligent.

I disagree. In an autonomous system, more sensors is almost always better, provided you have enough computing power to process the data. Having two sensors that disagree gives you much more information than if you had a single sensor with a malfunction. In the first case, you know there's some kind of fault and can take steps to fix it. In the second, you're running off bad data and who knows what could happen.

Three sensors are better than two—it's a common setup in flight control systems to have three identical computers running all calculations, and the majority opinion is taken as truth.
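
As a rough illustration of the majority-vote idea (a simplified sketch with invented names and tolerance, not how any particular flight computer actually implements it):

  def vote_two_of_three(a, b, c, tolerance=2.0):
      """Pick a trusted value from three redundant readings.

      If at least two readings agree within the tolerance, return their
      average; otherwise signal that no majority exists so the caller can
      degrade gracefully instead of acting on bad data.
      """
      for x, y in ((a, b), (a, c), (b, c)):
          if abs(x - y) <= tolerance:
              return (x + y) / 2.0
      return None  # no two channels agree: treat the whole channel as failed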


Sensors are obviously important, but it doesn't really matter if two sensors conflict; the important thing is which one is consistent with their running model.

So far their biggest problem is that they allow flicker. That's why sometimes the software picks the wrong lines. (Of course this is a very hard problem. Our brain conveniently smooths over sensory changes for us, because that's how our everyday reality is. Things don't flicker in and out of existence, nor does a car suddenly appear as a different thing, then switch back.)

And if it turns out the sensor(s) failed, it has to be able to handle that too.
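
One common way to suppress that kind of flicker is simple temporal persistence, i.e. debouncing the classification. A hedged sketch, with class and parameter names invented for illustration rather than taken from any real perception stack:

  class DebouncedTrack:
      """Suppress flicker by requiring a detection change to persist.

      A new classification only replaces the current one after it has been
      seen for persist_frames consecutive frames, so objects stop popping
      in and out of existence between frames.
      """

      def __init__(self, persist_frames=3):
          self.persist_frames = persist_frames
          self.current = None
          self._candidate = None
          self._count = 0

      def update(self, detection):
          if detection == self.current:
              # Same as what we already believe: discard any pending change.
              self._candidate, self._count = None, 0
          elif detection == self._candidate:
              # The pending change keeps being confirmed.
              self._count += 1
              if self._count >= self.persist_frames:
                  self.current = detection
                  self._candidate, self._count = None, 0
          else:
              # A brand-new reading: start counting, don't switch yet.
              self._candidate, self._count = detection, 1
          return self.current

A single-frame misclassification gets ignored, at the cost of a few frames of latency before a genuine change is accepted.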


I don’t see how “enhancing” and “conflict” are the same. As gkop pointed out, the two sensor types may have different ranges of view, or may agree about situations where just sensor A may be uncertain.

Three angle-of-attack sensors seem unnecessary to me. They simply need to make MCAS pay attention to both of the two it already has and cut out if one disagrees with the other.
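
Roughly what that cross-check-and-cut-out logic looks like, as a sketch with invented names and thresholds (not Boeing's actual values or interface):

  def mcas_trim_command(left_aoa_deg, right_aoa_deg,
                        aoa_limit_deg=14.0, max_split_deg=5.0):
      """Issue a nose-down trim request only when both AoA vanes agree.

      If the two readings split by more than max_split_deg, the data can't
      be trusted, so the function cuts out and commands nothing rather than
      acting on a possibly failed vane.
      """
      if abs(left_aoa_deg - right_aoa_deg) > max_split_deg:
          return None  # sensors disagree: inhibit automatic trim
      if (left_aoa_deg + right_aoa_deg) / 2.0 > aoa_limit_deg:
          return "nose_down_trim"
      return None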

> The problem with sensor fusion is what do you do when the sensors disagree?

Like one sensor says you have a bus stopped in front of you and the other says it's all clear? And your choices are full steam ahead or prepare to not ram the apparent bus?


That leads to a consensus problem. If you have two sensors reporting different circumstances, which one do you trust? Do you err on the side of assuming the crash report is a false positive? Then you may miss a real crash when it's actually the non-crash report that is the false negative. For the feature to be useful, with only a pair of sensors and no particular reason to assume the one reporting a crash is giving a false positive, you should err on the side of caution.
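
In code, that err-on-the-side-of-caution policy is just an OR over the pair; a sketch with hypothetical names:

  def should_report_crash(sensor_a_crash, sensor_b_crash):
      """Resolve a two-sensor split for a crash-detection feature.

      With only a pair of sensors and no tie-breaker, missing a real crash
      (a false negative) is worse than a spurious report, so err on the
      side of caution and report whenever either sensor indicates a crash.
      """
      return sensor_a_crash or sensor_b_crash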

> reliable encounters with one sensor

There's no such thing. An anomaly that doesn't appear on multiple sensors is a defect with your sensor.


If that's true (that it alternates between the sensors each flight) then it's extremely obvious that a business decision was made to only use one sensor at a time.
