
Was the software designed only to detect tests? Or is detecting tests (and other cases where limiting performance has no adverse consequences) a side effect of the system's intended behavior?

For instance, suppose the vehicle was programmed to shut off its engine when stopped at a stoplight. Suppose the test scenario measures emissions while stopped at a stoplight...




The thing is that detecting test situations is good, certainly useful, perhaps necessary.

E.g. detecting you are on a test stand, you might

* deactivate plausibility checks which would normally limit engine power
* deactivate air bags and other inherently dangerous systems
* ensure correct system behaviour even if sensors give conflicting information
* suppress error logs for certain failed plausibility checks

The only illegal thing is switching to a "massaged" parameter map under these conditions. And presumably a single person controlled this switch.
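A minimal sketch of the kind of map switch described above. All names and values here are hypothetical, invented for illustration; they are not taken from any real ECU:

```python
# Hypothetical calibration maps; the values are made up for illustration.
NORMAL_MAP = {"injection_timing": 1.00, "egr_rate": 0.15}
TEST_MAP = {"injection_timing": 0.85, "egr_rate": 0.40}  # the "massaged" map

def select_map(on_test_stand: bool) -> dict:
    """Return the active calibration map.

    The legitimate uses listed above (relaxing plausibility checks,
    disabling airbags, etc.) would hang off the same detection flag;
    the illegal part is only this switch to a cleaner-running map.
    """
    return TEST_MAP if on_test_stand else NORMAL_MAP
```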


The testing was in place, but the software was designed to detect the testing process. A simple hack would be something like: (hood open OR doors open) AND (throttle > 0) = I'm being tested. That would be enough to hide from standard testing procedures.
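A rough port of that heuristic into code, purely as a sketch (the signal names are hypothetical). The idea is that throttle being applied while the hood or doors are open almost never happens on the road, but routinely happens on a dyno:

```python
def looks_like_a_test(hood_open: bool, doors_open: bool, throttle: float) -> bool:
    """Crude test-stand detector: throttle applied while the hood or
    doors are open suggests a dyno, not normal driving."""
    return (hood_open or doors_open) and throttle > 0
```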

Systems designed to defeat specific tests are nothing new. The biggest example I can think of are the Buell (Harley) bikes that use an exhaust bypass designed to keep the bike quiet at the RPMs used during the CA test.


> There can be quite reasonable use cases for that information

What reasonable use case is there for a detection of the test cycle just from ECU information? If you're just testing stuff surely you have access to the ECU to just set a flag manually for anything you need?


The test protocol is a fairly specific schedule of engine power demands, is it not? It seems like one could look for any behavior outside of what is expected from the test, and then activate the 'defeat device.'
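Since the regulatory drive cycles publish their speed-vs-time schedules, the detection could be as simple as comparing the recent speed trace against the schedule. A hypothetical sketch (the schedule values below are invented; real cycles like the FTP-75 are public data):

```python
# Invented 1 Hz speed schedule in km/h; a real cycle would be much longer.
TEST_SCHEDULE = [0, 5, 10, 15, 20, 20, 20, 15, 10, 5]

def matches_schedule(speed_trace, schedule=TEST_SCHEDULE, tol=2.0):
    """True if every sample of the recent speed trace is within `tol`
    km/h of the published schedule; any off-schedule behavior breaks
    the match and the 'defeat device' could deactivate."""
    if len(speed_trace) != len(schedule):
        return False
    return all(abs(s - ref) <= tol for s, ref in zip(speed_trace, schedule))
```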

I've seen a whole bunch of theories about what the practical effect of turning on test mode all the time would be. They range from "the engine is powerful enough that the net effect will just be lower mileage" to "power, mileage, and reliability will all get worse," with everything in between. I'm surprised the chip has not already been hacked, put into permanent test mode, and the performance reported. Maybe soon.

Presumably that happens in production as well, and the test can determine that the system does the proper thing when that happens?

I used to work at an OEM that did this. When the VW Emissions Scandal was publicized, we removed almost every piece of code that did things like these from all our phones.

Not sure if it was ever reverted eventually. Performance tests might not matter to HN users, but based on our research they are something typical consumers do look at when making purchasing decisions.


Thanks for the information. I think I'll give that a look on a test system, if only to see whether I care enough to bother.

Right but that may not preclude testing via auxiliary indicators. It could prompt other changes that you could test for, but, you're right, I'm assuming that's just not possible.

You seem to be emphasizing this, and I'd like to understand why.

Have you seen an engineer that "shut down" on some problem, and you believe it was due to the kind of situation for which you're now trying to test?


That’s a good point, would be curious to understand more what the testing setup is like for these kinds of systems.

Yeah, absolutely, but if you are running a simulation/test, would you deliberately inject some random sensor failure when your tests are aimed at something else?

It is not clear what they were testing - perhaps they were indeed testing the MCAS system with sensor failures, but if so I probably wouldn't have expected such a surprised reaction from them. It seemed like it was totally unexpected and unexplained, which is not a reaction I would expect if they were testing this.


In the automotive and aerospace world, tests in the real world of course cannot cover all cases (cost and time make it impossible).

Most computer programs cannot even be proved to come to a halt [0] (not all, but almost none of the meaningful ones can), so complete testing is impossible in essence. We can only use more restrictive rules for programming, but cannot formally guarantee anything.

As those systems are tied to the physical world, a whole lot of complexity is added by uncontrolled parameters.

Yet we love testing things. So a lot of techniques exist, such as SIL [1] and HIL [2].

So you could imagine using a real dashboard hooked up to a plane simulator. Which would enable testing the device in a wide array of conditions.

[0] https://en.wikipedia.org/wiki/Halting_problem [1] https://www.quora.com/What-is-Software-in-the-Loop-SIL [2] https://en.wikipedia.org/wiki/Hardware-in-the-loop_simulatio...
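The SIL idea above can be sketched in a few lines: exercise the controller code against a toy plant model instead of real hardware. Everything here is illustrative (a trivial proportional controller standing in for the real software); a real SIL/HIL rig would run the production binary against a validated plant model:

```python
def controller(altitude, target=1000.0, gain=0.1):
    """Stand-in for the software under test: a trivial proportional
    controller that commands a climb rate toward the target altitude."""
    return gain * (target - altitude)

def simulate(steps=200, dt=1.0):
    """Toy plant model: integrate the commanded rate over time.

    In a SIL setup, this loop replaces the physical aircraft, letting
    the same controller be swept across a wide array of conditions."""
    altitude = 0.0
    for _ in range(steps):
        rate = controller(altitude)
        altitude += rate * dt
    return altitude
```

A SIL test then asserts that the closed loop behaves sanely, e.g. that it converges near the target without overshooting.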


You can't test everything. There are so many independent variables, to try to test every possible scenario is a combinatorial nightmare. More so when you factor in transient events like engine unstarts and so forth.

Well, the problem here is also in a way with the EPA's testing methodology. I think you have to go into it assuming that someone will try to cheat, so the tests need to be designed differently (e.g., don't just accept whatever comes through the OBD-II port as true).
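One way to harden the methodology along those lines is to cross-check the vehicle's self-reported figures against an independent analyzer. A hypothetical sketch of that check (the function, units, and tolerance are all invented for illustration):

```python
def discrepancy_flagged(reported_nox: float, measured_nox: float,
                        tolerance: float = 0.10) -> bool:
    """Flag the vehicle if an independent measurement exceeds the
    self-reported value by more than `tolerance` (as a fraction).

    An implausible self-report (zero or negative) is flagged outright
    rather than trusted."""
    if reported_nox <= 0:
        return True
    return (measured_nox - reported_nox) / reported_nox > tolerance
```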

The article specifically addresses the 'test detection' and adjustment feature. See the quoted passage above.

I naturally assume that a lot of products with computers that are subject to government testing cheat in some way.

Betcha that some Energy Star appliances can detect when they are being run through a test cycle.


This sort of test can be useful when you change things under the hood in such a way that the output shouldn't have changed.

It is a difference in kind: VW dynamically detected and adapted behavior to the test. It would never operate in that way under normal conditions. The water heater example was completely static: it always behaved the same way, under test or not.
