It does seem rather astonishing that they don't test these devices. It's not only fraud you need to rule out, but also genuine manufacturing errors, equipment wear and tear, etc.
The only explanation I could come up with is that these were deliberate false-positive machines (either for illegitimate profiling or for parallel construction), but that doesn't seem to be how they functioned.
The whole point of testing is to give consumers data, so if the test can be gamed by companies like Samsung, it is a bad test.
There are so many TVs, and they run so many hours a day, that the aggregate energy use is non-negligible.
Consumers should avoid products from firms that game tests such as these. Toyota, for example, was singled out in the fuel consumption report as a company that did not cheat on the tests.
If you have to pass some standardized tests, the engineers will do what they're supposed to: Meet the goals.
I wouldn't call it 'rigged tests'; it's more like they optimize the system under test to meet the spec. It probably happens everywhere, not only in cars. The behavior of your fridge, AC, or any other hardware is probably optimized in weird ways just to meet Energy Star and the like.
For me it basically boils down to "we didn't test turning the power off and making sure things worked the way we planned."
Yes it is hard and very expensive to do these types of tests. And doing it regularly is even more $$$ and time.
Like most customers, we seem to be okay with a cheap price hidden behind a facade of "high availability," since I don't really want to pay for true HA. If I knew the real cost, it would be too expensive.
That's roughly my understanding about how these devices are actually tested in labs.
Probably not something I'd tackle on my own, but it would be awesome if Consumer Reports or a major newspaper hired a lab to do this. With so many people working from home now, I would think it's an increasingly relevant issue.
Really good question. I think someone will get their hands dirty and find a way to see which hardware it's running on; maybe through time correlation, showing that two tests are running on the same hardware, and then working backward to the power draw of that hardware.
Depending on how much insight you could get into the system, you could probably find out when it is doing "random" work and when it is doing actual work, since the two will most likely look different in terms of, e.g., heat output and power consumption.
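The time-correlation idea above can be sketched in a few lines. This is a toy illustration, not a real fingerprinting tool: the trace values and the 0.9 threshold are made-up assumptions, and `pearson` is just a hand-rolled correlation coefficient.

```python
# Toy sketch of the time-correlation idea: if two supposedly separate
# tests produce power traces that move together, they may be running
# on the same physical hardware. All numbers here are invented.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

trace_a = [10, 12, 30, 31, 11, 29]   # watts, sampled once per second
trace_b = [11, 13, 32, 30, 12, 31]
same_box_suspected = pearson(trace_a, trace_b) > 0.9
```

In practice you would need much longer traces and a proper significance test, but the shape of the analysis is the same.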
The testing was in place, but the software was designed to detect the testing process. A simple hack would be something like: (hood open OR doors open) AND (throttle > 0) = I'm being tested. That would be enough to hide from standard testing procedures.
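The boolean condition in that comment translates almost directly into code. The signal names here are hypothetical; a real ECU would read these from sensors:

```python
# Hypothetical sketch of the defeat heuristic described above: flag a
# dyno test when the throttle is applied while the hood or a door is
# open -- a combination that rarely occurs in normal driving.
def looks_like_dyno_test(hood_open: bool, doors_open: bool, throttle: float) -> bool:
    return (hood_open or doors_open) and throttle > 0.0
```

The point of the comment is how little logic is needed: one line of sensor comparisons is enough to separate the test bench from the road.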
Systems designed to defeat specific tests are nothing new. The biggest example I can think of is the Buell (Harley) bikes that use an exhaust bypass designed to keep the bike quiet at the RPMs used during the CA test.
Was the software designed only to detect tests? Or is detecting tests (and other cases where limiting performance has no adverse consequences) a side effect of the system's intended behavior?
For instance, suppose the vehicle was programmed to shut off its engine when stopped at a stoplight. Suppose the test scenario measures emissions while stopped at a stoplight...
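The stoplight scenario can be made concrete with a sketch. This is a hypothetical start-stop rule, not anything from an actual ECU: the feature kills the engine whenever the car is stationary with the brake applied, which would incidentally zero out emissions during a stopped-at-a-light test segment with no explicit test detection at all.

```python
# Hypothetical start-stop logic: the engine is off whenever the car is
# stationary with the brake held. An emissions test that measures at a
# simulated stoplight would see zero emissions as a pure side effect.
def engine_running(speed_kmh: float, brake_pressed: bool) -> bool:
    return not (speed_kmh == 0.0 and brake_pressed)
```

That is what makes the intent question hard: the same code reads as a fuel-saving feature or as a defeat device depending on what the designers knew about the test.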
Is there a reason they require device specific tests?
It's the "unknowns" not accounted for in the models that are the (potential) issue. If you have time read through this Ars Technica thread where a bunch of these questions are hashed out: http://arstechnica.com/civis/viewtopic.php?f=23&t=116333...
For those without the time, I'll highlight one post from page 5:
"Unintentional radiators.
The reason they ask you to power down ALL electronics, and not just turn off radios, is that the oscillators (the clocks that run all digital devices) on the circuit board can act as miniature radios in and of themselves, via clock signals on circuit traces. They emit at the clock frequency, and in some cases at many higher-order harmonics of those frequencies.
As I've stated several times, it's actually pretty easy to mitigate against known frequencies and signalling techniques...it's the ones you don't know about that are the problem. EMI can be downright spooky.
As an example: I was once testing a medical device (for the Medical Device Directive) that was required to fail safe, since it would literally be touching a patient (it was a combination pulse oximeter and a few other things all rolled into one).
There were numerous tests for both emissions and immunity, and things were going along ducky until we noticed two separate failure modes that weren't considered failsafe. At the time, we were testing ESD immunity up to 20kV. It would pass one time, and not another. We thought maybe we had a bad unit, so we got a few more from the manufacturer. That entire week we kept trying to figure out how to make the failure repeatable, without luck. My coworker and I went in over the weekend, and could simply not make any of the units fail at all, with the exact same test.
That's what triggered our thought process...what else could be causing the issue during the work week but not on a weekend?
Other immunity tests! Turns out, a dozen or so meters away, a different device was undergoing a different test...that wasn't required for the MDD. It was a conducted immunity test (it may have been Electric Fast Transients, can't recall) but the actual test signal was leaking out of that lab, and into our lab, via AC lines in the building. Our chief engineer submitted a proposal to add whatever test it was to the MDD, but I don't know whatever became of that.
I've seen simple clocks inside electronic gear cause CPUs to go haywire...in effect a single system interfering with itself. I've also seen extremely low power, yet very high frequency harmonics invade and corrupt function of another device, several meters away."
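The "unintentional radiators" point about harmonics in the quoted post can be illustrated numerically. The 25 MHz figure is just an example value; the point is that a square-wave clock radiates energy at odd multiples of its fundamental, well above the frequency printed on the part:

```python
# Illustration of the harmonics point: a square-wave clock emits not
# just at its fundamental but at odd harmonics of it. A 25 MHz
# oscillator (example value) can therefore radiate into the VHF range.
clock_mhz = 25.0
odd_harmonics = [clock_mhz * n for n in range(1, 12, 2)]  # 1st, 3rd, ... 11th
```

So a board with nothing but a modest crystal on it can still light up an EMC receiver across hundreds of MHz, which is why the quoted labs ask for everything to be powered down.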
Betcha that some Energy Star appliances can detect when they are being run through a test cycle.