
My guess is bad regression testing based on subjective qualifiers at best, or incentives that reward poor results because they drive revenue at worst.



Regression testing.

They also call them regression tests.

I think we can take that as evidence that regression testing is hard.

A/B testing gone horribly wrong is my guess.

The main reason is that it tests unrealistically, I think. The other issues (doesn't catch bugs, hard to maintain) tend to flow from that.

The way the tests were written in that case, they were hard-coded to how the work was being done rather than the result produced. Both the code and the tests were bad.
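A minimal sketch of that distinction (the function and test names are invented for illustration, not from the case being discussed):

```python
import unittest.mock as mock

def sort_numbers(items):
    # Implementation detail: currently delegates to Python's built-in sorted().
    return sorted(items)

# Brittle test: hard-coded to HOW the work is done. It breaks if the
# implementation switches to, say, a hand-rolled merge sort, even though
# the observable behavior is unchanged.
def test_uses_builtin_sorted():
    with mock.patch("builtins.sorted", wraps=sorted) as spy:
        sort_numbers([3, 1, 2])
        assert spy.called

# Robust test: checks the RESULT produced, regardless of the algorithm.
def test_returns_sorted_output():
    assert sort_numbers([3, 1, 2]) == [1, 2, 3]
```

The second test survives a rewrite of `sort_numbers`; the first fails for reasons that have nothing to do with correctness.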

Normally, I’d agree with you though.


The reason being that the tests/constraints were specified by someone designing the software.

It's more than that. Optimizely fears that poorly delivering tests will reflect badly on its other products.

In this sort of scenario, the bugs lie in the expectations themselves. Tests that don’t account for that are dead weight.

A 'bad test' isn't one that is failing; it's one that doesn't test anything useful, or worse, tests something useful but incorrectly.
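Both failure modes can be sketched in a few lines (hypothetical function and names, invented for illustration):

```python
def apply_discount(price, pct):
    # Returns the price after applying a percentage discount.
    return price * (1 - pct / 100)

# Doesn't test anything useful: it exercises the function, but the
# assertion is a tautology, so it can never fail.
def test_discount_runs():
    result = apply_discount(100, 10)
    assert result == result  # always true

# Tests something useful, but incorrectly: the property is checked too
# loosely, so a buggy implementation that returns 0 for every input
# would still pass.
def test_discount_too_weak():
    assert apply_discount(100, 10) <= 100
```

Both tests are green today and stay green through almost any bug, which makes them dead weight in exactly the sense described above.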

maybe there wasn't any regression testing...

Most of those tests are generated. It's wrong to focus on this metric IMO.

Was that the point of the tests, or just a coincidence?

Now that they've failed once, they've become regression tests (one of the most useful kinds of tests), but if you set out wanting to test the platform under you, I think you'd want to do a lot more work than that.


The problem is that the metrics can be gamed, and there's incentive to do so.
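Coverage is the classic example of a gameable metric. A hypothetical sketch (names invented): a "test" that executes every line but asserts nothing maxes out line coverage while verifying nothing.

```python
def parse_port(text):
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

# Drives every line of parse_port, so line coverage reads 100% -- but it
# asserts nothing, so it can only fail if the code raises unexpectedly.
def test_parse_port_for_coverage():
    parse_port("8080")
    try:
        parse_port("99999")
    except ValueError:
        pass

# A genuine test pins down the actual behavior.
def test_parse_port_behavior():
    assert parse_port("8080") == 8080
```

If the reward is "hit N% coverage", the first style of test is the cheapest way to get there, which is exactly the incentive problem.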

Nobody is arguing against testing. They're arguing against running tests where you know the results will be deeply flawed.


So much for regression tests.

The fact that the tests would be easy to fix makes it worse, not better. The problem with the tests and the security hole was not that the tests indicated severe problems, but that all those simple failing tests masked the presence of a new, serious failure. If your test output is filled with junk from tests failing because of minor bugs, it becomes much harder to notice when your tests uncover a major regression.

Can you elaborate a bit on how such testing is done, or share a good article on the topic? It sounds like a hard problem if you need to get things right to this degree, or else.

Do you simply mean regression testing?

Relying on tests is naive. Your tests can't cover every case. The article even mentions this -- their tests passed, but the code failed in production.
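The "passed in tests, failed in production" gap fits in a few lines (hypothetical example, names invented): every case the test author thought of passes, while an input the tests never exercised crashes.

```python
def average(values):
    # Crashes on an empty list: len([]) is 0, so this divides by zero.
    return sum(values) / len(values)

# The only test, covering the typical case. It passes.
def test_average_typical():
    assert average([2, 4]) == 3.0

# In production, an empty list eventually arrives and the untested path
# fails: average([]) raises ZeroDivisionError.
```

The test suite is green right up until real traffic hits the case nobody wrote down.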
