I don’t know the paper, but this sounds like Goodhart’s Law - when you try to optimize a system by targeting a proxy for the outcome you actually want, the proxy stops being a good measurement, because the system gets optimized around the proxy rather than around the thing you wanted.
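
Here’s a toy Python sketch of that failure mode (the functions, names, and numbers are all made up, purely for illustration): the proxy can’t tell real effort apart from gaming the metric, so an optimizer that only ever sees the proxy ends up maximizing the gaming and wrecking the thing you actually wanted.

    def true_value(effort, gaming):
        # gaming the metric actively hurts the real outcome
        return effort - 2.0 * gaming

    def proxy_metric(effort, gaming):
        # the measured proxy can't distinguish real effort from gaming
        return effort + gaming

    # brute-force "optimizer" that only ever sees the proxy
    candidates = [(e, g) for e in range(11) for g in range(11)]
    best = max(candidates, key=lambda eg: proxy_metric(*eg))

    print("proxy-optimal choice (effort, gaming):", best)  # (10, 10)
    print("proxy score:", proxy_metric(*best))             # 20 - looks great
    print("true value:", true_value(*best))                 # -10 - actually awful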



The point of the law is that virtually every measure is a proxy to some extent. Finding a measure that exactly matches what you really want to optimize is almost impossible.

I heard one of the guests on a recent Lex Fridman podcast remark that a professor friend of his was involved in research showing that, for any given thing, optimizing it improves things up to a point, but if you keep optimizing for that one thing indefinitely, it always gets worse in the end.

Does anyone know what paper he's referring to??


> The problem talked about in the article is that it's not easy to express your optimization goals in terms of concrete metrics,

Largely because what you actually want to optimize is something like the present value of the firm's future income stream. So a proxy is taken, which inevitably has bias, and you need to understand what that bias is likely to be and continuously sanity-check your results for reasons to think you may be getting caught by that bias. Which you probably can't measure well (though maybe something has improved since you adopted the proxy), because if you could, you'd have had a better proxy to start with.


When you optimize too much for a proxy, the overall correlation that made you choose that proxy doesn't matter anymore, because at the limit you are dealing with details, not the big picture.

Ah, I understand what you mean. Well, if you know variable X is a proxy for Y, but Y takes a lot longer to measure than X, then I guess it makes sense to optimize X knowing that it will increase Y (if you are sure X will bring Y).
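
A rough Python sketch of where that breaks down, with an entirely invented X-to-Y curve (the 0.08 coefficient and the shape are hypothetical): pushing X does lift Y up to a point, but an optimizer that only ever looks at X will blow straight past it and leave Y worse than where it started.

    # Made-up relationship: Y improves with X up to a point, then degrades.
    def y_of(x):
        return x - 0.08 * x * x

    xs = [i / 10 for i in range(201)]   # candidate values of X from 0.0 to 20.0
    x_best = max(xs, key=y_of)          # the sweet spot, around X = 6
    x_max = xs[-1]                      # what "maximize X" alone would pick

    print(f"best X for Y: {x_best:.1f}, Y = {y_of(x_best):.2f}")
    print(f"Y at maximum X ({x_max:.0f}): {y_of(x_max):.2f}")  # -12.00, worse than doing nothing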

If we presume that any algorithmic, procedural, or structural system built by one party can be reverse-engineered and understood by another party, the concept of Optimization by Proxy, and the more general Goodhart's law, form a pretty compelling argument against designing optimized systems as solutions to problems in general.

Maybe in some cases keeping a system convoluted and inconsistent can actually help ensure stability and durability?


Right, people optimize for what they are measured against - aka Goodhart's law.

How can you optimize something that you can't measure? For any proxy measurement (e.g. how well you are sleeping), how can you ensure it's not something else in your life driving the effect?

It sounds like snake oil territory to me.


If you don't want to take the credit for it, you could call it the Pareto-Goodhart Law of Optimization, and state it something like this:

"The closer a system gets to an optimal state, the greater the propensity for the agents attempting to further optimize it to externalize the costs of the system."


Ah. So it’s a case of “we can sound like we’re doing geometric optimization, but really only partially optimize in an exploitable way”?

Thanks for the summary. I’m at work and won’t be able to review the actual paper until later.


Does anyone have a video or post that explains the optimization part of the original paper? I understand most of it, but I can’t seem to wrap my head around that part.

It's the converse of Goodhart's law -- if it's not measured/itemized, it's not part of the objective function for optimizing (costs).
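
A minimal Python sketch of that (the plans, names, and numbers are all hypothetical): the objective function only sums the itemized costs, so anything unmeasured is free as far as the optimizer is concerned, and that's exactly where the real cost gets dumped.

    # Only "labor" and "materials" are itemized; the cleanup cost is never measured.
    plans = {
        "careful":     {"labor": 8, "materials": 5, "unmeasured_cleanup": 0},
        "cut_corners": {"labor": 4, "materials": 3, "unmeasured_cleanup": 9},
    }
    MEASURED = ("labor", "materials")

    def measured_cost(plan):
        # the objective function only counts what gets measured
        return sum(plan[k] for k in MEASURED)

    chosen = min(plans, key=lambda name: measured_cost(plans[name]))
    print("cost optimizer picks:", chosen)                  # cut_corners: 7 vs 13 on paper
    print("real total cost:", sum(plans[chosen].values()))  # 16, worse than careful's 13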

Very good point. Seems like yet another example of how carefully optimizing all the individual parts of a system can paradoxically de-optimize the overall system.

I just fail to see how sensor data gives you an edge or (ultimately) a higher margin.

Is this about optimizing things to a precision that humans can't match when they try to judge whether something is too much or too little?


Incentivizing your staff to optimize a measure is always dangerous. You'd better be darn sure that that measure is a very good approximation of your utility function.

Thanks! Looks like good work. I hope this idea continues to get traction:

> In some sense, we’re already living in a world of misaligned optimizers

I understand this is an academic paper given to nuance and understatement, but for any drive-by readers, this is true in an extremely literal sense, with very real consequences.


The answer to the problem of optimization by proxy is not no optimization at all.

You optimize for what you actually measure, not what you wish you were measuring.

Of course; if you ignore the possibility of optimization, then the claim becomes almost a tautology. In practice, however, I notice this a lot less often than I notice the reverse.