That's an interesting approach that could improve some things, but I think the underlying problem is that incentives need to change. It's not only about metrics, but about giving honest opinions: which use cases do you really think your algorithm is suited for, rather than presenting it in the most optimistic possible light? If academia weren't as ultra-competitive as it has become over the past two decades or so, there would be a better chance of getting honest and useful answers to such questions in papers. One still finds them sometimes in papers by people who no longer have to play "the game": papers by senior full-professor types are often quite interesting precisely because they can say what they really think.
The algorithm has definitely improved, or at least it has been able to collect more information and provide better results.
Of course, we know enough about these algorithms to know that they are based on actions, not on what you claim. Pretending you find something annoying while dedicating your attention to it will tell the algorithm to give you more of the same.
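As a rough illustration of that point, here is a minimal Python sketch of how an engagement-driven ranker might weigh signals. The signal names and weights are entirely hypothetical, not any real platform's values; the point is only that accumulated behavioral engagement can swamp a one-off explicit "not interested" click.

```python
# Hypothetical engagement scoring sketch (illustrative weights only;
# no real recommender uses these exact signals or numbers).

# Behavioral signals: what you *do*.
BEHAVIOR_WEIGHTS = {
    "dwell_seconds": 0.5,  # time spent looking at the item
    "replays": 2.0,        # re-opening / re-watching
    "comments": 3.0,       # even angry comments count as engagement
}

# Explicit signals: what you *say*.
EXPLICIT_WEIGHTS = {
    "not_interested": -5.0,  # the "don't show me this" button
}

def interest_score(events: dict) -> float:
    """Estimate interest in a topic from a user's event counts.

    Behavioral engagement usually dwarfs the one-off explicit signal:
    hate-watching for minutes outscores one 'not interested' click.
    """
    score = 0.0
    for signal, weight in {**BEHAVIOR_WEIGHTS, **EXPLICIT_WEIGHTS}.items():
        score += weight * events.get(signal, 0)
    return score

# Someone who claims to hate a topic but keeps engaging with it:
hate_watcher = {"dwell_seconds": 120, "comments": 2, "not_interested": 1}
print(interest_score(hate_watcher))  # 61.0 -> the algorithm serves up more
```

Under these toy weights, two minutes of attention contributes +60 while the explicit rejection contributes only -5, so the net signal still reads as interest.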
If you don't want to learn the truth about yourself, I can see why you'd want to steer clear.
Maybe, but in the context of this site, I think it's safe to assume a lot of people will look at this as "ah! another subjective judgment that can be marked Objectively Correct by an algorithm."