Eh I saw worse than that. The last team I was on at Google before I left had metrics that for years genuinely got better, and these were real metrics users cared about, they weren't gamed or irrelevant (related to spam detection). Each quarter our manager would go to his managers and present the team's latest success. He'd show the metrics and say how they'd improved from last time. Praise would come down from on high. As time went by this little ritual became almost formalized, because said manager kept re-using the same presentation slides each time, just adjusting the numbers. And the praise was always the same. Morale was good.
One day, the metrics stopped going up. In fact they went down a bit.
What happened? The manager went to his manager and gave the exact same presentation, in which somehow the metrics were still great news and the team was still executing really well. And his managers didn't notice. The praise came back, just the same as it always did.
At that point one or two others on the team and I became very cynical and lost motivation, because we realized that nobody who influenced our careers seemed to have noticed what was really happening. Celebrating an endless series of utopian success stories, regardless of truth, was more important than tackling reality.
Some colleagues of mine worked at Google, and their reports do not reflect well on the management style there. People are pitted against each other and forced to downrate one vs. the other in periodic reviews, using a graphical tool built for the purpose (seriously).
Google claims that its performance review system is "peer-driven", but what actually exists is a bicameral system where you need strong peer reviews and good calibration scores in order to get a transfer or a promotion. If you play one game and not the other, you can get stuck.
Calibration happens every quarter and a manager can unilaterally ding you. However, a supportive manager doesn't guarantee a good score. He actually has to get involved in the horse-trading and tit-for-tat of the whole process. Some people get screwed because their managers don't show up to calibration and they get low scores.
Finally, you often have no idea what your score is. Meets Expectations is everything from the 2nd to 70th percentile of the company. At the high end of that range, you're OK-- you won't get fired, you can get a decent transfer. At the low end, you're fucked. However, you may have no idea where in that range you are.
It's a play-for-play copy of Enron's performance review system, except with even more managerial secrecy and, therefore, less accountability. In short, it's why Google has less future than past.
It's a case study of managers, by managers, for managers; of course everything went well.
It would take an epically shitty manager to fail to sweep that kind of failure under the rug and blame some programmers / QA for it.
One thing I would recommend though is to spread out their 360-degree annual reviews into a daily 1 degree review which will net Google an additional 5 degrees of review every year.
This seems a lot like trying to invent a problem to justify your metrics rather than acknowledging that your metrics don't align with the actual performance of the team.
There's no indication in the article that the team was struggling or under-performing, and there's no reason to promote someone out of a position they're thriving in just because the way they deliver value doesn't neatly align with how you're measuring value, especially if you can plainly see the value.
Here's what I've seen in the past: exactly the scenario you described, the "Tim" is promoted to team lead or architect or something similar, and now their calendar is booked up and they no longer have time to do the thing that brings value (and that they enjoy). No one on the team is happy, everyone is stressed, and in a year or so you'll start bleeding members. Tim either hangs around and is a mediocre whatever position he is, or he leaves to be a whatever position somewhere else where he can start with a new context and without loaded expectations.
I was actually really surprised by this post. While yes, he worked at Google, and yes Google values their performance reviews, based on what I read in the VP of People's book "Work Rules", this isn't exactly the Google system.
I think this is a pretty heavyweight system, but he's right that any measurement is better than none. I bet the time commitment will get difficult as they scale further, though.
One thing that really surprised me was no mention of the Google manager surveys they use to benchmark managers (https://getlighthouse.com/blog/google-management/). It's a big part of the book "Work Rules" and core to how they ensure their managers are doing their job well...which then is key to retention.
They also didn't mention it, but hopefully they're doing one-on-ones between those reviews, as it's the in-the-trenches work between reviews that ensures things really get better and gives you good things to reflect on in a review. Otherwise, it's hard to remember much more than a couple weeks ago.
I've been a manager at Google for nearly 10 years. The only time these types of analyses were done, that I'm aware of, was at the VP level. They were used as a statistical measure to see whether the performance review system's data fit probabilistic expectations. In other words, it was a measure of quality.
I've never seen this sort of thing done to affect individual or even small team scores and/or distributions.
At a previous job of mine, they implemented a "personal improvement plan" for employees who were below management's "desired productivity". There was no warning, no heads up that they would be measuring our GitHub accounts, and it had absolutely no context of what each person had been working on. These reviews came after a 360 peer review, and absolutely blindsided the team. Several of my coworkers left early for the day and didn't come in for the rest of the week. I got calls all day from various people asking what happened in my 1:1, what was said, etc...
I wish I could show management how negatively and immediately that move affected the culture & mood of engineering at that company. It was an overnight change, and going forward it basically killed all trust in management. I stayed around for a little bit longer, but I should have quit the second they started those measurements.
It absolutely murdered the confidence of our junior devs. Of the senior engineers who were put on it, none of them really had any idea what they were doing wrong or how to improve, as no feedback was given about any of that, just that they weren't enough. Some of our seniors had 15+ years of experience in software and were industry experts, but it didn't matter: They hadn't been hitting the number of commits and pull requests and tickets per week that management insisted on.
When management was confronted about it, they admitted that code metrics aren't a good measurement of productivity, but then insisted that they "needed something".
It turns out, they only used code metrics as a bludgeon to get rid of the employees they wanted gone, and since then that's all I've been able to see code metrics as. They're a battering weapon used to give HR a performance reason to justify firing you.
It sounds like you have a good manager, so we have different experiences.
Let me explain my Google manager. His MO was, about 1-2 months in, to use fake performance problems to get people to disclose health problems, then use knowledge of their health issues to toy with them. I have tons of evidence for a pattern of this with him. I also have it (verbally) that HR knows it to be a long-standing problem, but does nothing, because if a manager has a reputation for "delivering", that's carte blanche to treat reports however one wishes.
HR's job is to step in and right things when managers fail, and at Google, they refuse to do that. They see their job as to protect managers. I know that most companies are this way, but it's disgusting.
There are a lot of abuses of power by management at Google, and HR does nothing about it. The going ideology is that if a manager is "delivering", his word is gold. Combine this with a Kafkaesque nightmare of closed allocation, and you get a lot of ugliness.
I have nothing but respect for the vast majority of the engineers I met at Google. But I cannot respect a management structure that thinks closed allocation is appropriate for a tech company, or that making political-success reviews part of the transfer process is morally acceptable. Google's performance review system is a play-for-play copy of Enron's. This has completely fucked up what should otherwise be an awesome technology company.
I would actually support a class-action suit by Google's shareholders against the managers who instituted closed allocation and calibration scores (breach of fiduciary duty). Those assholes are guilty of destroying several billion dollars worth of value, and justice should be sought against them. Employees who were burned by that horrible system should also be plaintiffs in this suit, because that shit fucks up peoples' careers, too. It's just all-around wrong.
A previous employer had had that rampant feedback inflation for years, and reset it one year. The Big Boss - quite reasonably in my mind - said “our expectations are high and you should feel proud of meeting them”. But the company culture was still set up around “strong exceeds” being the norm and “meets expectations” being code for “massively underperforming” so that was an uphill battle. And many of those who had gotten used to “strong exceeds” really were massive underperformers. Really to make this work they should have deleted the entire history from the HR database.
Performance evaluations are subjectivity masquerading as objectivity.
> In each of these cases, I brought these issues to HR and senior executives and was assured the problems would be handled. Yet in each case, there was no follow up to address the concerns — until the day I was accidentally copied on an email from a senior HR director. In the email, the HR director told a colleague that I seemed to raise concerns like these a lot, and instructed her to “do some digging” on me instead.
> Then, despite being rated and widely known as one of the best people managers at the company, despite 11 years of glowing performance reviews and near-perfect scores on Google’s 360-performance evaluations, and despite being a member of the elite Foundation Program reserved for Google’s “most critical talent” who are “key to Google’s current and future success,” I was told there was no longer a job for me as a result of a “reorganization,” despite 90 positions on the policy team being vacant at the time.
The perception m0zg outlined still seems correct. The system encourages deceptive reporting of metrics. If you can take credit for improving things with metrics, while causing far more damage in the process or making no real contributions, you will still get promoted.
No, I'm pretty sure it's true. I had to spend a ton of time explaining what I did and its value to my managers, like writing "ELI5" docs, and then suddenly when they "got it" they paid very close attention to my work, and formed a team around me and established scope and got headcount.
This is the messaging I got every single quarter:
"""David, your work is very good. All the people you work with say you're great and it's clear you are adding value, but you need to spend more time finding metrics that demonstrate that value and improving those. Also, if you want to get higher ratings, here is a whole bunch of extra management work and document-writing to do. If you do that, we can make a case for Exceeds and eventually a promotion"""
What I concluded was that I was better as an L6 than an L7 (L7 at Google has a lot more responsibility, and far fewer job opportunities) and I knew the value I was adding to the company without having to spend weeks cooking up metrics dashboards.
At Google you're not going to be fired, if you're an FTE, with anything less than 6-9 months of feedback and runway to get back in the air (with few exceptions).
The problem, so to speak, is that companies are getting extremely data-driven and constantly need some form of metrics to evaluate their employees.
This again can turn into a culture where doing what's expected is devalued to the lowest score, because you have a system where there have to be bottom and top performers.
And you sadly end up with a situation where workers are (almost) embellishing their projects to get more visibility.
I really think companies should actively inform their team leaders that metrics and scores are not an adequate measure of performance. E.g., a person may have great scores, but is in actuality just micro-managing their incompetence at the cost of everyone else's performance. Metrics may indicate a certain behavior, or they may hint at the opposite, or they may even be somewhat random, as they record just a certain slice of activity, which may or may not be representative and may be of more or less significance to an individual's total work process. Performance is still best judged by performance, that is, output and resource management and, maybe, how much and what kind of irritation a person causes in the given organisation. (Mind that there may be something like positive irritation, as well.) In light of this, these questionable products are actually quackery. (Sorry to have to say so.)
They 'know they are imperfect' if you bring it up, but they act like they are perfect. That is why it's better to move metrics than do anything actually beneficial, everyone is justifying their high performance review. If you hit your metrics your manager looks good, that means their manager is hitting their metrics, and so on. The people that suffer are the shareholders and users, but management is always optimizing to transfer as much wealth as possible from shareholders to themselves and you ride the coattails of that if you just chase metrics.
At a lot of companies, first the winners and losers are selected, then the metric procedures and goals are defined to document the pre-selected outcome. Manager A spends 20 minutes finding memes and obviously is the better manager because he met the expected pre-defined goal result while also having 20 minutes left over to meme-search, and at a personal level is better because the report was more entertaining, so Mgr A gets promoted. Mgr B sees that, understands his spot in the hierarchy was baked into the cake before the metrics and goals were even defined, so there's no point working harder on the supposed real job; and if memes worked for Mgr A, he may as well spend 200 minutes meme-searching, and if he doesn't get promoted he should meme-search for 400 minutes next time...
There is usually a compressive effect where if everyone in the industry is un- or under-employed unless they're in the top 5%, then only the top people are employed and there's not much "spread" to rank results on anyway. How many companies only hire "top ivy CS grads" or "rockstar ninjas", for example? I'm just saying a bell curve matches reality and enables usable predictions if you're looking at a sample of the heights of the entire US Federal Census, but it's meaningless if your sample is NBA championship winners. You can't cargo cult 80s-era Jack Welch style rank -n- yank the B and C level players if it would be a CYA HR firing offense to have ever hired a non-A-level player. That's what supply and demand mismatch does to a system. The entire concept of competitive metric result reporting is obsolete.
I get where you are coming from but I still disagree that we should try and game the system like that. Instead our peers and CTOs should grow the "realism" brain lobe.
This hits even closer to home for me because I am about to get evaluated this Monday and my PR performance has been pretty sub-par in the last 3-4 weeks -- I've been put on a project that had been abandoned for like 3 years and only recently picked up again, and 90% of what I do is documenting and taking notes and creating GitHub issues; coding is sadly still a low-priority activity because there's still chaos.
But you know what? I don't care anymore. Let them fire me if they use broken metrics. I am doing my work very diligently and I am making sure the project will be in a better shape after me. If they can't appreciate that then I am not bothered by such people viewing me negatively. They are in the wrong, not me.