
Some colleagues of mine worked at Google, and their reports do not reflect well on the management style there. People are pitted against each other and forced to downrate one vs. the other in periodic reviews, using a graphical tool built for the purpose (seriously).

It's "work/life" balance, BTW.




It sucks that it’s not commonplace for managers to be accountable to their reports.

Google has kinda structured things in this way, at least as far as I can tell from the outside. Managers can’t decide who is hired or fired and performance reviews are done by committee.

Seems pretty egalitarian but I’m told that even still, politics becomes an issue.


I was actually really surprised by this post. Yes, he worked at Google, and yes, Google values its performance reviews, but based on what I read in the VP of People's book "Work Rules", this isn't exactly the Google system.

I think this is a pretty heavyweight system, but he's right that any measurement is better than none. I bet the time commitment will get difficult as they scale further, though.

One thing that really surprised me was no mention of the Google manager surveys they use to benchmark managers (https://getlighthouse.com/blog/google-management/). It's a big part of the book "Work Rules" and core to how they ensure their managers are doing their jobs well... which in turn is key to retention.

They also didn't mention it, but hopefully they're doing one-on-ones between those reviews; it's the in-the-trenches work between reviews that ensures things really get better and gives you good things to reflect on in a review. Otherwise, it's hard to remember much beyond a couple of weeks ago.


Google claims that its performance review system is "peer-driven", but what actually exists is a bicameral system where you need strong peer reviews and good calibration scores in order to get a transfer or a promotion. If you play one game and not the other, you can get stuck.

Calibration happens every quarter and a manager can unilaterally ding you. However, a supportive manager doesn't guarantee a good score. He actually has to get involved in the horse-trading and tit-for-tat of the whole process. Some people get screwed because their managers don't show up to calibration and they get low scores.

Finally, you often have no idea what your score is. Meets Expectations is everything from the 2nd to 70th percentile of the company. At the high end of that range, you're OK-- you won't get fired, you can get a decent transfer. At the low end, you're fucked. However, you may have no idea where in that range you are.

It's a play-for-play copy of Enron's performance review system, except with even more managerial secrecy and, therefore, less accountability. In short, it's why Google has less future than past.


Surely pretty much every large company does this, e.g. analyzes performance reviews and other data to see if an employee is underused/unmotivated/etc.?

Is it surprising Google does it?


I've been a manager at Google for nearly 10 years. The only time these types of analyses were done, that I'm aware of, was at the VP level. They were used as a statistical measure to see if the performance review data fit probabilistic expectations. In other words, it was a measure of quality.
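
For what it's worth, a distribution-quality check of that sort might look something like the sketch below. The rating buckets, proportions, and threshold are all invented for illustration, not Google's real numbers:

    # Hypothetical sketch: does an org's rating histogram fit
    # company-wide expected proportions? (chi-square goodness of fit)
    from scipy.stats import chisquare

    EXPECTED = {                      # invented proportions
        "needs_improvement": 0.05,
        "meets_expectations": 0.70,
        "exceeds_expectations": 0.20,
        "superb": 0.05,
    }

    def distribution_fits(observed: dict, alpha: float = 0.05) -> bool:
        """True if the org's observed rating counts are statistically
        consistent with the expected company-wide proportions."""
        total = sum(observed.values())
        f_obs = [observed.get(k, 0) for k in EXPECTED]
        f_exp = [p * total for p in EXPECTED.values()]
        _, p_value = chisquare(f_obs=f_obs, f_exp=f_exp)
        return p_value >= alpha       # a small p-value flags a deviating org

    # distribution_fits({"needs_improvement": 3, "meets_expectations": 120,
    #                    "exceeds_expectations": 40, "superb": 12})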

I've never seen this sort of thing done to affect individual or even small team scores and/or distributions.


Microsoft's review system sounds like an even more vicious variety of what Google has, which is more than bad enough to fell a great company.

Google has a bicameral system consisting of (a) manager-assigned "calibration scores" that are the outcome of stack-ranking nonsense and (b) annual peer reviews. It seems like a bicameral review system would be a good thing, by removing career SPOFs (single points of failure). If a bi- or multicameral system is well-designed, that's exactly what you get: multiple paths to success.

Where Google fails it is by making it an AND-gate rather than an OR-gate, even for lateral transfers, much less promotions. (Without good calibration scores, transfer is impossible.) At Google, people need managerial support AND peer support to advance, which means there's endless jockeying for visibility in addition to manager-as-SPOF. It really is the worst of both systems.

In a properly-run company, you have several review signals: peer review, extra-hierarchical work, demonstrated curiosity and will to self-improve, and managerial review. For firing (excluding people who do something outright wrong) the decision should be based on an AND-gate. If someone does poorly by his manager AND can't get peer support AND can't find a transfer, then it's time to fire him. Not before. That's how decent companies do it. On the other hand, for title upgrades and pay raises and better projects, decent companies use an OR-gate. If he gets good managerial reviews OR good peer reviews OR has other managers interested in taking him, then treat the employee as successful: give him a decent raise and let him transfer as he wishes.
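
To make the gating concrete, here's a toy sketch of the two decision rules I'm describing (the signal names are made up, not any company's actual review fields):

    from dataclasses import dataclass

    @dataclass
    class ReviewSignals:
        manager_positive: bool    # managerial review went well
        peers_positive: bool      # peer reviews went well
        transfer_interest: bool   # other managers want this person

    def should_fire(s: ReviewSignals) -> bool:
        # AND-gate for adversity: every path out must have failed first.
        return not (s.manager_positive or s.peers_positive or s.transfer_interest)

    def should_advance(s: ReviewSignals) -> bool:
        # OR-gate for advancement: one strong signal is enough.
        return s.manager_positive or s.peers_positive or s.transfer_interest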

Now, managerial positions are a bit different. There, you actually want to see strength in several signals before you give someone power over other people. So there's justification for making selection into management be based on an AND-gate. You just really need to be sure that the person can lead. That's different. But title upgrades, pay raises, project allocation, autonomy and transfer opportunities should be based on an OR-gate; if one signal indicates potential for success, move forward. If you can't see this, then you're FAL and you should not be allowed to leave the house without assistance, much less make decisions that affect other people.

Google fucked up its bicameral system by making it an AND-gate for promotions and transfers and an OR-gate for adversity. That's the destruction of what was once a great company. It has cost the software industry billions of dollars worth of value. It sounds like a similar billion-dollar immolation occurred at Microsoft.

Microsoft's system, advanced half a decade or more further in necrosis, seems much the same but worse. It seems like these types of systems get worse over time because more people want to tack on their own shitty ideas as more people develop the system. They're not happy enough with other people's pre-existing shitty ideas; they need to make a personal mark on a shitty system by making it noticeably shittier, enough that they hear people complaining about their changes in the cafeteria (which they justify as a good thing because "those people are obviously no good, because good people have nothing to worry about come review time").

I am glad the Vanity Fair article got published. I have no strong feeling about Microsoft either way, and I have a lot of respect for the great work coming out of Microsoft Research, but it's about time that we see bad HR policies leading to outright exposure and frank humiliation.


I've worked at companies where managers are little tsars of their turf, and also at Google.

The difference with Google is that: 1) you can give feedback to your manager, both anonymously and explicitly, and it will affect their perf; 2) your success, in terms of impact and promo, is part of your manager's success; and 3) the perf committee will challenge and can override your manager if the rating given seems too low/high given the evidence.

While these forces don't take power away from managers completely, they do provide some checks and incentivize managers to respect and support their reports.

Of course, all these nice things come at a cost: perf becomes a somewhat transparent but heavy process that eats everyone's time and mental energy.


Are reviews of managers by employees that uncommon? At Google every manager is reviewed by a survey of their direct reports every performance cycle and that certainly feeds into assessment of manager performance.

Eh, I saw worse than that. The last team I was on at Google before I left had metrics that for years genuinely got better, and these were real metrics users cared about; they weren't gamed or irrelevant (related to spam detection). Each quarter our manager would go to his managers and present the team's latest success. He'd show the metrics and say how they'd improved since last time. Praise would come down from on high. As time went by this little ritual became almost formalized, because said manager kept re-using the same presentation slides each time, just adjusting the numbers. And the praise was always the same. Morale was good.

One day, the metrics stopped going up. In fact they went down a bit.

What happened? The manager went to his manager and gave the exact same presentation, in which somehow the metrics were still great news and the team was still executing really well. And his managers didn't notice. The praise came back, just the same as it always did.

At that point, one or two others on the team and I became very cynical and lost motivation, because we realized that nobody who influenced our careers seemed to have noticed what was really happening. Celebrating an endless series of utopian success stories, regardless of truth, was more important than tackling reality.


It sounds like you have a good manager, so we have different experiences.

Let me explain my Google manager. His MO was, about 1-2 months in, to use fake performance problems to get people to disclose health problems, then use knowledge of their health issues to toy with them. I have tons of evidence for a pattern of this with him. I also have it (verbally) that HR knows this to be a long-standing problem, but does nothing, because if a manager has a reputation for "delivering", that's carte blanche to treat reports however one wishes.

HR's job is to step in and right things when managers fail, and at Google, they refuse to do that. They see their job as to protect managers. I know that most companies are this way, but it's disgusting.

There are a lot of abuses of power by management at Google, and HR does nothing about it. The going ideology is that if a manager is "delivering", his word is gold. Combine this with a Kafkaesque nightmare of closed allocation, and you get a lot of ugliness.

I have nothing but respect for the vast majority of the engineers I met at Google. But I cannot respect a management structure that thinks closed allocation is appropriate for a tech company, or that making political-success reviews part of the transfer process is morally acceptable. Google's performance review system is a play-for-play copy of Enron's. This has completely fucked up what should otherwise be an awesome technology company.

I would actually support a class-action suit by Google's shareholders against the managers who instituted closed allocation and calibration scores (breach of fiduciary duty). Those assholes are guilty of destroying several billion dollars worth of value, and justice should be sought against them. Employees who were burned by that horrible system should also be plaintiffs in this suit, because that shit fucks up peoples' careers, too. It's just all-around wrong.


It's a case study of managers, by managers, for managers; of course everything went well.

It would take an epically shitty manager to not be able to sweep that kind of failure under the rug and not manage to blame some programmers / QA for their failure.

One thing I would recommend, though, is to spread out their 360-degree annual reviews into a daily 1-degree review, which will net Google an additional 5 degrees of review every year.


> the fundamental problem with Google's performance review process is that you get to choose who reviews you

Except this isn't true at all.

Your manager reviews you. Other people write feedback (you choose these people, though people can write unsolicited feedback for you). Your manager should be obtaining a holistic view of your work rather than just relying on the solicited feedback.

Further, there is an oversight process where managers must justify their ratings to other managers. This attempts to prevent the "your manager likes you so you get a good review" problem.


My employer uses it. I hate it, but then, performance reviews aren't exactly fun stuff to begin with.

My second-hand experience says that they are very prone to firing people.

At first glance this seems good, because I've certainly worked places where dead weight was permitted and there seemed to be no getting rid of them.

The problem is: how do you know which people are good? I believe you can come to pretty accurate conclusions with the right people in place at each level of your hierarchy. However, one wrong person can completely skew your perception of who is doing good work.

I've also never seen a metrics-based review system that was actually designed for measuring humans. It must work really well on the robots that Google tests it on, but people don't map to 1-5 point scales or bell curves as well as you might think.


Usually in large orgs there are multi-layer reviews, and this is quite an efficient indicator:

if your colleagues say that you are not doing your job well

+ the managers say you are not doing your job well

+ the people under you say you are not doing your job well

+ the customers say you are not doing your job well,

then why would an employee even stay at a company where literally everybody thinks they are not helpful?

When you are a relatively large customer/partner of Google, you sometimes interact with genuine impostors, who are not helpful, but you still have to go through them because there is no alternative.

So a bit of cleanup of the bad elements is actually good for both the bottom and top lines of the company.


I caveated my statement because my experience in this is obviously limited; I doubt anyone has a wide view of the industry. But...

> The process has always been, every single time: identify low performers, find reasons to fight for them to stay (do they have gifts the company isn't properly utilizing, etc.)

Strongly doubt this is done effectively in anything but smaller organizations. Imagine an organization the size of Google: doing this analysis effectively would involve consulting thousands, if not tens of thousands, of managers, some of whom will be on the chopping block themselves. Organizations pretty much never want to telegraph these kinds of moves in advance, and looping that many people in 100% guarantees the entire process will be leaky as hell. Sure, these organizations will play games with performance reviews to try to come up with a more meaningful filter than a dice roll, but the accuracy is, at best, questionable, and as we've seen in the recent layoffs, a good performance review is a pretty weak form of protection.

Your second sentence even provides a perfect scenario around exactly the kind of randomness that comes into play:

> essentially pruning unhealthy/unproductive branches from the tree trunk to keep the rest of the tree healthy.

High performer but working on a product the company has decided to de-prioritize? Bye.


If they don't then they certainly should, it sounds like a toxic workplace.

What annoyed me about that slide was the comparison to a pro sports team, which shows how the "ultra-elite" Netflix lacks the critical thinking to see that this comparison doesn't hold.

For one, performance on a pro sports team is quantifiable. So-and-so player scored so many points in so many attempts, whereas the average is such-and-such. It's relatively easy to plot a normal distribution.
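
As a toy example of how clean that is (all numbers invented):

    import numpy as np
    from scipy.stats import norm

    # per-player season scoring averages for a league (made up)
    league = np.array([18.2, 22.5, 15.1, 27.8, 20.3, 19.9, 24.0, 16.7])
    mu, sigma = norm.fit(league)            # fit a normal distribution

    player = 24.0
    pct = norm.cdf(player, mu, sigma) * 100  # a clean, defensible number
    print(f"player sits at roughly the {pct:.0f}th percentile")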

Not quite so easy for software developers. Companies like to say they can measure performance but we all know they can't. Not easy to compare someone who got lucky and worked on greenfield projects with someone who drew the short straw to maintain some legacy code. If the latter struggles, does this say anything about the former? Can you predict the performance had the roles been reversed? No.

So it always falls back to a mix of "how much your manager likes you" and "how well you can argue you performed". Or if you're at Google, you can add "how well you chose and optimized metrics", regardless of whether you're measuring the right thing.


There are likely a lot of downsides to their PSC system, but Meta is way more rigorous than this. They take performance reviews written by the employees themselves, their reports, peers, and managers, and calibrate them in meetings involving peer managers and several levels of managers above.

It's probably one of the more rigorous systems in the tech industry that at least avoids this kind of bias, though there are certainly ways it can be gamed.


I worked for a company that did these kinds of performance reviews and it was exactly as you describe. I found it basically "unworkable" because you end up with two jobs: (1) the actual technical job you were hired to do, and (2) trying not to be screwed by the performance process. So you have to play politics, constantly watch what others do, and try to advertise and sell the work you've done to managers. What if you're only interested in writing good code, not bragging about it? Especially if you're in tech because your skills are technical, not the schmoozing stuff. Some of the best developers are either introverted (which is fine :) ) or simply more interested in elegant technical solutions to real, actual problems than in political BS.

This performance process also makes people become rivals. They have to show they're better than their peers, which is 100% destructive to teamwork.

Then there's the unpredictability. In a good workplace, someone can work hard, deliver, forge relationships, and as time goes on they become very trusted. What you don't want is that person constantly distracted by having to prove themselves. You don't want the situation where years of good work suddenly, for no apparent reason, come up against something in performance management and get marked down. In another case, I saw two colleagues have misunderstandings due to geographical distance, culture, and possible mental health issues on the part of one of them. It all got worked out and fixed amicably, with no lingering grudges, but one of them still, I believe, got a sub-par end of year, thus blocking a way to leave that situation behind. That cannot be helpful.

Such performance systems also promote those more skilled at gaming the system than those who would do best in the more senior role. Thus you end up with incompetent managers, and technical staff who are good at stealing ideas and flogging them as their own but can't really code well. I'm glad I got the ** out of there, into a workplace that values friendship and collaboration instead!!

Is this the norm in big tech these days? If so... well, to me it just adds a big random element to everything. You can try to work hard, gain skills, and deliver, but other factors can mean you aren't recognised. This was always true to some extent (you need a decent boss, the company needs to be doing well, etc.), but my feeling is the "good old days" were more predictable.

I wonder if this performance cr*p ultimately drives away good engineers, and it'll all come full circle when companies realise it's a bad idea. But then, I thought that happened years ago at Microsoft when they ditched "stack ranking"? Maybe this keeps bubbling up? I mean, some daft managers think it's a good idea, it looks like it's working for a while, then anyone decent gets hacked off and leaves for competitors, then they abandon it, till later some so-called bright spark re-invents it?
