> as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work
The Hawthorne effect describes how any change in environment can temporarily lead to positive effects. The sentence you quote is more akin to Goodhart's law (https://en.wikipedia.org/wiki/Goodhart%27s_law).
> as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.
The Heisenberg Uncertainty Principle has found a new refuge. While atoms are evading it ([1]), people are implementing it.
While I think there is a lot of truth to this, I don't see how it can effectively be applied to our existing educational institutions. They've sort of become a self-serving bureaucracy at this point, so even without misguided policy pushing them in the wrong direction, I don't think doing the best work they can is actually their primary focus.
Naive response: Throw out the bureaucrats. Leave those with subject knowledge. Tell them to do their work to the best of their ability. Stop measuring things. Talk to them once in a while.
Your question contains a good chunk of our entire world's economic and bureaucratic problems -- namely people in power with the emotional intelligence and experience of a chihuahua.
> Throw out the bureaucrats. Leave those with subject knowledge.
My wife works in academia and all the 'managers' in her research department are experienced researchers who often split their time between management and research. And it just does not work, according to her. They don't understand the job of a manager, deem it less important than their research duties and basically see dealing with 'minor' personnel issues as a waste of their time. To run any organization of a non-trivial size requires some structure and bureaucracy. And you need people to understand, set up and own those structures.
Minor stuff like spending 3 weeks "unemployed" and without a salary because her boss couldn't be arsed/was too busy to sort out the paperwork, grant money almost getting lost in the shuffle, stuff like that.
And even if you perceive a personnel issue as genuinely minor and you genuinely do consider it a waste of time, telling someone that problems they perceive as genuine are so insignificant to you that you don't even want to consider them, despite that being part of your job, is a great way to build resentment and destroy teams. A good manager should be able to quickly and efficiently handle minor issues without making people feel ignored and insignificant.
You're making the case for an administrative assistant, which is a very different role and set of responsibilities than a team of executives or managers.
No, this is not the solution; I've seen this sort of thing happen in academia even with professors with assistants (thankfully my own professors have never been like this). For example, the administrative assistant presents the professor with an employee timesheet that needs to be signed in order for some employee to be paid, and the professor says, "thanks, I'm busy now, just leave that here and don't bother me again" and then doesn't sign it, causing the employee not to be paid on time. The administrative assistant does not have the authority to sign it themselves and does not have the authority to tell the professor what to do (and may be fired by the professor if they make themselves a bother by trying to nudge the professor into doing the paperwork).
The problem here is not that the professor is too disorganized, the problem is that some professors either don't perceive managerial tasks as important, or they view their managerial duties as impositions.
Well, the problem is that professors lack the motivation or ability to properly do administrative tasks, so somebody else needs to do them instead.
However, installing a massive, expensive and authoritarian bureaucracy seems like an extremely bad solution as well, and yet that is what we've got.
I'm suggesting that that bureaucracy ought to be dismantled and replaced by a smaller number of more focused administrators who have the authority to do administrative work (i.e. are allowed to sign the payroll) but are not members of the managerial class that is strangling the life out of the academy.
I like your suggestion. Yes, separate the job of professor from that of administrator; enhance research independence at all levels by reducing administrative power over the professor from above, and reducing the administrative role of the professor on those below them.
I note that, if they have the authority to supervise employees and allocate budgets, then under many systems this administrator would be called a manager rather than an assistant (which is appropriate because this person would be a peer, rather than a subordinate, of the professor). I imagine that the admin would administer the budgets that were formerly administered by the professor, and other researchers who work in the lab would report to the admin rather than to the professor (the admin would consult with the professor over hiring/firing but would not be required to take their advice). The independence of the individual professor is enhanced but, due to the loss of full control of the budget and hiring/firing, their power over other non-professor-level researchers in the lab is decreased.
Really though, what is there in a university that justifies a dedicated team of managers? One person in power (say, the dean) is enough to make sure grants are coming in on time, and to negotiate them beforehand. They need one accountant to make sure salaries, taxes etc. are within legal bounds and people receive what they were promised. Honest question: what else is needed?
Most of the managerial work I've witnessed in a university is easily doable with several slightly complex Excel spreadsheets. It just requires people to be diligent and not overlook the unquestionable need for accountability and visibility.
As an additional question -- what exactly is there to "run" in a university?
> Really though, what is there in a university that justifies a dedicated team of managers?
The same thing there is in any company with several thousand employees (let's not even start to consider the administrative hassle of dealing with several thousand students). Probably a lot more actually, since very few companies are doing research in oriental languages, nanotechnology, abstract algebra and contemporary music at the same time.
You might be right, but my girlfriend is currently a "leader" of her group in the university -- she's responsible for bringing the students' "grade books" (sorry, my English fails me here) to the proper administration, she creates polls for the freely choosable disciplines, makes calendar events for the occasional meets etc.
It takes her 2-3 hours a week and is FAR more efficient than the awfully rigid systems I came up through 15 years ago.
Self-organization is a powerful phenomenon. It can't solve everything, for sure, but a good balance of self-organization plus a smaller bureaucracy has, in my eyes, shown strong promise for the future.
Given I know someone who works rather hard, full time, on negotiating grants and other contracts, and they're far from the Dean level, I'd suggest you're underestimating the amount of work required to run and administer a university by a pretty hefty margin.
It's more likely that the dean is just not the person who should be writing the grant proposal. Professors whose groups are actively working in the area the grant is for will write a much better proposal. It's also expected that as a professor you are -- to a significant degree -- capable of funding your group.
I'm not even talking about grant writing. I'm talking about things like negotiating licensing agreements for private industry, contract negotiation to make sure publication and graduate students are protected, etc.
What's more, that's a full-time job all its own, and it is only one aspect of grant administration. Unless the Dean is some sort of omniscient AI, that's not going to work out well.
>"could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?"
Sounds very reasonable. Does anybody have good examples of this approach? I have the impression that this is similar to the environment of Bell Labs during the "golden years".
If one were being truly radical about this, a possible solution is to replace researchers' salaries with pensions -- remove the incentive to retire in place by allowing you to retire at home if that's what you want.
And then you have a pool of professors made up of people who can afford to teach for free, or perhaps create a system where some finance leech buys your future pension for NPV payouts now.
> a system where some finance leech buys your future pension for NPV payouts now.
Isn't that called a loan?
I imagine that smart finance-leeches will tend to avoid loaning against 100% of the value of the pension, for much the same reason it's hard to get a mortgage that takes 100% of your income to repay. Beyond that -- where is the problem?
Beyond that, the problem would be getting the same results as paying someone directly, while reducing efficiency by inserting a middleman to take a piece for... why, exactly?
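For what it's worth, here is a minimal sketch (invented numbers, plain Python) of what "NPV payouts now" means in the comments above: the buyer would offer at most the discounted present value of the future pension stream, and in practice less -- that gap is the middleman's cut.

    # Hypothetical figures: a pension paying 30,000/year for 25 years, discounted at 5%.
    payment, years, rate = 30_000, 25, 0.05
    npv = sum(payment / (1 + rate) ** t for t in range(1, years + 1))
    print(round(npv))  # ~422,818 today, versus 750,000 paid out over the 25 years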
And there it starts. Don't do that. In Germany we had this discussion in the 90's. All kinds of measures were put in place to catch the slacking professors. Now we are left with a system the most talented don't want to work in.
Most academic people I know love their work. They do overtime although they are paid much less than they would be in a comparable position outside academia. Just live with the 5 percent or so of slackers. The rest of the pack will squeeze out much more when not tortured with bureaucracy.
You must rate teachers somehow. You can use explicit external criteria, which, as the article indicates, are inherently manipulable and biased, and thus counterproductive. Or you can use implicit internal criteria -- feedback from the participants who were hired to embody the objective function, to walk the walk. Given the tradeoffs, what better way is there to judge than anonymous peer review?
Retiring in place is actually quite boring, and the feeling of uselessness that results is depressing. People won't actually do that unless the alternative is worse. If you hire people who are passionate about whatever it is they were hired to work on, they might not do work that is profitable, but they'll do work that is interesting and has some benefit.
Those people will still have a passion for something, or they're going to be profoundly unhappy. Consumption (whether it be TV, video games, drugs, etc) can only provide some distraction from the pain, it doesn't make it go away. In fact, the longer you try to deal with the pain by distraction, the worse it gets.
Most Polish professors are actually retired in place. They do the minimum required amount of teaching and put their names on (usually completely unoriginal) papers written by their subordinates. Most of them haven't done any significant original research in decades. To pass the time (and to gain power to protect themselves from being kicked out etc.), they take on administrative functions such as dean, head of unit etc.
See, we have two ends from which to optimize the system, and we have to choose one. Either the system encourages the most talented to thrive, or it's designed around mediocre individual performance where the worst performers are kicked out. In the former, high performers thrive, while in the latter the best we get is guaranteed non-zero output.
There is a beautiful dualism in rewarding and measuring. It's actually quite hard to invent good measures for creative work, so whatever measure you come up with probably does not measure what you want it to measure. The second problem is intrinsic versus extrinsic motivators. Extrinsic motivators like pay rises or the risk of losing one's job work fine only if the task is simple and maps to linear performance -- like, say, logging. For any creative effort, extrinsic motivators kill performance.
Thus, it's very hard to measure creative work, and even if you could, you would not really want to use that data for anything because that would kill performance.
A smaller-scale UK-based example (which gets a mention in the blog comments) is the MRC's Laboratory of Molecular Biology in Cambridge. Still very much a going concern, although I suspect its culture has changed somewhat over time.
I would agree ... nowadays ... not sure the hiring function of any institution is capable of doing this anymore ... it's sad that the good old Bell Labs days seem over ... how this impacts the future remains to be seen though ...
[Google, Facebook, Microsoft] Research are probably similar in many ways -- well funded by rich companies that are self-interested in moving the field forward. We'll only know their true impact decades down the line, of course. MapReduce has certainly been very influential.
Bell Labs wasn't just a playground with infinite cash; they were very much supposed to figure out telecoms stuff for the benefit of Bell -- they just had very wide latitude in determining what that was. I suspect the same is true for the aforementioned research departments.
(That said, I don't think those departments, nor Bell Labs, are a meaningful model for academia in general.)
I'm sure this resonates with university lecturers all over the Western world. The late writer and university academic Mark Fisher has some interesting points about this effect in his 2009 book "Capitalist Realism: Is there no alternative?"[0]. He saw it as an inevitable byproduct of the infiltration of neo-liberal ideology into the academic sphere. Setting targets and measuring performance indicators are justified with efficiency arguments, but ultimately cause an increase in bureaucracy and a decline in mental health.
I have often argued that "being a realist" is a failure in and of itself. If people had said "well, there's a king and that's what it is", we'd never have got to where we are now.
Ideologies (and ideas) are, in my view, stronger than facts, which often only represent the past, while ideas shape the future.
(These days the word "fact" is a bit of a trigger, but I thought about that well before all that jazz, and the current phenomenon seemed to prove my point to many of my friends.)
Fun fact: that's the original definition of the term Realpolitik, as coined by Ludwig von Rochau, a German writer and politician in the 19th century.
He said that the great achievement of the Enlightenment had been to show that might is not necessarily right. The mistake liberals made was to assume that the law of the strong had suddenly evaporated simply because it had been shown to be unjust. Rochau wrote that "to bring down the walls of Jericho, the Realpolitiker knows the simple pickaxe is more useful than the mightiest trumpet."
I've been reading "Realpolitik: A History" and I think there are some useful lessons for today.
I think one has to be a realist about their tools and an idealist about their goals. And the drive is the most important factor. If they're just going through the motions without intent, without an ideal, it won't lead anywhere.
The will to preserve (imo absurd[0]) or the will to change.
Of course, there isn't any one true model of thought. So I'm surely wrong.
[0]: I don't think anything can last, civil rights included, but things can come and go
I think a person can be both, and indeed, must be both if they wish to change the world. I find that I have less and less patience for people with high-minded, admirable ideals, who refuse to do anything other than take the uncompromising, principled high-road. I've begun to realise these sorts of people are just self-indulgent narcissists, who fundamentally don't care about [the environment | racism | gender equality | defeating the lizard-people]. What they really care about is feeling good about themselves, having others think well of them and retaining a sense of moral superiority over others.
If one actually wants to achieve their ideals, I truly think the 'uncompromising high-road' approach is a footgun. I've seen it play out a number of times. The Australian Greens party, for example, voted down a carbon emissions trading scheme about 5 years ago because it didn't conform to their exact ideals. The result? They were eventually forced to accept a less stringent 'carbon tax', that was ultimately repealed about a year after it was enacted. And not once did I see any introspection, nor any comprehension that their 'principled stand' resulted in the worst possible environmental outcome. I can just imagine the self-congratulatory "we stuck to our principles, we can hold our heads up high" BS in their party room. Pity about the environment, but I guess that's beside the point.
I think that we're more likely to achieve the 'just' or ideal outcome when we address the 'is', not the 'ought'. We have to work with the situation that is in front of us; not the utopia in our heads. Don't get me wrong, having ideals is important: without a destination in mind you will find yourself on a road to nowhere. I consider myself an idealist. But I personally find that 'results' are much more satisfying than abstract ideals and lofty thoughts.
If you are a high-minded idealist, that's great. It's the first step towards a better world. But you should also honestly ask yourself: What is my true aim? Do I want to feel good, or do I want to do good?
Setting targets won't solve anything since this is a structural problem. You cannot leave those structures in place and expect a different outcome. That is just treating the symptoms.
In my view the solution is simple: instead of letting bureaucrats and administrators decide, let parents and students choose the classes and teachers they want. Those teachers that offer classes and a teaching style that students actually want will do fine, those that don't will have to find other jobs. Problem solved without artificial metrics that never work anyway.
Besides, it's the students' money. Why shouldn't they be able to choose what they want to learn and who they want to learn from? I suppose you could make the authoritarian argument that the experts know better than you and should be allowed to force you to take the classes they think best. But that kind of thinking is the root cause of the current problem.
This might be the start of a good approach, but I think it would largely reward popular teachers and easy classes. Back in university I would not have trusted myself to decide which were the necessary classes. With family, stress, work, etc. it would be too tempting to take the easy road. Maybe some combination of students and teachers...
I think rewarding popular teachers would be a feature rather than a bug. As for the too-easy problem, well, if you had to take a certification exam (like, for example, the bar exam) then you would have to choose the courses you thought would most likely help you pass.
My larger point was that the current educational system has stagnated and seems to be frozen in the late 19th century. It is highly--even actively--resistant to change. It is very expensive for the results it delivers. Whatever the right answer is (if there is one) requires thinking about the root causes of the problems and possibly fundamental change. Unfortunately, with topics like these people seem to want a different result without changing the underlying system that produces that result.
The problem with that is that who teaches will just become a matter of who knows the parents best and who gives out good grades... There is some need for administration, but not to nearly the extent that exists. The waste of money is truly a large problem. My school district has spent millions on useless computers and radio towers that maybe 1/40 of the school population will ever touch. Reducing the number of tests children take may help; I don't think giving parents and students free rein over teachers would be best.
> I suppose you could make the authoritarian argument that the experts know better than you and should be allowed to force you take the classes they think best.
I'm not sure that's especially authoritarian rather than just practical. I have a CS PhD focused on optimization and machine learning. I trust a student to mostly decide whether they feel like computer science is a good field for them to go into, but once they've made the choice to work on machine learning, they have very few useful opinions to offer. I don't care if the student thinks statistics or linear algebra are useful, boring, whatever. Part of what that money of theirs is buying is for someone like me to make the plan that gets them from where they are now to where they want to be.
I don't disagree with your point. I was mostly thinking of general education and the high school curricula.
On the other hand, I'm not a big fan of the PhD style program. It seems old-fashioned and possibly obsolete to me. I would prefer to see it replaced by something like, in the case you mention, an advanced certification in machine learning. I don't know the field well enough to say what such a certification might entail but maybe testing and some kind of project?
I think what you're describing is a need that's filled pretty well today by a master's degree.
I don't think the PhD is so much archaic as just overused. It is intended to signify that a person has demonstrated that they can contribute significant new knowledge to a field of study, and incidentally, that they can teach (graduate) university courses. Society probably needs far less of both those things than there are people who want to do them.
Yes please. I want my kids to compete in the job market against the millions of people who would be taught science and evolution in the religious schools that would clearly emerge under such a system.
As if that isn't already happening under the current system. But, hey, why even think about change? The US educational system is already so good at improving, not letting institutional inertia block new ideas, offering better value for parents and students,...
Say what you want about the inadequacies of our education systems, but they have helped to deliver the amazing world in which we live today.
Let us not go throwing the good out with the bad in pursuit of some ideal that isn't within reach anyway. There is no perfect answer, but I'm personally wary of unintended consequences of radical change. Our social, political, and cultural survival may very well hang in the balance.
Reminds me of James C. Scott's "Seeing Like a State", which chronicles how various attempts to make complex, organically developed institutions organized and intelligible have historically made things much, much worse, from planned cities to collective farms. If it ain't broke...
So many early-career researchers (myself included) bemoan this system, and it pushes many of them away from academia and towards fields where you aren't constantly playing tricks (fake co-authorship, salami publishing, over-hyping conclusions) to get ahead in a game where everyone's striving for a tiny number of badly paid jobs.
The authors themselves admit that NCLB was a mixed bag: "Specifically, we find that NCLB generated large and broad gains in the math achievement of 4th and (to a somewhat lesser extent) 8th graders. However, our results suggest that NCLB had no impact on reading achievement for 4th or 8th graders."
You'd think that you can employ drill practice to improve results in mathematics, but that approach won't work in English. American math instruction feels quite alien to foreigners: the emphasis is on the application of algorithms, not on understanding why and how they work. Equipped with plenty of drill practice, an American child can fill in the correct bubble on the test whether or not it understands mathematics. Any understanding of mathematics gained in that system is purely incidental.
That's all a hypothesis and would require further testing.
All employers want their employees to "do the job to the best of their ability". But the freedom given to you to decide how your success is measured is proportional to your experience and past performance. There are those in both academia and private industry that get a lot of freedom to do their work/research.
But there is a hierarchy to this, just like everything in nature. The exceptionally strong and bright animals have more freedom in what they choose to do and how they do it. And that system has worked to get us all this far.
The system this article is pushing against isn't fun for many but inevitable. The further you are from the top the more subject your performance is to poor evaluation schemes conceived by those far from the top too.
Scientists are employees in many contexts (almost all contexts). We may just have to agree to disagree on that one. Be they in service for an NGO, school, or oligarch.
But to say nature has no hierarchy? I suppose "hierarchy" can mean many things. But in my case I mean fitness and selection for it. The most fit are at the top. Nature tends to select for those better suited to their environment. And those best suited thrive and push the success of us all forward. It is how you got here and have what you have. Intellectual, emotional, and physical traits are all subject to this selection. The thriving academics with the most freedom are closer to the top. The best employees (however you want to define employee) tend to be given the most freedom and trust.
If you look at any natural habitat with N kinds of animals, even there what you see can't be called a hierarchy. It's a semi-stable state on the verge of chaos which can very easily be broken by introducing a species of animal new to the area.
So no, it's not hierarchy. Lions can kill almost anything if they feel like it, but hyenas kill lions on a regular basis. Does that make them the top of the food chain in the savannah? Definitely not. It only makes a pack of hyenas stronger than a single lion or lioness.
Rock, paper, scissors. No hierarchy. It's the same in the human systems. Hierarchy is an artificial system enforced by humans and has almost zero relation to anything natural.
Not all lions kill as well as other lions though. Take a snapshot of all the lions in the world alive right now and rank them by their ability to kill and the number of offspring they have. Some are better at these two things (they killed more and attracted more mates). This is quantifiable and trivial to fit into a hierarchical scheme of some kind.
It isn't hard to extract similar hierarchies from other species...including humans.
Everything we or other life forms do comes from an attempt to make an improvement. Be it selfish or altruistic, it is always to alter some state of the world for a perceived benefit.
But our competence varies. So we all fall on some spectrum of competence in any given endeavor no matter how small.
From this variation and rank we can derive hierarchy.
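A minimal sketch of that claim (illustrative only, with invented data and an arbitrary scoring rule): any measured variation can be sorted into a ranking, which is all this sense of "hierarchy" requires.

    # Invented numbers: (lion, kills, offspring). Any such measurements can be ranked.
    lions = [("A", 12, 9), ("B", 7, 14), ("C", 15, 4), ("D", 3, 2)]
    ranked = sorted(lions, key=lambda l: l[1] + l[2], reverse=True)
    for rank, (name, kills, offspring) in enumerate(ranked, start=1):
        print(rank, name, kills, offspring)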
There is such a thing as variation, and there is such a thing as varying degrees of fitness for some particular configuration [of whatever] in some particular environment [wherever and whenever]. That does not automatically imply hierarchy. That is an intellectual construct of your own mind. Hierarchy is a human social structure that you map onto all of nature at your own peril.
I have a different take: it's the lack of pay for professors and the lack of respect for public education. (On average a college professor makes less than a public school teacher; in my discipline I would have averaged about $20k less IF I got tenure.)
As a former academic professional (librarian at a college), I'd say the high cost of college is based on the demands of students. 30 years ago there was no internet, no smoothie-bar-equipped fitness rooms, no pretty dorms and everything upscale. There were no pay increases for staff, and I actually went through 3 years with no cost-of-living increases while the cost of college went up 15%.
Then there's the idea that public education in the US is a failure (a totally false narrative except in the cities and other low-income communities). So people believe that private investment and legislation will fix things. This just has people coming in, making and taking money from teachers and the community, and taking it elsewhere.
> 30 years ago there was no internet, no smoothie-bar-equipped fitness rooms, no pretty dorms and everything upscale.
How certain are you that these are the reasons school has gotten more expensive?
I'm a student at a school with a decent amount of cushy stuff and a high tuition, and I can't say it looks like that's where the money's going (or even being sunk). Housing got cushy once university policy stopped forcing us to live on campus, at which point it had to compete with surrounding options (before that we were charged through the nose for pretty tight quarters). The school-provided food options are... well, the university's definitely making money on those, and if they're not, the company they contract out to is. I gotta say, I don't think paying for gym equipment has significantly held up salary increases on the academic side of things.
You might know better than I (and we're probably in different places), but looking around campus, the things that seem to be sucking money away from paying the professors more are a) the ever-expanding number of administrative offices and b) the constant renovation/replacement/addition of buildings. I'll be the first to tell you my campus is an eyesore, but it feels more and more every year like they did that on purpose in the 70s so they wouldn't face much resistance in sinking a ton of money into replacing it with a shinier one now.
I'm also not totally unwilling to believe the standard economic argument that gets thrown out all the time: with so many more people leaving school with advanced degrees, finding people to fill teaching positions is a lot easier.
When I ask the admins at my school about this, they laugh. If you look at the pie chart of where the money goes, it basically goes to salary. So the rise in cost has to be from a rise in salaries paid.
They say the driver in the US is health care, but there are also more people on the payroll.
LOL, it is also the admins that got the biggest increase in that pie. I lost all faith in college admins, and that is 100% why I left.
There is no evidence that the payroll of colleges has increased, and the evidence points to lower salaries since 1970. Administration numbers have increased fourfold. Less than half of professors are full time now.
> 30 years ago there was no internet, no smoothie-bar-equipped fitness rooms, no pretty dorms and everything upscale.
Weirdly enough, I went to school 10 years ago, and there was crappy internet, ugly dormitories with housing shortages, and decades-old gyms, with "smoothie bar" being a totally unknown term. They finished building a new gym with a smoothie bar in my senior year.
They didn't add new dorms until after I graduated.
The high cost of college in most cases is actually fueled by the decrease in funding at the state level. The states have dramatically decreased their support for universities, federal funding can only do so much, and thus the only lever left to turn is tuition.
You could cut all student amenities out and most public universities would still be in budget crisis.
“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it.
-- Attributed to J.J. Thomson (although I've not been able to turn up a definitive citation -- anyone know where it comes from?)
Does make me think some people have been pondering this for a long while though.
I don't know where the quote is from but it represents my only hope for the future of research.
Of course, the hardcore bean-counters would not let that middle ground happen either; they are people in power who will give you almost no leisure at all.
It's still a good idea, however. Not many managers would embrace it, since the notion of "paying somebody for possible benefits 5-10 years in the future" is something our current form of capitalism hates. But some of them are reasonable people and would accept a middle ground.
Another thing I was thinking about in the past is rotational work: given a team of 6 people, have 1 person always working on research for at least a month, with zero responsibility for the money-making activities. After that month -- or two, or six -- expires, return the person to the capitalistic loop and put somebody else on research duty. This, however, is rather clunky because people will constantly have to be caught up on where things stand in both the money-making job and the research job.
Perhaps there are other ideas as well, but I am not aware of them.
In a university setting, it seems to me that teaching, in combination with support from endowment (not research grants), can be one scenario for this ... the key is still to have fundamentally good people with passion and joy for the endeavor ..
The thing that always infuriated me about computer science (my field), is that I don't actually need the big research grant. There are people who do, of course, but I could do most of what I found interesting and useful with a high-end desktop PC and some free time.
But universities don't operate that way anymore. It doesn't matter that all I need is $5K and some time -- the university needs their 40% cut of a multi-year, multi-million dollar collaboration. And so the entire system is set up to fund those projects and those projects only.
But you can only have so much free time, which means you eventually will have to "buy" time. Grants help you do this by allowing you to fund grad students and postdocs.
Not everyone who is a good independent researcher is a good man-manager. For some people, researching independently -- or perhaps collaborating with a few other independents -- might be a better route than being forced to spin up a research group (and more-or-less-inevitably finding themselves with limited time to do the hands-on stuff themselves).
I've been out of academia for a few years now, but I think my university took 40%, and the department took some more on top. The total was probably 60% or so.
Indeed you can. It's much harder though. If you have a full-time job unrelated to your research, you need to self-fund a lot of expenses. You need to convince your boss to let you take time off to go to that conference to present your work. You need to foster connections to other researchers through activities like reviewing for journals, etc.
Of course, you can bypass all that and just publish straight to the internet. Personally, I find that to be enormously difficult. The interactions with my peers -- the conversations over beers in the evenings during conferences -- are so useful. Losing those hurts your ability to do good work.
I get that we're all humans and feel better if we converse face to face sometimes. I certainly appreciate people doing research on their own expense and time for the general good.
So now that you are out of academia, do you occasionally pursue those low-cost research ideas in computer science on your own? I mean, while $5k isn't anything to sneeze at, it is still within the realm of DIY research monetary levels (i.e. if I wanted to, I could save up such an amount in a few months from my workaday salary).
So -- unless such research no longer interests you (or you have zero free time to devote to it) -- it would seem that, since you don't need to kowtow to academic whims to get massive amounts of money for research, you should now be able to pursue these questions...
I generally stay current in my research field, but that's a different thing than actually contributing. The free time is one aspect of the problem, but there's also the problem of attending conferences regularly. I could pay the expense with the excess salary that industry pays, but unless you work for a research group, it's unlikely that you can make a workable schedule.
Not to say it couldn't be done. But there are enough hurdles that I end up mostly just not participating much. That's not ideal to me, but it's where my actions show my priorities to be, I guess.
It shouldn't be a surprise given the fact that physics was revolutionized by a patent clerk.
Additionally, having a relaxing job with lots of free time doesn't just free you to explore hard problems, it also promotes creativity. This is because stress is the ultimate creativity killer. Thus these sorts of jobs are really a recipe for deep thinking.
Solutions usually come from people who see in the problem only an interesting puzzle, and whose qualifications would never satisfy a select committee. -- John Gall in Systemantics[1]
A very short read, and quite interesting. It touches on a lot of the same points already made in this thread (people will game metrics, etc). The biggest argument the book makes is that trying to set up a system to accomplish something will cause the system to do everything but accomplish its goal, which leads to the above quote. You can't set up a system to achieve a goal, only set up an environment that allows the goal to be achieved.
If you're a private enterprise, anything you do can be reasonably related to money (revenue, profit, long term forecast etc.), and we have ample tools to measure performance, align incentives and evaluate risk over the money-abstraction. This works because money flows through the whole process and partakes in every transaction.
Academic pursuits are, monetarily speaking, money sinks. They might be worthwhile investments, but only insofar as they deliver patents or attract grants. Academic professionals want to advance the state of the art, and only small parts of that work are applicable to the money-abstraction.
In order to prevent Science(TM) from grinding to a halt, we replace Money with some other measurable so we can apply our existing tools and make risk assessments before granting money to academics who will (most certainly) spend every last cent of it to indulge their pursuits. We choose measurables as a trade-off between academic freedom + infinite spending and academic constraints + finite spending, because the supply of Science(TM) is practically unbounded.
I'm stating the obvious here in order to ask the question: What is the gosh-darned alternative?
A good solution for universities is to pay researchers a full salary for a 30 hour week. Where 10 hours must be dedicated to teaching and the rest can be used at their leisure. It is a happy arrangement for all parties.
The number of stated hours that a salary is allocated for is irrelevant. Nominally, professors are supposed to be working 40 hours a week. This stops exactly nobody from working between 60 and 80 hours regularly. The problem is cultural, not policy-based.
There's a long tradition of important advances in knowledge being made by people with an undemanding job [1] [2]. The 19th century university became a hotbed of research because the teaching workload was relatively light and they didn't require that you also did research.
The parson-naturalist list you link to lacks one very prominent scientist-cleric: Gregor Mendel (though I believe this to be deliberate, as he performed research for economic reasons rather than spiritual reasons, iirc). The earliest research on genetics took years to complete and was done as a side project in addition to other monastic duties.
Siddhartha Gautama (the Buddha) was (became; established a whole tradition of being) a hermit/beggar/monk; Socrates was a stone mason (occasionally); Diogenes the Cynic lived in a barrel and begged for food; Jesus was a carpenter and then a wandering preacher; Spinoza (as you note) was a lens grinder; Kant was a tutor; Einstein was a patent clerk. And, as you note, especially in the 19th and 18th centuries, many proto-scientists were people with either an established income or independent wealth, who had the freedom to pursue whatever interested them.
We ought to be finding ways to broaden the number of people who can pursue their interests, while not forfeiting security, dignity, and social respect. This is one of the arguments for universal basic income, that it could decouple (some) people's desire to pursue knowledge from the need to earn a living, and maybe even especially from the need to produce regular, measurable results. Right now we're channeling most of our people who are interested in and capable of developing new knowledge into jobs which demand "results" more than they demand that truth be followed wherever it leads, and however long it takes. This will lead to a great deal of dubious Normal Science [0], and very little revolutionary thought [1].
Note that having a job seems necessary though. Maybe it's a case of the golden mean? On one side you have people whose job absorbs all leisure. On the other you have some who draw sufficient income without having a job at all. These are Marx's infamous "coupon clippers", and historically they're not as prominent in intellectual history as people with undemanding jobs (personally I'm not aware of many philosophers / scientists who were rentiers). If leisure was the only variable, we'd expect people with guaranteed incomes (annuities, for example, have been sold since the Middle Ages) to have been the most prolific thinkers of history.
I think you have to go back to the 18th century for that. I'm thinking of Britain, where science was something acceptable that gentlemen could do in their spare time (with the smart+poor doing a lot of the heavy lifting), and in particular Antoine Lavoisier in France, who literally bought the rights to be a tax farmer so he could do his research. He lost his head in the revolution, so there's that. YMMV.
Well, Lavoisier's tax farming was not passive income: "Lavoisier's researches on combustion were carried out in the midst of a very busy schedule of public and private duties, especially in connection with the Ferme Générale." [1] The gentleman scientists were also usually involved in some sort of business, even if it was just administering their estates. The people who truly had a passive income - relying on annuities or government bonds, mostly inherited - are hard to spot in the history of science. They were a pretty numerous class though [2].
I'm not sure that's true. It would be interesting to see some numbers. My naive impression is that there are at least two things needed to make intellectual history: genius, and opportunity to practice your genius. Both are rare, and given that there are far, far more people who need to earn a living than people who don't, I would expect the absolute numbers of (Genius)*(NeedsAJob)*(HasAMenialJob) to be greater than (Genius)*(Doesn'tNeedAJob).
My impression is that gentlemen of leisure have, especially in the 19th, 18th, and 17th centuries, been a very disproportionate percentage of intellectual history, and that even the nominal jobs in that group (parson, Lucasian Professor of Mathematics at Cambridge) were wholly orthogonal to whether they produced.
What do you think having an unrelated, undemanding job would contribute to intellectual work?
I don't have a good theory - it's just based on reading biographies. Almost no famous thinkers lived on passive income (although, as I've said, annuities and bonds have been widely available for over 500 years). And probably not any kind of job will do. The most plausible explanation I've read belongs to Richard Feynman:
> I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological. When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don't get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come. Nothing happens because there's not enough real activity and challenge
The sentiment in general is reasonable to me: that humans are generally more effective and more productive when they have something to do and work on, even when they're not getting anywhere with their main project(s).
It's not at all clear to me that a menial job is a strong solution to that problem. I would expect that in, say, a world with UBI, that also recognized this problem, people would find better ways to keep themselves feeling like they are at least doing something and making some contribution, than working on things they don't care about or believe in. There are plenty of meaningful volunteer opportunities in the world that would be much more effective "make you feel useful" solutions than filing papers in a bureaucratic office.
I could see this working in mathematics and many branches of computer science. How would this work in environments that require large outlays of money to do experiments? Do you just hand untested individuals large sums of money and leave them alone?
I believe putting academic communities in autogestion (i.e. workers' self-management) is the way to go. For really large projects you can also include citizens (selected by sortition) and elected politicians, in addition to the temporarily mandated members of the community, on the committee taking the decision. Of course the citizens and politicians would have to be trained a bit, so the decision process might be long, but these kinds of projects are never in a hurry.
This sounds an awful lot like Google's 20% time and the "hack days" done by many other companies. Very few guidelines and no consequences for doing something insufficiently useful, but it ends up with a lot of valuable ideas anyway (and granted, some automated foosball tables).
The one place where the University and the Professor are perfectly aligned is the summer salary. Most professors are paid for only 9 months by the University. They need to pay the remaining portion out of their grant. This gives the Professor and the University a mutual incentive to secure one or more grants. The Professor gets paid and the University gets their overhead tax on the grant.
How different is Thomson's idea from the modern academy?
A PhD is granted for showing that you know how to do research. Tenure is granted for showing that you can do research. Tenured professors are free to research however they like, and get paid as long as they teach.
This isn't an academic issue, the same is true of almost every white collar profession.
To apply standard numerical management techniques you either need to know exactly what a person needs to do as part of the process, or be able to evaluate the output, ideally both, and generally neither are true.
My wife works as a teacher at one of the largest schoolboards in Québec, and she can tell you that the solution "hire good people, pay them the going rate and tell them to do the job to the best of their ability" doesn't work in a unionized system.
Complacency is a systemic problem. When your union protects your job, you don't need to actually put effort into doing the job. Of course, that doesn't mean ALL teachers are lazy; but it does mean that a teacher who works hard and gets better results than average is not rewarded for it. It also means that bad teachers cannot be replaced by good ones.
Of course, preventing public school teachers from unionizing will simply allow the government to exploit them. Perhaps the solution would be to replace unions with a professional order, similar to how engineers and lawyers are regulated.
On the contrary, consider team sports as a perfect example of a meritocracy which often also promotes solidarity. A good team, led by a good coach, recognizes merit is earned by the performance of the whole. The whole wins and the whole loses together.
The best of sport very clearly defines success and failure. It uses competition to spur innovation and the pursuit of excellence. Solidarity is often the consequence of that pursuit.
Very little about education, from how teachers are managed to how schools are run, does the same.
For sports, one very important difference is that real-time performance is monitored by a large number of people -- people who do not pay the athletes' salaries and do not meddle directly with the workings of the team.
For any other kind of job, performance is measured by insiders. Outsiders may gauge performance on a quarterly/annual basis.
Team sports and the innovation, pursuit of excellence, and solidarity that comes with it are prevalent at all levels - recreational, amateur, and professional.
Probably more abstract than you were hoping, but look at the controversy surrounding public education in the US right now. It's always been a politically 'hot' area and each new administration wants to de-regulate or re-regulate the sector in some new and ever more confusing way. If it wasn't for the unions, I'm sure some administration would come up with the idea of firing the bottom 10% of teachers based on standardized test scores or something similarly misguided.
My criticism of that example isn't that it's abstract, it's that it's purely hypothetical. I'm claiming there's no evidence to back up the assertion that there would be bad effects without public sector unions, so yet another assertion of this does not really change anything.
Fair. I would assert that one would only have to look as far as the parent article to get an indication of where things would go however. As others have said, adding metrics that become targets is bad, and in academia this usually turns into using publishing to determine who gets tenure. In this sense, professors are essentially non-union teachers and their promotions are based on random metrics.
I should also mention that we've already seen some of these effects in public schools with the "teaching to the tests" debate in which students are often subjected to a large number of standardized tests that are used to make funding decisions.
I think the problem is that in the end, any debate over schools at the federal level will always turn into something reminiscent of the gun regulation debate where neither side's argument is founded in reality and as such, the teachers will be the ones to suffer.
It doesn't matter how good the people you hire are, the current educational system is absolutely horrible. A good teacher can salvage some results from the disaster, but what we really need to do is switch to a system that doesn't need superstars to produce results.
It is a sad state of affairs that the thing education teaches most effectively is that learning is a chore.
They probably aren't hiring good people, or at least they aren't hiring enough good people. The problem is that there aren't enough "good people" to go around. Challenging PhD programs weed out people who aren't self-motivated to succeed. Teacher education programs do not. Education programs in most other fields have a hard time doing it.
That advice is basically "just hire great people." No shit. The reason you need management is because that usually isn't possible.
Isn't 'get better-run unions' an option? Germany, for instance, has worker representation at the board level. I find it hard to believe that they tolerate members phoning it in en masse.
It would be great to see Nassim Nicholas Taleb's response to this. He's been working on a draft called Skin In The Game.
Overall, I think that the GP is right but seeing that requires us to consider that nebulous ideas like character and cohesive culture need to be at the forefront more than market reasoning.
I think this is somewhat well understood in fields like sports psychology. You don't tell a receiver to catch X number of throws per game, you just encourage them to perform. Likewise, training that more accurately simulates performance is more effective than working toward some metric.
I think this is key. In the sports arena, the metrics are VERY clear: if you don't win games, and your stats are poor, you're fired.
I understand where academics are coming from, but at the same time it seems like many people want to be isolated from any objective assessment of their performance at all. FWIW, I see this a LOT among software developers as well.
Current research indicates that the best way to promote performance is not to encourage someone's current performance, but instead encourage their development of skill mastery.
In the short term, rewards (like money) produce the best results of all, but they also seem to impair learning.
>You don't tell a receiver to catch X number of throws per game
They kind of do though. In the NFL, many contracts are incentive laden with bonuses for (in the case of a wide receiver) X number of catches or Y number of yards.
If the blog post seems anecdotal, there's a great, underused book, 'Measuring and Managing Performance in Organizations' by Robert Austin that covers his CMU PhD work, which came to the same conclusion as the quote attributed to Harford: "If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability." It doesn't hurt to make sure they don't have to worry about where to get, e.g., paper clips, lunch, or health care, so some level of administration is helpful.
Back in 2010 there was an attempt to put Deliverology in place at California State University. It basically means that academia will be measured by targets. A horrible idea, and one that made the California Faculty Association meet and decide on something close to mutiny. They even invited John Seddon, who wrote several books on how Deliverology twisted and perverted systems in the UK. Fortunately for us, the video of his presentation is available on YouTube at https://youtu.be/8EGMlCau6iU for part 1; there are about 5-6 parts.
If you liked that, then I would highly recommend watching John Seddon's presentation on why setting targets and implementing broken IT systems to enforce those targets is a horrible idea. https://youtu.be/hbNsQFd8DQw
I think one way to 'treat' this would be to acknowledge that certain things just cannot be optimized easily with outside incentives, and to simply accept that there will be some inevitable waste when you let 'good people do their job as best as they can'.
It's ironic that we fret so much about professors who 'retire on the job', but at the same time think it is perfectly fine to steal time from the motivated researchers by making them play these silly games too.
Who really thinks the next breakthrough will come from a person that was forced into writing another paper, just to pass some arbitrary performance threshold?
If you consider research as a 'stochastic race', i.e. many people working on a problem with their own intuitions and approaches, it makes no sense to slow down those that are most likely to come up with a good solution (those that are self-motivated). IMHO, it does not really matter if those that are unlikely to contribute much (retired on the job) get a free pass. After all, even to receive tenure takes a lot of effort (and thus, ambition) nowadays, so at least in fields with a competitive job market (i.e. good salaries outside academia) it makes no sense to strive for a tenured position and then not do what you love (i.e., research).
BTW Daniel Lemire's blog has many interesting posts on the more absurd aspects of academia, including research grants (e.g. http://lemire.me/blog/2009/09/15/the-hard-truth-about-resear...). IIRC one of his suggestions was to fund people (for longer terms), not 'projects'. This may not always be possible (e.g., if you need expensive equipment), but in many fields it would make much more sense.
If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.
Absolutely, could not agree more. This is the only way to beat gamification of metrics in my opinion.
There is a worrying trend of treating staff as equally capable but untrusted cogs in a wheel, and metricising performance. I fundamentally believe that no amount of rules or metrics can turn a bad academic into a good one, an unsafe doctor into a safe one or a crooked politician into an honest public servant. We have to calm down with the metrics, accreditations and regulations, and accept that some people are better than others at certain things, hire the right people and trust them to do a good job.
Seems like in IT the spread of different process systems is exactly this: reduce each job to a set of functions so that people with similar levels of training can do the job at the same level of output, interchangeably. So we're left with jobs that are 90% following inane process and 10% producing something useful. All to make sure that if we quit, we can be replaced in a few days.
The folks at Toyota talk about two different kinds of process: controlling and supportive.
Controlling is that sort of BS. And I had seen it so much early in my career that I thought that's all there was.
Supportive process is bottom up, done to improve consistency and reliability. E.g., by my door I have checklists for going running and swimming, because I was tired of going out to exercise and realizing I had forgotten something.
I say this because I think process can be amazingly helpful if it's done bottom up, if the workers are the ones who design and improve it. But that only happens if managers have another thing that Toyota talks a lot about: "respect for people" [1]. That's what I think is really missing in these metrics-driven, people-are-cogs schemes.
“When a measure becomes a target, it ceases to be a good measure.”
The measure has to be concrete enough not to be manipulable.
For example, aircraft mechanical failures or hospital infections are very clear and obvious metrics. And besides outright lying and manipulation, the metrics speak for themselves and can't be corrupted.
You can't, however, measure "education" or "intelligence". You can only approximate them, and the more you make depend on the results, the worse the already imperfect metrics become.
> ... hospital infections are very clear and obvious metrics.
Are you kidding? There are all sorts of ways for hospitals and doctors to manipulate this stuff, e.g., attract healthier patients (less compromised immune systems) or ones who are wealthier, younger, and more likely to follow instructions; give more prophylactic antibiotics; classify more patients as palliative http://news.nationalpost.com/news/canada/canadian-hospitals-... ; slant cause-of-death classification when there is ambiguity.
I remember an article here about how hard it is to gather metrics on MRSA-related deaths precisely because they are often not mentioned in the cause-of-death reports. I think this was it:
What people misunderstand about these metric-based systems in the US is that their purpose is not to reform systems or improve performance. It's to create a layer of interference that either channels control into a new managerial class or wrests it back into the hands of higher-level administrators who feel it was somehow ceded away from them.
The purpose of business consultants is to make money for business consultants. Pushing for standardized testing regimes is to pursue political goals and profit (for-profit charters, low-quality online education, re-segregation of schools). It's the same in academia. The question is always, "cui bono?"
I'm fairly anti-capitalist, and even I can see this is a bit of a stretch. The way managers are taught is that you can't improve upon a process unless you know exactly what to improve. They look for something definite like metrics because a change in those can actually mean something and it's more definite than "they're doing good work." The drive to use metrics actually kind of revolutionized how we improve work. The problem now is that managers use it a bit too much and use the wrong metrics sometimes.
I don't disagree that in some contexts, metric-based management and reform can be well-intentioned/beneficial (many Japanese companies use these tools to good effect).
But I don't think this is true in the context of the modern US economy (which is why I qualified my statement as such). The political/economic situation is such that the efficacy of a given model is secondary to the viability of the abstractions that make reference to it. How it performs in the meta-context of "salesmanship," reinforcing acceptable political narratives, and social media supersedes its actual utility. The implosive (and false) model of finance, where the derivative of some object or system possesses more value than the thing it references, has pervaded every aspect of our economy and culture.
There is a more subtle wrinkle to it. One of the reasons managerial culture in the US leans so heavily on 'objective' metrics is because we have a history of racial discrimination that rewards people from certain demographic backgrounds while punishing others.
Charitably, metrics are supposed to be a way to introduce objectivity and minimize implicit/unacknowledged biases in decision making, especially where performance is concerned. Less charitably, they give companies and managers cover for discriminatory practices by giving them objective standards to point at as justification for their decisions.
We'd all probably be much more comfortable with a less metric dependent world if we didn't also have the nagging feeling that it would lead to the same-old types of people scratching each others' backs.
I don't think using objective metrics is racist, which is what we were talking about. If you have any data that proves me otherwise, I would appreciate it very much. It would be unexpected to me.
I think you might have misunderstood my point. I said our management culture leans on metrics so heavily as a way to mitigate the effects of racism (and sexism) in managerial decision making. Not that the metrics are racist.
It's harder for us to rely on systems that leave more room for personal judgement without oversight because we worry that decisions not attached to hard metrics are more likely to be subject to invidious biases. The charitable take is that the drive for metrics is motivated by a sincere attempt to mitigate these biases. The more cynical take I offered was saying that people use it as a way to cover their tracks against charges of discrimination.
> There is a worrying trend of treating staff as equally capable but untrusted cogs in a wheel, and metricising performance.
That's a great way of summing up what I've heard called "managerialism".
There's a theory behind, e.g., the MBA that a competent manager can manage anything. That what one manages makes no difference as long as you're a real manager. One of the corollaries of that theory is that all one has to do is find the right short-term metrics and make them go up and to the right.
I don't think this theory makes much sense when you examine it. But it's excellent at justifying a caste system, one where all workers are controlled by wise managers, and where there's a stable hierarchy of executives above that. I note that the hierarchy established is basically equivalent to those of feudalism. It also has the handy property of justifying high compensation for managers and executives; since workers are cogs and higher-ups do the important thinking, all gains should flow upward.
There are other ways of organizing things, of course. But this has become such the default that people have a hard time even imagining alternatives.
I tend to find people with your view have never been an executive managing the managers, or even a manager. When you are (like myself) then you see that good managers / leadership (including tech team leads, etc.) really do make a massive difference in performance with the same people.
Things like metrics, values, and re-orgs might seem like Dilbert-esque corporate speak (a view I used to share), but they're incredibly important once you're trying to get a bunch of people to do something with excellence and speed.
I tend to find people with your view have never been a cog, or even a worker. When you are (like myself) then you see that good managers / leadership (including tech team leads, etc.) really do nothing at all, and only worry about making a massive difference in performance with the executive managers.
Things like metrics, values, and re-orgs might seem like Dilbert-esque corporate speak (a view I used to share), but they're incredibly important once you're trying to believe you're getting a bunch of people to do something with excellence and speed.
Sure, there are plenty of managers who do a good job, but there are also plenty of useless managers who do more harm than good. Especially in complex jobs like academia or software engineering, it's pretty easy to make up some metrics that look good but in reality do more harm than good. And somehow the same people that created this system are then paid to fix it.
I think this argument against "rote learning" is basically defunct. The bottom line is that unless we want to live socialistically and force roles on children like in Germany, we need to teach ALL the stuff to ALL the children, and the only way to do that is to blast through the material with tons of homework, rote learning, after-school tutoring, standardized testing, etc.
So it is a choice we make: tell everyone what to do from an early age and allow them to focus, or try to teach all the things to everyone, using standardized testing to make sure everyone is at least keeping their head above water.
I am also a manager of managers. I agree 100% that managers/leadership makes a big difference in performance.
But I don't see the difference being systems and metrics. I see it much more as interpersonal relationships, respect, and caring about the people that work for you. The manager's competence in the domain itself is also important.
Selecting the right person for the job is also vital, but you are specifically addressing different performance by the same team with a different manager.
That's great, but I'd love to hear what your manager says about your recent performance. Is there any way I can sit down with him or her and maybe come up with a game plan of how we can improve their performance in helping you meet your managerial goals?
From ground level, though, it's really easy to see the dysfunctional cultures growing up around metrics. At the factory where I used to work, we'd zig-zag from metric to metric. Yield one quarter, Cpk the next, scrap rate the quarter after that. Special cross-departmental teams to manage the metric. Regardless of which metric was chosen, people would do really dumb things to try to meet it. If scrap rate was the metric, we'd send terrible material through the line to keep the scrap rate low. If yield was the metric, we'd scrap anything that was even slightly abnormal. Cpk was the worst (or maybe the best), because we just adjusted our spec limits until Cpk was acceptable.
The better managers translated the quantitative metrics into qualitative guidance for their teams. The worse managers provided no such buffer and instead made their teams achieve metrics by any means necessary.
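To make the Cpk point above concrete: process capability is just a ratio of the spec window to the process spread, so widening the spec limits inflates the number without changing a single part coming off the line. Here's a minimal illustrative sketch in Python, with made-up measurements (not from the comment above):

```python
# Illustrative sketch of the process capability index Cpk.
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
# Widening the spec limits (LSL/USL) inflates Cpk even though the
# process and the parts are exactly the same.

import statistics

def cpk(samples, lsl, usl):
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical measured dimension, same process throughout
parts = [9.8, 10.1, 10.3, 9.7, 10.2, 9.9, 10.4, 9.6]

print(round(cpk(parts, lsl=9.5, usl=10.5), 2))  # ~0.57 with the original spec limits
print(round(cpk(parts, lsl=9.0, usl=11.0), 2))  # ~1.14 after "adjusting" the limits: the metric doubles
```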
> The better managers translated the quantitative metrics into qualitative guidance for their teams. The worse managers provided no such buffer and instead made their teams achieve metrics by any means necessary.
This seems like a key insight.
Metrics are key to good management – but metrics are for the managers, not for the teams. Managers that pass metrics directly down to their teams as goals aren't doing their jobs; they're trying to make their team do their job.
Agreed. What I left out is that being a good manager can be really hard. If this quarter, the metric is scrap, and you've already exceeded your scrap budget, you have to be willing to tell your people to scrap bad material anyway. You have to do that, knowing that you're going to get raked over the coals in the managers' meeting. Knowing that it's going to count against you at review time. Knowing that the people you manage mostly don't understand that you're all that stands between them and the torrent of abuse. Middle management is a hard job.
I sense a "its hard but its good" theme in this post, and frankly, I disagree.
Making middle managers accountable for the failure of executives to manage their company, effectively just insulates those people who made decisions from those who know they are terrible decisions.
I was a middle manager who had to do this very thing, and even though my company was happy due to making gobs of money, every review I had was terrible because I wasn't hitting the metric of the month (that generally ended in horrible burn out and hiring even more replaceable people) and I was constantly warned that "You wont bonus this month."
I never got a bonus while I worked there, and yet was highly respected and valued by all my employees.
When I left the company the other great managers followed me, and the business folded, having hundreds of people lose their jobs.
So tell me, why would anyone want this or claim that this type of decision making is "good management"?
Why setup a system that punishes those that help the business and deliver actual value to customers instead of meaningless metrics?
I would be more inclined to believe this if the pro-manager view didn't come from the managers the vast majority of the time. It seems too convenient and self-serving. Of course managers want to think they make a massive difference.
Certainly some aspects can: keeping politics away from your group so that they can work, focusing work tasks by eliminating extraneous ones, advocating for people in the company. However, once you start getting into things like values and metrics it starts becoming a lot more grey. On the one hand metrics are a great idea, a tiny step towards making management more science driven. But on the other, the metrics so often used are such a poor and overly simple representation of a very complex system that it's often hard to see how they provide an accurate view of any given work place.
Goals and values are often completely lame. We value hard work! Your goal is to do your job! Somehow managers must know that they are ridiculed for this. After all entire movies are made doing so. Yet they still seem to want to believe this rhetoric works.
Sometimes companies are forced into doing metrics. If you need to fire someone for poor performance, in order to avoid a lawsuit, you'll need to document it, and that requires metrics. The same goes for lawsuits over who gets paid how much.
As long as there are regulations and lawsuits over this sort of stuff, there will be metrics.
I believe "The Zen of Python" is one of the reasons for Python's success. I also believe that most "goals and values" that CEOs obsess over are bogus, but there's a kernel of truth that setting the philosophy for the company has a subtle, but strong effect.
Your skepticism based on who's speaking is warranted. After all, you shouldn't ask your barber if you need a haircut. Yet, maybe your hair stylist has the most expertise. Should you trust your dentist on how often you should get your teeth cleaned?
This makes two big assumptions that can't be overlooked:
1. People can't self-manage (to some degree)
2. Performance is the end goal of management
I think both of these are wrong, or at least unfounded. I work in a more-or-less self-managed environment, and performance is actually pretty good. I get to pursue about 50% of my work independently and the other half is either collaborative and in line with a coworker's needs, or is one of those "must-dos". I don't see a possible world where I'd get more work done over the average day if there were artificial metrics like "time spent at desk", "lines of code written" or "number of tasks completed" dangled over my head - I'd probably just fuck off more and come up with clever schemes to break the game. This problem can be seen in academia as well - give a bunch of smart people a matrix of rules and yell "GO!". Watch them optimize.
I'd also argue that if it's only performance you're looking for, you're both ethically and intellectually off base. Good managers do increase performance/productivity - bad managers do not. But if you assume that, as with teachers, academics, or software engineers, you can measure and reward behaviors without causing adverse effects on your workplace, then you're dreaming. My manager is good because he backs off when needed, leaving me and a coworker to argue about an implementation design. My manager is good because he sets up an area of focus and says "ideas, go!". My manager is good because he will let people take a week or month to pursue a new solution that he personally doesn't understand, but trusts them to solve it aptly. My manager is not good because his team excels in certain metrics - his team excels in certain metrics because he is good. And last, my manager is good because productivity is not the end goal - employee happiness is. Happy employees do good work, and more importantly, they're happy for their own sake, which is much better than being surrounded by burnt-out metric junkies.
So maybe my takeaway is that if we're going to do any kind of metric gathering it should probably be sub-perceptible and not be treated as a golden calf. If my CEO wants to devise a clever way of using me as a science experiment in pursuit of better management practices then I encourage that. But to tell people "here's what we're tracking now go off and do good work instead of that" is really fraught with problems.
It seems like you're talking from a viewpoint that doesn't understand the point of the article. There is no manager I've ever encountered in a university who has been of any value to the academics they've managed or produced any increase in productivity or excellence with their ridiculous metrics.
Hello! I have been both, as well as a founder. I also have been consulting on organizational process since before you started high school, so maybe lay off the condescension a bit.
I'm not saying good management is unnecessary. I'm saying that managerialism is basically the opposite of good management. Metrics can be valuable if used very carefully. But they so rarely are, because a lot of the drive for metrics is really in service of a cultivated shallowness. Simple metrics allow somebody to swoop in, jerk some levers, provide apparent gains, and then jet off to something else, never mind the consequences.
Things like metrics, values, and re-orgs might seem like Dilbert-esque corporate speak (a view I used to share), but they're incredibly important once you're trying to get a bunch of people to do something with excellence and speed.
The need for this sometimes seems to be, itself, an aspect of managerialism. Sure, if you're going to the moon on a tight timescale you probably do need to marshal a larger number of people than can usefully self-organise in the available time. But most projects are much smaller in scale. A feature of managerialist organisations is that there's rarely any consideration of the possibility that a given project could be done by a self-organised group (let alone a single smart individual). Everything starts with managers assigning a "team", which then inevitably ends up requiring management.
Yeah, but it makes a huge difference whether the managers have domain knowledge or are just dictating based on statistics. In the latter case, their expectations will be unrealistic and they won't be able to process the input from below effectively because they've already agreed to implement imperatives from above.
Professional managers can of course learn about different jobs, but there is a massive and largely unpleasant difference between being managed by someone who's been promoted for their domain competence and someone who has been parachuted in because of their expertise in ordering people about. Since it's virtually impossible for junior workers to fire a manager they dislike, what appears to be efficiency is often achieved through exploitation. To know this and perpetuate that pattern is to disavow moral agency, converting what is nominally a cooperative enterprise into a zero-sum game.
I can almost guarantee the managers improved nothing other than reporting of work. Before they arrived everyone was too busy doing it to tell execs what was happening. If anything I bet tangible productivity is reduced with the introduction of management structure and more detailed reports.
This may not be true. Let's take the case of Stephen Smale, an incredibly smart man who would go on to win a Fields Medal. And yet, without someone providing the structure within which to work and without external motivation, he was a mildly mediocre graduate student.
All the credit for his work goes to him, of course. But one can easily imagine him completely flaming out with a different department head and advisor.
It is precisely because people are not cogs that managers can either be a force multiplier or an unmitigated disaster.
EDIT: on rereading, that guy isn't saying all management is like this but that his hypothesis explains certain managers. All right. That makes my comment a non-sequitur.
But that was the result of two academics recognizing a diamond in the rough. An MBA-type concerned with metrics would never even have accepted him for graduate studies.
exactly. the managerial class claims dominion over the rest of us, and then swiftly constructs a rigid social system that locks in their position on top of it. this necessarily includes promulgation of unfounded myths and beliefs taken on faith to support the ossification of the system.
My friend and I have internally renamed the roles "producers", "regulators", and "harvesters" in an attempt to remove the negative connotations. Feel free to use that in your own discussions also.
The ideas can be negative, yet still come across as exaggeratedly negative because of a negative name. Renaming them to a less aggressive phrasing makes them easier to buy into and doesn't require distorting reality.
> I don't think this theory makes much sense when you examine it. But it's excellent at justifying a caste system, one where all workers are controlled by wise managers, and where there's a stable hierarchy of executives above that. I note that the hierarchy established is basically equivalent to those of feudalism. It also has the handy property of justifying high compensation for managers and executives; since workers are cogs and higher-ups do the important thinking, all gains should flow upward.
Let's not throw the baby out with the bathwater. I agree that metricising performance is only as good as your metrics, and in my personal opinion, metricising knowledge work is a fool's errand (too many variables to accurately capture performance). That is _separate_ from having managers though. Managers can often triage work effectively, help engineers unblock their work, help identify people in the org that contain the domain knowledge necessary to help one of their direct reports, etc etc.
I think the whole urge to track velocity and points and stupid things like that to be absolutely dumb, though. Also I'm an IC and am coming up to the point in my career where I have to figure out a way to not be pushed into management.
I'm not opposed to managers. I am one, after all. But I don't think "management" is a universal skill. I think somebody who manages developers should probably be a developer. I think somebody who manages doctors should be a doctor. And I think there's a lot of merit in non-hierarchical systems, like the way doctors and lawyers are mainly accountable to professional organizations.
I agree that metricising knowledge work is a mistake. But I think it's also true for other kinds of work. In college I worked in a factory, and there was no end of trouble that came from managing via metrics.
> I think somebody who manages doctors should be a doctor.
I could be wrong, but I believe in 'real' professions like medicine and law, there's some kind of board/bar rule that prohibits being managed by someone who never held the actual job they're managing. Anyone know if that's accurate?
For a professional engineer's license, you have to demonstrate competence and have 4 letters of recommendation from PEs who have either managed you directly, or have overseen your work. So there is not a requirement that you (as a budding prospective PE) may never work for someone who isn't a PE, however it does limit your ability to achieve your own license. Therefore, there is a desire from the younger engineers to seek out PEs to work under.
I find physicians like the idea of working for doctors, and hospitals have shrewdly supported the role of 'chief medical officer', while keeping financial controls with the other C-level officers. The Children's Hospital in San Diego is famously "Rady's" because the physicians ran it into the ground and the Rady family bailed it out on condition that they accept professional management practices. Not coincidentally, the Rady family also funded the UCSD Rady School of Management. Also not coincidentally, many health care workers loathe working there, generally attributing this to a belief that management is more interested in the bottom line than in patient outcomes, and this leads to a general distrust of management.
Agreed. The big problem in academia is that when you don't trust academics you need administration to provide oversight. Administration is now a huge portion of the cost of university education. Not research, admin.
To be fair, academics can also be childish, vindictive, sexist. There is also a real problem of hiring one's friends and students. BUT inter-organizational competition helps to some extent with these problems. Regardless, they justify administration and associated costs.
Managers, CEOs, police officers, border agents, and presidents can also be childish, vindictive, and sexist - but they're rarely subject to similar oversight.
There's something disturbingly paternal about the idea that unusually clever and/or creative people can't be trusted to act like adults and therefore need adult supervision from people with authority over them.
Seems you both agree but are talking about modern management practices vs the concept of having someone "manage". The workplace class of "Management" is much more than just the act of managing you describe.
What you describe is how I've seen software dev management properly work, managers are a support structure for the workers, like Admin. They have important and useful skills, but don't get the position of metaphorical aristocracy.
When you have dated, MBA-style general management, they operate by putting the devs subordinate to their management "expertise", and then they don't "triage work effectively, help engineers unblock their work, help identify people in the org that contain the domain knowledge necessary to help one of their direct reports, etc etc." because it's not really in their interest to do so.
It's not just incentives either, there is a huge cultural component, which is why it's so easy to parody "tech management" and have the caricature be recognizable around the world.
Yes, I was going to say follow the money - the administrators get paid well, and yet by the same logic applied to the academics, the administrators should be paid a pittance, because there's no shortage of administrators.
They'd probably argue that you have to pay well for a good administrator. How do you measure a good administrator? By how much money they save. So the less the academics are paid, the more the administrators save and so therefore the more they are paid - and we're there.
It used to be that academics took turns administrating for a couple of years; this was before the rise of the professional manager, who has since insinuated themselves everywhere.
Bottom line: Taylorism only works well when applied to work that can be thoroughly quantified into time-and-motion studies. When you try to apply it to disciplines that require reflection and analysis, or are by definition speculative in nature (and I can't think of anything more speculative than research, i.e. learning new things about the world, which is academia), it's a catastrophe.
Video: The surprising truth about what motivates us
There's another aspect, embedded in the final clause, which is often glossed over in judging the merits of education: work quality should be measured by the performance of the actor, not by the results in those acted upon.
High quality pedagogy and quality scholarship are not necessarily correlated with who shows up to class or purchases books and journals. It's a separate part of the pipeline with different incentives.
> There is a worrying trend of treating staff as equally capable but untrusted cogs in a wheel, and metricising performance
And not just in academia. Technology has allowed the collection of metrics and data to a degree unheard of previously, and it takes wisdom and restraint to know when they do more harm than good.
Unfortunately, I think the idea of collecting metrics and doing data-driven evaluation of people's performance has become one of those "No one ever got fired for buying IBM" things. It gets done unthinkingly because it's safe "conventional wisdom", regardless of whether it's the right call or not.
For the sake of discussion, do you feel measurement is a way of reducing politics and discrimination, not just creating cogs? So building to that argument:
1. Right now academia is both absurdly political and largely government funded.
2. Since you can't pay people tax money to do nothing, you need managers.
3. Academics in management roles also have their own reputations to defend or their own biases.
4. That leaves academia open for political manipulation and bias. This could have the result of limiting innovation (publish what you need to get tenure) or even limiting jobs for underrepresented groups (women in science and engineering, for example).
I don't have the answer here, but it does occur to me that objective measures can be a good thing for many people.
I'm pretty sure this was how Bell Labs operated. iirc, the only metric the management there expected was a written report at the end of the year detailing what you'd worked on over the course of an entire year.
But that whole strategy is based on hiring "good people," and by what metric do you determine that? It seems that it's just pushing the measurements to another location, one that's less visible and transparent (the hiring process). Or maybe you don't think that there should be a firm set of metrics at all, but having metrics that are too loose ends up meaning the decisions tend to be arbitrary and based on the whims of whoever is currently making the decisions.
I think most managers would enjoy hiring the best people and then having greatness occur. But the reason these things are in place is because you're not always going to be hiring the best people (or someone might be great in some categories, but terrible in others you didn't know). Saying "hire good people" isn't much of a solution.
There doesn't need to be a metric for that, it's the manager's job to use their judgement to assess that by building a relationship with the person, seeing how well they communicate their ideas (a sign someone understands things), that they finish what they said they would do etc etc.
It's not exactly circular, since there are many fewer managers than low-level workers, and progressively fewer people as you go higher. Presumably the people at the top were once successful managers or successful workers.
Basically, this promotes a system where metrics are given by how your peers and managers feel about you.
Perhaps, while our metrics aren't perfectly capable of predicting how good a worker will turn out to be or judging their progress, we shouldn't rely too much on them in place of peer judgement.
But indeed, let's not fool ourselves that coworker judgement is perfect. It needlessly introduces a whole body of biases, unnecessary politics, and social engineering that can be as bad if not worse than a metric-only system, which is at least objective.
As described here, "communication ability" and "timeliness" are not metrics -- they are just things that someone wants. They are no more a metric than "tastiness". In the absence of a well-defined measurement procedure and the intent to carry it out, one can not really be said to have a metric.
Measuring the hiring process is a different kettle of fish from measuring on the job.
I think being overly measured makes a job about maximizing the metrics at any cost. And a lot of people will cave to that pressure and take shortcuts in areas that are not measured in order to maximize their measured performance.
For example: overestimating, less testing, and less documentation if the metric is about meeting estimates.
Whereas without the metrics they might just do what they think is best given their experience in any given situation.
There is never going to be zero metrics, as managers need to know where the money and time is being spent, but I prefer things on the lighter side.
I think a metric is an objective, formally and specifically defined measurement. So get people to do their research and use subjective measurements, i.e. personal judgement. It may be a bad option but it is an option, and we don't know whether it'll be bad or not until we test it.
>no amount of rules or metrics can turn a bad academic into a good one...
I believe you've mistaken these systems as being for reform purposes, rather than existing as a way to weed out bad actors who want to enjoy the substantial benefits of tenure or a large salary without doing the work which justifies them.
When you reject metrics, the insidious thing is that you lack the tools to catch bad actors. Thanks to oft-present subjectives (better-than-average lying ability, being liked by their team despite being largely ineffective, and exploiting people's desire not to create conflict), the organization will never hear about them until long after the damage is done, they've made off like bandits, and you can no longer fix what they've done, since you didn't know when it happened.
Metrics cut through these specific subjectives and are therefore effectively necessary.
You can be a dirty academic, but paper-count metrics mean fakers now need to be, at minimum, good enough to fake doing the job of a real academic well enough to publish, which means at least theoretically they could reform. Code reviews, pull requests, and similar programming measures do the same, leaving a clear and auditable point at which failure can be caught and later reversed. They further make it necessary to be good enough at programming to not only survive the interview but also fake a long-term progression of source code, which means that to cheat you at least need to know how to program well enough to get away with it, and that is within spitting distance of actually coding something useful anyway.
Admittedly, I don't think metrics create good incentives a lot of the time, but if you're thinking about the very long term and want to prevent your company's bottom ranks from becoming solely a feudally structured nepotism contest, metrics are the only way to do so.
In theory this is always the right thing to do, no matter what industry you are in. Do you want to build a good software organization? Then hire smart people and get out of their way. But in practice it is basically impossible.
First, to know what smart looks like, you have to be smart yourself before you can hire more smart people. Second, you have to be really cognizant not to hire more people like yourself, because not all smart looks the same way. Third, whatever position you're hiring for, you need to understand it well enough to recognize the same kind of skill in others, but this kinda conflicts with the second point because expertise always introduces bias. Fourth, you have to be ruthless when it comes to firing people, because every so often you will make a mistake and hire a bozo who looked smart, and people really don't like firing people.
Basically all the cards are stacked against you and what I've observed in most software organizations is that you'll be lucky if you manage to make 10% of your workforce satisfy the "smart" metric.
We have to calm down with the metrics, accreditations and regulations, and accept that some people are better than others at certain things, hire the right people and trust them to do a good job.
Honest question - how do we know who is better at what if not by some measure/metric? How do we know who the right people are? Is it just a matter of 'expert opinion'? And if so, how do we prevent cliques and nepotism?
We know who is better the same way we know how to set the metric. There is no escaping "values" :)
Metrics do complicate cliques and nepotism; but they can also be used to reinforce them. Rarely is performance as clear cut as speed in a race. Metrics that boil down to approval from coworkers -- how helpful was this person? how professional (1-5) was their communication? -- often do more to conceal bias than prevent it.
The same thing goes on with government. Every time something goes wrong, there's a call for more regulation and more oversight by the government, often far out of proportion to what actually went wrong. People believe that sufficient regulation of minutia leads to utopia.
YES, and here you have the MBA phenomenon striking again. Reducing things to over-simplified metrics, thereby ruining it all, or betting the entire economy on a theory that "manufacturing is low value and should be offshored to the lowest possible cost provider". In this case, driving down the effectiveness of teaching by adding metrics.
“Tell me how you measure me and I will tell you how I will behave. If you measure me in an illogical way… do not complain about illogical behavior…” - Dr. Eliyahu M. Goldratt
That list is correct, but it's only half of the problem. The other half of the problem is supply.
In fields like mathematics, physics, computer science, grad students have excellent non-academic options. There is an oversupply of PhD students with respect to the number of available positions, but the excess PhD students have fantastic career avenues.
In fields like the humanities and, to a lesser extent, molecular biology/ecology, there is a huge excess of PhD students, and there is a very limited number of career options outside of research (for bio there is pharma; for everything else, eh). This is compounded by the fact that, for professors, postdocs are an absolute steal (http://www.sciencemag.org/careers/2017/01/price-doing-postdo...). Professors get to pay 50-55k a year (and that's on the very high end) for a PhD-trained researcher who will often work themselves to the bone in the hope of getting the publications that will allow them to become a professor. Thus, the very people who have the power to change the system instead greatly benefit from the status quo.
The real reform that has to take place is that PhD programs need to become way more selective. Not every PhD should have a guaranteed spot as a professor (that would be insane) but when less than 8% of postdocs become professors, the system is in crisis.
> Not every PhD should have a guaranteed spot as a professor (that would be insane) but when less than 8% of postdocs become professors, the system is in crisis.
There are lots of industries like this, e.g., corporate law, sports, acting, music. Are they all "in crisis"?
And in any case, if your only criticism of a field is a characteristic that is shared by many fields, then you should either assert that your preferred fix is general to all of them, or explain why it's specific to academia. (For instance, it is not enough to say that medicine should be socialized because of imperfect information alone, since imperfect information exists in many other industries and we do not socialize them all.)
"in crisis" is the end state of a pathology, where it has gone untreated for far too long and begun to have ruinous consequences. Academy is in crisis. Other fields suffering from the same pathology are at different points in their breakdown. Some of them will never breakdown, others will enter their own crises sometime soon if they haven't already (and many of them already have).
surely I'm allowed to criticize and highlight social issues without having a "preferred fix" that I'm promoting. I'm not nearly so arrogant as to think I even know what fix would work.
there is a crisis in our society though, and it has many social pathologies underlying it, and it manifests variously in disparate fields that yet suffer from similar pathologies. in the case of academia I feel that it is acutely important to address because of the role the academy plays in shaping elite culture and practices. in the modern world we've elevated the academy to the place that used to belong to Bishops and church scholars. The shaping of beliefs and all of the cascading effects that has on society is now in a state of breakdown and malfunction.
No, things can be pathological indefinitely. I'm not disputing pathology. Rather, no one has justified "crisis", i.e., some reason to think there will be abrupt change, or will not continue to produce as expected.
> surely I'm allowed to criticize and highlight social issues without having a "preferred fix"
I'm not saying that, I'm saying that it doesn't make sense to discuss academia specifically if one's comments apply to a million other fields. This applies to fixes or to just analysis of the pathology.
Law is in crisis, the other three are all so (relatively speaking) small, unimportant, and gated by the genetic lottery that they're not worth discussing.
I could keep coming up with examples, both current and historical, because this is a general phenomenon. Anytime there is a large payoff for a small number of slots, along with an inability to instantly measure top performers, a filtering hierarchy is naturally constructed. Military officers (94% of commissioned officers do not retire with the military http://www.skyhinews.com/news/hamilton-military-retirement-8...), tech start-ups (duh), organized crime, trade guilds, investment banking, etc., etc.
You can think such fields are a bad bet for newcomers, should be regulated, and so on, but it doesn't make sense to say that they are "in crisis". They can last for arbitrarily long times, and generally produce what they are expected to produce!
There's also little reason to think that academics and law aren't just as much determined by genetics as music, sports, and acting.
Perhaps what is best for someone isn't what is best for the field as a whole. The field itself may be fine even if the individuals in it seem, from a high level, to be struggling.
For example in sports, if people weren't struggling to get into the NFL, it would probably be much less fun to watch (for those who watch it now) because the skill level would be much lower.
Could this be the same in academia? We have a very competitive funnel that only lets the best into the top spots? If that is the case the field isn't struggling, the people are. The solution wouldn't be to make things less competitive, it would be to have alternatives for people who don't make it into one of the coveted positions.
This is something I think a lot of specialized fields are struggling with. Over time (looking at century timescales) people have become increasingly specialized due to the nature of the economy. Specialization makes switching expensive. I think this is something we will have to grapple with as specialization continues.
With the exception of performances (sports, acting, music) you can make a decent living despite not being at the next level. It is generally not so with academia.
And even with things like acting and music, the advancement of society doesn't depend on the efforts of professional athletes and performers. So we tend to be a lot more tolerant of people falling out of that pipeline, winding up in more prosaic positions, and honing their craft in community theater or open mic nights or whatever while making ends meet some other way. With research, it's not the sort of thing you can do on the side after a day spent waiting tables.
> With research it's not the sort of thing you can do on the side after a day spent waiting tables.
Depends on your level of engagement. Can you explore your own hypotheses and put together your own experiments? Probably not.
Can you engage in citizen science projects like BOINC or Zooniverse (essentially performing drudge work that would otherwise be thrown at some hapless research assistant at the bottom of the totem pole)? Of course!
My argument for why you couldn't is more one of pragmatism than anything else. After 8 hours slogging away in a cubicle, you're unlikely to have the energy to work with the laser-like precision necessary to tackle a scientific problem. Perhaps it is possible in some exceptional cases, but for the vast majority of people I don't see it happening.
Plus, science is as much about your interactions with your peers as it is about your work. Individuals who aren't highly embedded in a scientific clique are liable to miss important information that would have an impact on their research. You can't develop the level of social capital you'd need doing science part-time as a hobby.
For those who are potentially offended, I wasn't insulting citizen science projects (I, in fact, participate in some of them)- having actually worked as a hapless research assistant at the bottom of the totem pole, I was actually just making the blunt observation that qualitatively a lot of the work one would perform in citizen science project is indistinguishable from necessary, but particularly dull, work that higher level investigators don't feel like doing.
That's definitely a problem, but a fairly disjoint one. The fields with the worst oversupply problem oddly enough largely don't have the problem of this kind of metrics-based research micromanagement. It's hard to get a philosophy professorship, but when you do, nobody cares about your h-index. That's partly because there's less (almost no) grant money, so universities don't care that much about research metrics in those areas.
Biology is at the intersection of those fields. There is an oversupply problem, and it is definitely the worst offender for metrics-based research micromanagement.
The system is managed perfectly to minimise agendas being undermined. Those in power have worked their entire lives to gain it and will have no issue papering over any accusation that veracity is not their sole priority, let alone a complete farce. "In many scientific fields, results are often difficult to reproduce accurately, being obscured by noise, artefacts, and other extraneous data. That means that even if a scientist does falsify data, they can expect to get away with it – or at least claim innocence if their results conflict with others in the same field. There are no "scientific police" who are trained to fight scientific crimes; all investigations are made by experts in science but amateurs in dealing with criminals. It is relatively easy to cheat although difficult to know exactly how many scientists fabricate data" https://en.wikipedia.org/wiki/Scientific_misconduct
I had a similar problem with hiring product designers. How do you get and measure great design? What I ultimately decided was that the only way to get great design is by hiring great designers, and only other great designers can recognize, with some degree of success, who's very promising and who's not.
Research is the same. However, due to the inherent entanglement of historical biases with merit, it became very fraught in the U.S. to rely on expert judgment to pick new professors, every school clamoring to find some objective metrics to avoid charges of discrimination.
This is coupled with the fact that most departments are filled with mediocre people who are not particularly qualified to pick promising young scientists to begin with.
So ironically, discrimination is still there & at times worse, but now there is metricized cover for this bias. "We found that tall white dude X has more publications than Y".
Meanwhile, academia right now is almost irrevocably filled with pedantic scientists, drip-feeding papers with maximal verbiage at a steady pace.
I would suggest teaching and emphasizing the role of science and technology in our future. Social esteem for the scientist and technologist is important for nurturing scientific talent. Reading Peter Drucker's book about how American respect for the technological man, and British respect for the man of science, explains why America took the lead in technology. For young people, the motivation to solve some very important problem by means of science and technology, together with freedom, good money and respect, is the right combination to make any field advance.
There are a lot of comments here, and the take-away for me is that managing academia is a very difficult task and nobody knows how to do it. Some people recognize that we could do more harm than good by adopting short-sighted metrics. I consider it fortunate that people recognize that sometimes we don't know all the answers, and that the best we can do is share information about this complex task.
Since many things here are about start-ups, what about a start-up for developing a better academia, a way for great people to expand the frontiers of knowledge?
Tim Harford's TED talk about error and evolution is about how the only way to manage a complex system is through trial and error; perhaps he would agree that trial and error is also the only way to search for a better way of managing academia.
I think we need to rethink what we consider academically important and should be taught at universities, and what should be left to self-study.
I can't help thinking that one of the real problems is that a lot of academia isn't that valuable anymore as a field of study.
Don't get me wrong, I consider e.g. philosophy one of the most important fields, but I also feel like it doesn't belong as an independent discipline anymore. Maybe I am wrong, but after postmodernism I really don't see what new revelations are going to be of fundamental value to society.
I see plenty of need for applying philosophy in various fields as interpretation of those fields such as neuroscience, quantum mechanics, technology etc. but as a stand alone field I think it gave us what we needed. And even if I am wrong you don't need an academic faculty to support it anymore.
I feel the same about something like psychology, which to me is even more problematic, as it's based on the premise that you can learn about the human mind by studying humans. Again, it should be tied to something like neuroscience instead.
That way the fields are put to good use rather than being self-serving.
I am aware this might not be a popular opinion, and it's not something that I am 100% certain of myself, but it seems to me that a lot of academia could be learned without large institutions to support it, and thus without the need to measure it the only way large institutions normally can.
It's really easy to make punchy-sounding arguments about why a certain field isn't as important to teach, so we should be rather reluctant to go along with them.
For example, your argument about psychology could equally be applied to almost any field other than physics. "I feel the same way about (Computer Science|Biology|Chemistry), which to me is even more problematic, as it is based on the premise that we can learn about (computers|life|chemicals) by studying (code|proteins|chemicals). Again, it should be tied to something like quantum physics instead." In fact this argument makes no sense to me: why wouldn't studying human behaviour be the best way to learn about human behaviour?
Same for the argument about philosophy, it's too easy to assume something valuable a field produces is too amazing to ever be topped. "After [the light bulb] I really don't see what new revelations [about electricity] are going to be of fundamental value to society."
I know, which is why I said it with some reluctance. But your rebuttals aren't good examples.
Physics, Computer Science and Chemistry still have many unanswered questions, and provide clear benefits and ways to uncover them.
In psychology we have a much better way into the human mind through neuroscience. Asking humans about their own behavior is filled with the very bias that we try to battle through the scientific method.
Comparing philosophy with the lightbulb doesn't make much sense, as the lightbulb doesn't need to be peer reviewed to show its value.
The problem with many of these old fields is that they have become self-referential. Their contribution to society over the last many decades doesn't warrant their popularity.
We aren't studying alchemy anymore either, because we have found more and better ways to study chemistry; all I am saying is that perhaps it's time to park some of these fields in the alchemy department.
Edit: Ironic that someone disagreeing with me on this very important discussion didn't find much more argument than a downvote :) Oh well.
Why do you think neuroscience is a better way to learn about human behaviour (which is what psychology deals with)? Neuroscience is incredibly difficult and the brain is super complex. We have learned far far less about human behaviour through neuroscience than through psychology studies.
Learning about behaviour through neuroscience is working at the wrong level of abstraction, we're not capable of comprehending large systems at low levels.
You cite bias in psych studies, but neuroscience has lots of issues too. See the fMRI study that can find results in a dead salmon. Also the inability of neuroscience tools to figure out what even a tiny microprocessor does, let alone anything larger.
What I said was that psychology should be used as an extension of interpreting neuroscientific findings.
We haven't learned anything about the brain through psychology; we have learned about how humans interpret the brain, but we don't have a lot of raw data. That's what neuroscience gives us, and so it allows us to focus on interpreting the data rather than making a lot of flawed experiments which are often more anecdotal than anything else.
Again, I am pretty sure psychology will be seen as alchemy in 50 years, if not before.
Just because psychology doesn't tell us about how the brain functions doesn't mean it isn't useful. Learning about how humans behave in practice is arguably more useful.
And yes tons of psych studies are flawed but the reason we have the replication crisis is that we can actually run sufficiently large well-controlled studies that we have some confidence in the results, and when we run those we do learn important things like "that previous study was totally wrong". A lot of things like our knowledge of heuristics and biases (not the scientific ones mentioned earlier but cognitive ones) and our solid psychometric measures have come from psych studies.
I do agree with you that we basically shouldn't believe the majority of psych studies, especially if they are in the news. However, the crappy studies help guide where we bother to do the large studies, and provide fodder for the meta-analyses. That's what we learn from.
I never said it wasn't useful; what I said is that it's not the best method anymore and thus shouldn't be a separate faculty, but should instead be tied to neuroscience, exactly to better utilize the raw data that neuroscience gives us.
Not really sure why this is such a controversial suggestion.
It's seen as an overly medical response ("Take your meds, you'll be fine") to a thing that has a mix of biological, psychological, and social causes. Treatments should include the range of bio-psycho-social options, even for the more biological illnesses.
We are talking about research here. There is nothing in my model that hinders anything; all it does is realign psychology with where it belongs in a scientific context. Otherwise we should still have alchemy too.
> That way the fields are put to good use rather than being self servant.
I am a student, and the way I see it, this is the writing on the wall. The university (and the BSc in particular) gives you the basis to develop yourself in the direction you feel passionate about. You can take computer science and mathematics if you want to become a great software engineer, or you can mix computer science with communications and psychology to develop into more of a bridge-builder between industries rather than a hard-skilled engineer. Whether people figure this out in the midst of alcohol-centric student life and handling odd jobs is another thing.
> I think we need to rethink what we consider academically important which should be taught at universities and then what should be taught as self studies.
In the Nordic welfare state where I live, the three main objectives of universities are written into law (advance research and education, provide education based on research, educate students to serve the country and humankind). However, people disagree widely on what they see as valuable and what they do not. Where I live, laymen think the mission of higher education is to produce capable and internationally competitive professionals. In Finland specifically, we had Nokia, so many people think universities should be teaching students to start new Nokias. The contradiction is that professors are oftentimes so closed in on academic circles that they start to think the only thing that matters is how competitive the researchers the university produces are. That tends to fall into the trap laid out in the OP's post. I once read a paper by a professor of mine whose conclusion was this:
> In today’s heavily competitive game market, it has become very important to make a game that stands out from the other games. Even though the game design might not be very unique, a game can still offer a better playing experience by having a better usability than a similar game with worse usability.
I believe that when you need a 20-page whitepaper to prove this, something is clearly wrong with the way researchers are incentivized.
Exactly. Fields like psychology or philosophy have much more value in the sense that they provide alternate perspectives when applied to another subject matter.
As I said, I do believe e.g. philosophy is important, just not as a field unto itself. It's the way of thinking practiced by the best of them that's valuable (careful thinking, making sure we avoid circular reasoning, being aware of a priori assumptions, etc.), but as a field it doesn't add much to human knowledge besides a lot of junk produced under the demand to publish or perish.
As a professor, I probably agree that a lot of academia isn't that valuable and could be reorganized.
However:
1. I think the same arguments could be made about just about anything anywhere. People forget that.
2. I think there's a collision of reality with assumptions about how progress works. We tend to think that supergeniuses come in and revolutionize everything, and I think things are instead very piecemeal and incremental. As a result, we fret about worthless academics when the real problem is our expectations about how things work. I realize that sounds a bit like "we should downgrade our expectations" but what I really mean is that great work requires a lot of small contributions that don't look important at any given point in time, but we expect great work to be lots of big contributions all the time. It's sort of like expecting building construction to happen by waiting for one mutant superhuman to come in and put the pieces together, and then blaming the workers for not building quickly enough.
3. Relatedly, it's impossible to know what's important and what not at any given time. I'm struck by the number of times obscure results from decades ago are dragged out and used as the basis of some new finding.
4. About psychology and neuroscience: there's the bootstrapping problem. Let's say you want to explain something like attention. At some point you have to invoke neurons. But to get there you have to have a way of measuring attention, a definition of what attention even means. To do that you have to have a model of the thing you're explaining. We're nowhere near a clear framework for that sort of thing. Similar sorts of arguments could be made about philosophy. Sure, philosophy's great when it intersects with other fields. But there are some problems that wouldn't have any sort of outlet anywhere else other than philosophy publications. Emergence is a good example of this.
You can't say that about every other field, though. You can't say it about physics, chemistry, or math. Each of them has very applicable outputs.
None of your examples require philosophy or psychology etc. to make progress. Neuroscience is perfectly capable of pondering its own problems.
Emergence belongs to other fields too, though: systems theory, art, AI. And it isn't going to change philosophy itself the way, for example, modernism or postmodernism did.
Keep in mind I didn't say the fields weren't important; the point is that they are not uncovering fundamental properties of their own field. They can't, because the tools they use don't allow them to really dig deeper.
Anyway, I am not 100% certain about this, but something isn't sitting right IMO.
You're assuming that the point of academia is that it is valuable to society.
The point of academia (from a purist's perspective, at least) is that learning is the target in itself; pushing back the boundaries of ignorance is an end-goal of our species.
In many cases the advancement of our state of knowledge brings many rewards but that doesn't have to be the point of it.
> Maybe I am wrong but after postmodernism I really don't see what new revelations are going to be of fundamental value to society.
I'm not sure postmodernism has any fundamental value to society! But of course we can't reason about how much value new advances have to society - if we knew enough about them to know that, they would already have been discovered.
Well, not really. There is a reason learning in itself has value, and that's exactly because it benefits society, which is why governments have historically paid for it.
But if you want to study alchemy, you have to do it on your own. My point is that perhaps philosophy, psychology, and economics now belong to that group.
Well, yes really. You might disagree, but I think a society that knows what Pluto looks like (for example) is a better society than one that doesn't, even if that knowledge does not make the civilisation any materially better off.
I'm currently a lecturer at a small midwestern liberal arts university. I don't know what kind of job I will have next year because the Dean decided that there will no longer be one-year full-time contracts and they won't add any more three-year contracts. Theoretically they want more tenure-track jobs, but in practice our department isn't getting any.
So at worst I will be part-time next year, which means I won't have insurance. Best case, one of the job applications I have out pans out and I get a tenure-track job.
The reason for the change is, of course, money. It is cheaper to hire 5 part-time people to teach the load of 3 full-time people, because the part-timers get less pay and no benefits. Of course, finding 2 extra qualified people doesn't seem to be something they are worried about...
Strangely, they haven't suggested switching to part-time administrators...
My take on the desire to regulate/manage is that some people would rather have an economy of perverse incentives than tolerate the inevitable abuses caused by lack of accountability. Management in the programming world feels the same way. At first, to many of us, this seems foolish and counter-productive, but then think: how does a university policy maker or a manager explain the rotten eggs (low output academics, unproductive developers) to their "boss," which might be the tax paying public or a budget strapped firm? How does a competitive university distinguish itself in a communicable/verifiable way? Could I be fired, or in the private sector sued for letting smart people fuck up unsupervised? It is difficult to trust others when the worst case outcome exposes us to catastrophic consequences.
I personally agree with the author, but I think there's serious work that could be done to mitigate or remove the incentives to manage, and I don't think the answers are as simple as "tell people to stop trying to control what they can't".
This article, and the follow-up to it (the link is in the article), are about something very important, and should be more widely known and discussed by anyone with an interest in science, research, and the funding for it all.
It points (yet again - as we've all seen the various p-hacking commentary) to the very real problems in scientific research that seemingly threaten to undermine it - especially from the public's point of view.
It has been said that "religion poisons everything" - there is probably a corollary of "politics poisons everything" as well; both of these are really just social control and exhibition proxies. Ultimately, human nature rears its head, and the want for more money, power, and prestige (at all levels) leads to these results.
...and society becomes poorer for it.
The questions and the proposed solutions to the problems discussed sound plausible (or at least workable) on the surface, but I tend to wonder whether any solution would just end up being gamed anyhow.
At any rate, I'm glad that this was posted, though I despair at it leading to any solution, as it seems the problems lie within our psychology and society, and are thus nearly intractable.
In the most non-trolling way that I can manage, why can this not be generalized to the economy?
Why is academia a special case scenario where bureaucracy and regulations create unintended consequences and cause people to expend effort to get around them, but we can't say the same thing about the economy as a whole?
Why do we believe that people can't even manage PhDs at a university, but we can take the same people, put them in government, and expect them to, for example, manage the money supply or spot bubbles in advance and craft proper policy to avoid them?
I think one of the hard things about academia (by ways of "pure research"), is measuring impacts. In some positions there are metrics which are more representative of real progress than others. My belief is that measuring an academic's "impact" is a very fuzzy thing in the long term, especially when you bring in people who don't specialize in the same field. That's a reason this can't be generalized fully, though you can probably find corollaries.
But your last sentence seems like a good one to me. Presumably, that would be an excellent topic to pursue research on, considering we clearly don't have a working theory of it (though that's not my specialty, so who knows; maybe there's a bunch of smart people being ignored on it). Thus, it's hard to measure progress in, and we institute perverse incentives per the article.