"Especially if you are already well-established. Publish less, but publish better research. Put time and effort into transparency. Share everything you can share, as openly as you can share it. Use your privileged position to do research in the way you think it ought to be done, even if that’s not the quickest way to achieve academic success. (...) Be aware of the implicit signal you might be giving those you supervise when you say things like ‘you need to get a result’ or ‘we need to make this publishable’."
While I agree in the abstract, the problem is that when you're well-established, in most areas, your research basically amounts to supervising PhD students and postdocs who are not well-established. And they're struggling to meet the requirements to finish their thesis, get a permanent position, etc. So if you encourage them to do slow science and publish less, there's a high risk that you're basically letting them down. Plus, to do research you're probably using some grant funding and guess what the funding agency expects...
Thus, most people never get to a point in their career where they can safely say "let's ignore incentives and just pursue this project slowly and carefully". There might be some exceptions. Probably in math, where research is often individual. And maybe in other areas if you can have a smallish side project with other professors that doesn't require much specific funding, or if you have a student who is finishing and has already secured a position in industry so their stakes aren't high. I've been in those situations sometimes, but it's the exception rather than the rule. The truth is that even senior professors seldom have the luxury of not being heavily pressured by incentives.
I think this is exactly the hard part: individual virtue alone does not solve a system where supervisors, trainees, and funders are all pulled by the same incentives. "Do slower, better science" is not actionable unless the surrounding infrastructure and rewards change too. That is a big part of what we're thinking about with Liberata, especially around peer review and attribution. If relevant, our beta waitlist is open: https://liberata.info/beta-signup
Seems interesting! I think there is a unit missing when asking "when do you anticipate publishing next". I'm assuming months, but it would be important to specify.
A) How does just another journal solve the main problem, that evaluation is done by just counting the number of papers? Are you giving grants that support a long time between publications?
B) You posted almost the same comment with a link to your project 8 times.
Once they're established, they can decide how many PhD students to take on. And a lot of foreign students who come on J-1 visas and are sponsored by their governments are not under that pressure. A lot of them will get a position in their home country with a lot less publishing pressure than in the US.
The professor can always set his terms, and it's up to the student whether to have him as an advisor. In both universities I attended, there were professors who were quite unfussed about how much research they did and how much money they brought in (it could be zero), and if a student wanted them as an advisor, they needed to understand the risks involved.
As a faculty member once told me: The primary lever the admin has is salary increases. If you're OK with your salary, then you can ignore it (a number of faculty members in my department at one university stopped doing any research once they got tenure).
It's a lot less pressure than industry once you have tenure.
Plus more generally, contact with peers through publishing is good. It is easy to end up with work that does not really advance the state of the art if you’re not making regular trips to convince others that your work is interesting.
At the same time, knowing someone who committed academic fraud during his PhD and was caught, I can say two things:
A lot of people do it when they simply don't need to. They're not trying to "survive in academia". They're trying to get to the top. The person in question was smart, bright, and did good research (at least excluding the stuff he made up). He could have gotten an academic position without committing fraud. And he could have had a great industry job without it too.
No matter - he simply switched to another top tier university, got his PhD, and is now running a startup. Which brings me to the second point: the repercussions are minor even when you do get caught.
This is what makes the problem feel so systemic, in that weak consequences after the fact, and weak incentives for transparency before the fact. If the system mostly rewards output and prestige, then misconduct can remain a high-upside bet. We should be building research infrastructure that makes review trails, contribution, and verification more visible much earlier. That is part of what Liberata is aiming at, if of interest: https://liberata.info/beta-signup
No, it wasn't made public. It was kept within the team and he was "fired" from the research group. Word got around and all the professors in the department (in the same field) knew (as did their students), so he couldn't just find another professor.
So he switched universities.
But still, didn't he worry that he'd bump into his former professor at a conference and that he would tell his new advisor? I don't know if he made some deal with him ...
That same professor will happily take money from the student's startup to conduct research assuming it is successful and has funds to spare. That should tell you right there how the incentives are aligned.
Here's an important aspect to understand: successful professors don't read papers in full. They're too busy for that. They only take a look at the title, abstract and introduction — and perhaps they will glance at the figures. This is why telling a compelling story is so important.
That's not true at all. If anything, they will read the figures and skip the introduction.
If it is your field, you don't need an intro, and don't want to hear whatever yarn they are spinning in the abstract/discussion. You jump straight to the figures / table to review the data yourself.
This (also) feels like a core failure mode, in that papers are optimized for skim-level persuasion because the system is too overloaded for deep evaluation at scale. Then a lot of the actual scrutiny gets pushed onto under-credited sub-review labour. Peer review is too important to stay this invisible and under-incentivized. Liberata is exploring exactly that problem, and our beta waitlist is open if you want to follow along: https://liberata.info/beta-signup
I'm not in academia, so I might be fully ignorant about how things operate, but if professors don't read the actual paper, how do they know whether it's BS or not?
Here's how it works in our group. The professor gives papers to the PhD students or PostDocs, who read the paper completely. I regularly 'sub-review', as it is called, meticulously looking for issues. I have heard that there are professors who review entire papers in 2-3 hours, since they have a lot (10+) of papers per conference to review without any compensation while they have their own research, teaching, and funding to juggle.
It's not a pretty system sometimes.
Edited to add: Conferences also require declaring that someone sub-reviewed the paper. The professor / PI mentions the PhD student's name in the review form for the paper. Of course, the professor also double-checks all the sub-reviews.
The sub-review process, when it works well, is arguably a reasonable one. To give the example of how this works from the perspective of the program committee of a conference I'm involved in:
The PC chairs assign papers to members of the PC. Those reviewers are ultimately responsible for the review quality and, a more frequent problem for the conference, for ensuring the reviews are in on time. In principle, they can ask anyone to sub-review, but in practice it usually goes to grad students, postdocs, or graduate alumni (and since we have a relatively light review load per member, we have many people who do all reviews themselves). The reviewers arguably know more about the expertise of their grad students and postdocs than the chairs doing the assignments do. Also unlike a journal, where editors might ask anyone with particular expertise, we only assign reviews to PC members, and we do assign them: PC members only get to state their preferences on what they would like to review.

The sub-review process ideally lets reviewers hand papers to people who they know would be suited to a particular paper, but who might not be experienced enough to reasonably be on the PC itself with those responsibilities, and whom the chairs might not know much about. It then lets those reviewers look over the sub-reviewer's work directly, which might include mentoring them.

While we do anonymous reviews, identities are visible to chairs, and one thing I've noticed when chairing, for example, is that grad student sub-reviewers often do excellent, thorough reviews, but also often lack the confidence to be sufficiently critical when writing about the problems and weaknesses they identify, something the reviewer can help with.
The review system (we use easychair) directly handles sub-reviewers, and our proceedings list all sub-reviewers (at least, those who actually submitted reviews). Good sub-reviewers can sometimes be reasonable candidates to ask to be on the PC the next year, and give a gentler, safer onramp: we're able to have a wider mix of junior and senior members when there are new postdocs (and I think in one case a grad student) who we already know do reliably good reviews and know our review process.
A few other commenters have talked about the paper review process.
I wasn't thinking of this at all. Important to understand: the peer review process takes up only a minor part of a professor's mindshare. It's considered a chore. Much more important is to read lots of new papers (including pre-prints) for continual education, to know what's going on in your field and adjacent fields.
Reminds me of the old saying 'the purpose of a system is what it does'. Academia has been this way for a long time, and many have written about these problems for just as long, and yet here we are, still with the same system that incentivizes fraud (whether that's made-up data, self-plagiarism to inflate publication metrics, or both).
It makes me wonder which group would lose out if this system were somehow fixed. Is it just that managers and grant authorities would have to work a whole lot harder to evaluate a researcher's merit? Is that all that's holding us to the current system?
Academia is no different from any other profession or sport. Holding it to a higher bar than say, medicine, engineering, law or accounting, doesn't make sense.
As an example, let's take soccer: all players will foul if they think they can get away with it. Even Messi, Ronaldo, and Mbappé do it. Those who are caught receive a red card and are sent off the field. Do red cards stop fouls? No. Players just try harder not to get caught.
> Don't hate the player, hate the game

I understand this is a cheeky section heading and the author is not really making this point, but this may be one of the dumbest popular phrases out there. You're effectively saying "Don't get upset at me for being an awful person, I probably wouldn't have succeeded if I'd been a good person." "The game," of course, is made up of players, and if no one played that way there would be no game.
One thing I noticed on the CS PhD side of the house is because many researchers don't want others to easily build upon their work (for whatever reasons), they don't often release the source code/data required to quickly validate it. This is a recipe for shortcuts, errors, and even in the worst cases, fraud.
Lots of words that boil down to a 2500 year old mathematical formula, 天下之所惡唯孤寡不穀而王公以自名也, which in English translates as something like, Society's only problems are performative victimhood, colonization of the moral virtue of the vulnerable and oppressed, and mandatory penance rituals, especially when presidents and professors make it their job.