False claims in a widely-cited paper (statmodeling.stat.columbia.edu)

by qsi 169 comments 341 points

[−] anshulbasia27 51d ago
The correction policy is the tell. If your journal's correction process requires the person who was wrong to initiate it, you haven't built a correction process; you've built a complaint resolution process that defaults to 'no complaint, no problem.' Medicine figured this out the hard way after thalidomide. Somehow management academia looked at that history and decided it didn't apply to them...
[−] gus_massa 51d ago
This is caused by a misunderstanding of what a journal is. It's just a curated publication, not the ultimate source of truth.

Nobody should go and put a "retracted" stamp over "Principia Mathematica" or Einstein's special relativity paper. Both are wrong, as we now know.

In these cases, you may continue citing them or using them as an approximation. In other cases they are slowly forgotten and fade away. It's impossible for the authors and editors to keep reading and answering complaints, which may be sound or may come from crackpots.

Most research extends previous results that are cited, and if the previous results are wrong you cannot extend them, so you don't cite them. If there is a bad paper, it will stop being cited after a while.

In this case, what is worrying is that people continue to cite it, and that people are using journals as a magic infallible source.

Some people may write a "comment" that is a short paper in the same or another journal explaining what is wrong. It has an independent review, so the original author/reviewer/editors don't have to agree. The authors (or someone else) may write a "comment about the comment", but it's rare and at some point it becomes a slow reimplementation of Reddit.

From the article:

>> They did allow me to submit a comment for review, since they judged the authors non-responsive, but it must go through a lengthy review process.

[−] graemep 51d ago
There is a big difference between something that just turns out to be wrong, and something that is dishonestly or negligently wrong.

For example, AFAIK, Wakefield's paper claiming MMR causes autism was eventually retracted by the editors of The Lancet.

[−] chermi 50d ago
"Nobody should go and put a "retracted" stamp over "Principia Mathematica", or the "Special Relativity" paper of Einstein. Both are wrong, we know."

What does that have to do with this situation? I'm honestly trying to figure out your chain of thought. Do you think the future should have an impact on the present somehow? The fraud in the OP happened at the time of publication. And by the way, there was no fraud in the two papers you cited. Unless you've figured out how to apply the future to the present, in which case they probably would've published much better papers, somehow.

[−] gus_massa 50d ago
What about:

https://en.wikipedia.org/wiki/Gregor_Mendel#Mendelian_parado... Apparently the numbers of the second generation are too good to be true.

https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan's... Apparently the viscosity was wrong, and then everyone else made corrections to get a similar result.

> What does that have to do with this situation?

The problem is how to document errors without overwhelming honest authors. Imagine a nightmare with a DMCA-like process, where anyone can file a retraction request and the authors have a week to reply. [The data is in an obscure folder on a notebook that has been dead for five years. Most of the processing was done by a guy who now works in industry for 10x the salary.] [Assuming you didn't work with mice, and you must resurrect them to collect the additional data asked for in the retraction request.]

An alternative is to let the editors ask a new reviewer to make the decision, but everyone has horror stories of reviewers who wrote bad reviews even though the manuscript was correct. Then what? Ask the authors again to defend the paper?

The current method is that anybody can publish a "comment" if they find a journal that agrees to publish it.

[−] jmalicki 50d ago
Regardless, published papers aren't an authoritative source of truth. Just a note to your friends "hey I did some cool stuff I want to tell you about!"

Sure, it's slightly more reviewed than a GitHub repo, but it's not the be-all and end-all.

[−] jltsiren 51d ago
The policy is what you would expect from a journal that is effectively run by volunteers. While the publisher has paid employees, the editorial board in charge of the journal itself seems to consist of volunteers.

When you have a volunteer organization, the impact on people's personal lives is one of the main factors driving decisions. You try to avoid getting involved in somebody else's controversies, as the impact is almost always negative.

From that perspective, the policy seems clear. The authors are responsible for their papers. If someone else claims that a paper should be corrected, they are free to write a paper of their own. That way no volunteer has to take responsibility for someone else's claims.

[−] pas 51d ago
They could at least send the paper with the reported problems out to a new set of referees.

And just as they decided to take responsibility for publishing, they can take responsibility after a similar review for retraction (or issuing an erratum, or whatever fancy way they want to signal the result of the process).

[−] chithanh 51d ago

> Medicine figured this out the hard way after thalidomide.

Medicine never figured this out. The medical community put Semmelweis in a lunatic asylum, because physicians' egos could not accept the fact that their unclean hands were causing harm to patients. Semmelweis' modern peers continue to let millions of patients die preventable deaths due to errors in medical decision-making, and ego plus institutional inertia prevents serious measures against it (most notably fatigue management).

Academia is not any better though. There was the recent high-profile retraction of a publication on opioid exposure via human breastmilk which was widely cited and the basis for many child custody decisions: https://retractionwatch.com/2026/03/03/canadian-pediatric-so...

[−] fergie 51d ago
As somebody who has spent a bit of time in academia, I have often been slightly alarmed by some of the research (and opinion) that comes out of business schools. One thing is that it is often unsubstantiated and just plain wrong; another is that it often seems like the authors kind of know it, almost as if they are intentionally pandering to a lowbrow/midwit audience and expect everybody else to be in on the game. It's mystifying.
[−] bradley13 51d ago
Some years ago, my institution (primarily a teaching college) decided it needed an additional accreditation. The organization they went with requires faculty to publish. Including our undergrad business faculty.

We all know that "publish or perish" is stupid. The premier example of Goodhart's Law: “When a measure becomes a target, it ceases to be a good measure.” Why can't our highly paid administration understand this?

[−] thayne 51d ago

> their policies allow only authors to request corrections

Say what now?

So the only way to get a correction for a paper is if the author is willing to publicly admit they messed up? Something that an unethical researcher is very unlikely to do.

[−] Analemma_ 51d ago
The consequences here don’t seem all that bad, it’s just a silly management fad. By contrast, “Growth in a Time of Debt” from Reinhart and Rogoff steered multiple national governments into pointless, self-destructive, and immiserating austerity, despite being equally bunk, and none of the authors ever saw any consequences for that either. You can’t even blame that one on “management science”; it was a straight macroeconomics paper.

There’s no accountability for junk science, especially if it props up the political status quo.

[−] paulpauper 51d ago
Peer review is still a joke, and now exists to please deans (for hiring and promotion) and enrich publishers. Bad papers get published if they reaffirm the biases of editors, while actually good and original stuff gets rejected. Rather than facilitating the exchange of knowledge, it acts as a barrier, especially when it cannot even be relied on for quality control.
[−] mgkuhn 51d ago
I thought the proper way to correct questionable results is to conduct and publish a follow-up study that independently looks at the same question with better data and better methodology. And wait until multiple independent teams have done the same. And then write a meta-analysis on the emerging pool of independent papers.

That's how scientific consensus is normally formed, at least in rigorous disciplines like experimental physics or medicine. A single paper in the end is going to be just a single data point in any such meta-analysis study.

[−] ernsheong 51d ago
I'm very confused, because there are two Andrews: the author of the blog post only signs as "Andrew", and by the list of authors that seems to be Andrew Gelman, but the slug in the first link is "aking", and then there is also an Andrew King, lol.
[−] ls612 51d ago
Management Science, why am I not surprised? They have the worst rep of any econ/econ-adjacent field, for good reason.
[−] throw310822 51d ago
There's a simple rule of thumb that seems obvious to me but is widely ignored: we should be highly skeptical of any finding that claims an agreement between facts and ethical values, or between what is and what we think should be. Reality is absolutely orthogonal to our values, which makes any coincidence between the two extremely suspicious.
[−] zx8080 51d ago
Hey, don't take away the kids' joy! The paper was cited thousands of times; lots of uni students built their early careers on it!
[−] t0lo 51d ago
So we're firmly in the era of few people caring about few things now, aren't we.
[−] pjdesno 51d ago
Are there any factual allegations on that page? All I could find was "the method described in the paper is not the method the authors actually used", without any elaboration.

I'll add that the reaction of most of academia will be "It's in a management journal - of course it's nonsense."

[−] zx8080 51d ago
Somewhat unrelated but relevant thought: from software engineering experience in large orgs, correcting an issue is rarely worth the effort. AI will drive submitting more and more papers with less and less review. Review takes effort, too much in the age of easy generation.

With this, science will probably lose trust even more in the coming years.

[−] banana_sandwich 51d ago
“Professionals” in traffic engineering still religiously cling to “standards” that are largely based on BS served up by auto companies pre 1940.

Many such cases, it seems.

[−] froh 51d ago
Oh, how I'd love to see a GitLab/GitHub-like infrastructure and culture for scientific publication. Let them keep the repo private (authors and reviewers only) until publication.

But all flaws become issues, later-reported issues sit right next to the paper, and heck, there could even be badges for publication and review status...

a woman may dream...

[−] ANarrativeApe 51d ago
Stop buying from/submitting to discredited publishers.
[−] Asooka 51d ago
What does it matter that the claims are "false" if claiming them as the truth results in encouraging the society we wish to exist? That paper is a cornerstone of sustainability initiatives, if you retract it, you might as well set the whole Earth on fire. To hell with integrity, I say, it's time to do some good for the world!
[−] altairprime 51d ago
Previously on HN, the referenced paper:

https://news.ycombinator.com/item?id=46752151

(2 months ago, 374 comments)

[−] f30e3dfed1c9 51d ago
Two out of three authors from Harvard Business School. The place is practically a horseshit factory.
[−] Beestie 51d ago
Stopped reading not long after noticing the title of the paper in question.

The very hypothesis is laughable. It is completely irrelevant whether the hypothesis is supported or not.

That paper is like flypaper for anyone seeking affirmation of sustainability policies.

I could write a paper tomorrow claiming that [insert conspiracy theory here] is absolutely true and why Big [insert hated industry here] doesn't want you to know the truth and it would be cited until the earth crashes into the sun.

It's not about the truth anymore. It's about opinion validation.

I could write a paper about that but wouldn't hold my breath on getting any cites.

[−] rudderdev 51d ago
Peer reviews need to be more transparent and accountable. Otherwise, we are sure to lose to the misinformation war that is rapidly reaching its peak, thanks but no thanks to AI.
[−] arjie 51d ago
There's this 'criterion of embarrassment' / 'cui bono' sort of standard[0] that really helps judge these things. So many people perform science that seems to always confirm the positions they've held. All the "society is terrible today" people like to quote LendingClub's "70% of Americans live paycheck to paycheck" without knowing it's LendingClub content marketing. The snail darter guy happened to find a novel species that is endangered and genetically identical to a non-endangered one just in the place where he was trying to get a dam banned. The sustainability guys find that companies that focus on sustainability do better. The diversity guys find that companies that focus on diversity do better. A scientist who gets a grant from Philip Morris finds that cigarettes aren't bad for you.

It reminds me of something my dad said while watching Generation Kill, a TV show adapted from the written work of an embedded journalist in Iraq. The show, made by Americans, depicts the US armed forces as shot through with bumbling fools seeking glory, with a few competent people in there. So we finish watching the series and my dad says "Only the Americans would make a show like this," and it's somewhat[1] true. I think perhaps that being able to create a machine that tells you the truth is crucial to success, and I feel that during the US's peak period as unipolar hegemon (Gulf War I to the end of Obama's first term) this was more the case than it is today, though this is more of a feeling than anything I have verified.

It also reminds me of an old sort of censorship, one which George Orwell talks about in regards to Animal Farm[2] - a book that was criticized because it perhaps harmed the greater cause of communism. There's too much to quote in his essay because I find the whole thing worthy of reading, but here's one bit:

> Both publicly and privately you were warned that it was ʻnot doneʼ. What you said might possibly be true, but it was ʻinopportuneʼ and played into the hands of this or that reactionary interest.

...

> Is every opinion, however unpopular – however foolish, even – entitled to a hearing? Put it in that form and nearly any English intellectual will feel that he ought to say ʻYesʼ. But give it a concrete shape, and ask, ʻHow about an attack on Stalin? Is that entitled to a hearing?ʼ, and the answer more often than not will be ʻNoʼ. In that case the current orthodoxy happens to be challenged, and so the principle of free speech lapses.

There is even today an orthodoxy of sorts and if you were to contradict it, it is considered sinful to say so. I'm Indian so perhaps it is safe for me to use this as a race of choice but what if it were found that Indians actually are less smart than, say, White people. Could such a thing be published if it were true? People often say "what are you going to do with that information?" and somehow I don't share that view that all science must necessarily immediately deliver applied benefit. Knowing is good for its own sake. Truth is good for its own sake. Or at least that's what I believe.

I suppose I'll only know through the period of my own life whether this belief is adaptive. Who knows, a present or future power might be one formed entirely through inaccurate data and information[3], and we might be as Orks and painting things red might make them faster because we believe it so in sufficient numbers.

0: Obviously there are limits. Eli Lilly benefits from GLP-1RA drugs working well but they do in fact work well.

1: Others obviously also make fun of themselves, but something like In The Loop parodies specific people more than the whole machine and its participants. Generation Kill feels much more real a depiction of large organizations and their incentive mechanisms - especially how they grind forward and get the outcomes they want despite everything else. Perhaps my least favourite parts were the emotional-breakdown bits at the end, which I've since found out that the participants themselves said were invented for TV.

2: https://www.marxists.org/archive/orwell/1945/preface.htm

3: Open societies like ours have the problem that external misdirection leaks into internal data but perhaps with sufficient computerization we can keep separate truth and propaganda within the structure of government

[−] foweltschmerz 51d ago
disheartening