What happens when scientists admit error (elemental.medium.com)
221 points by lelf on April 25, 2020 | 164 comments


The greatest and in some sense heartbreaking admission of error I know of, is Frege's letter to Bertrand Russell:

https://sites.tufts.edu/histmath/files/2015/11/Frege-Letter-...

Russell himself says, "As I think about acts of integrity and grace, there is nothing to compare with Frege's dedication to truth."


This is beautiful and humbling. An example of a scientist who loved being wrong was Fred Hoyle. Quote: ‘it is better to be interesting and wrong than boring and right’.


> ‘it is better to be interesting and wrong than boring and right’

This is why fake news is such a hit ;)


[flagged]


Has everybody here forgotten that there was a serious discussion of using UV light on humans, as well as how to reduce the viral load internally on this very site?


Medical advice should come from medical professionals.

UVB phototherapy saved my life. It involved a ridiculous amount of planning, effort, oversight, and follow-up. No one is more pro-UV than I am. (E.g., I asked about using non-visible light as a diagnostic tool when people started playing around with UV photography.) When there are novel UV therapies, I want to hear about it from my doctors, not the POTUS.

Especially at the expense of concrete things needing to be done ASAP, like ramping up testing and contact tracing.


A press conference does not come at the expense of doing concrete things. It has a purpose. One of those purposes is to keep people's spirits up with hopeful news about things that are being tried, to show that all angles are being pursued, etc. You can argue all you want that you know better how to do a press conference as a sitting president of the USA, but it's absurd to fault him for giving medical advice. He didn't.

You might as well condemn all newspapers, websites, newscasters, and commenters the moment they start talking about potential new areas of research or treatment.

The reason the media got all pissy about it is because the President started doing what they consider to be their own proprietary job that nobody else can do: spreading rumors (aka "news").


Is there any reason to not pass the mic to doctors?

"Hi all. Thanks for coming. We've been kicking around some ideas. You all know Dr. Fauci. I've asked him to share what we now know, what we're thinking. He'll also answer all your questions. Please give him your full attention."


Well, there are reasons, but I can't think of good ones. But you do realize that's quite a different criticism, right?

I'm just saying that the hypocritical hyperventilating about this is absurd.


UV light that has a high enough frequency to kill a virus is also cancer causing. Most sci-fi authors I read come up with more plausible tech solutions than the Trumpster Fire, and their job is literally to make up lies!


Do you mean the 222nm (far UVC) thing? (Which seems safe.) Or am I missing something? Or something dumb that Trump said?


Reflecting on that thread, this quote springs to mind

“A fool can throw a stone into a pond that 100 wise men cannot get out”

Not sure we even ought to try.


Fine, but no one suggested injecting bleach to a national audience, did they?


No, they didn't. Literally, nobody.


Who is it that really bears the blame for these unsubstantiated bleach drinking incidents? Trump, who did not say anything like "drink bleach", or the media who openly lie and run headlines like "Trump tells everyone to drink bleach!"? Or even non-media hucksters trying to get publicity by tweeting things like "I can’t believe I have to say this, but please don’t drink bleach"?


He should have said nothing at all, but left it to his department heads to deliver the message. Oh! He's gagged his department heads, or fired them. Well.

It all lands on his head. What else does being the President mean? He's screwed up the message royally, because he's an ordinary dufus who likes to hear himself talk, operating way out of his depth. That's an honest appraisal, given the avalanche of examples.


None of that is relevant to my post. A major symptom of Trump derangement syndrome is the inability to understand that criticizing the bad behavior of the press is not a defense of Trump. No matter how stupid Trump is, it in no way excuses the media intentionally lying about what he says.


Only seems like 'lying about Trump'. More like, mocking him for saying awkward confused things.


What he said is just his usual incoherent gibberish. If the media didn't run an organized campaign of bleach drinking hysteria, absolutely nobody would have that idea in their heads. Now the media has created a situation where a small group of paranoid, low IQ people will think Trump actually did tell them to drink bleach, and that CNN is just trying to stop them from trying this great cure. The problem here is the media.


> What he said is just his usual incoherent gibberish

If he actually says "incoherent gibberish", how can you definitively say he was not suggesting to drink bleach?

You're claiming that any attempt to clarify the incoherence is 'lying', but if it's truly incoherent, can you tell what the actual intent was? Is it just after-the-fact self-assessments of something being "a joke" or "sarcastic" that serve to clarify?


>how can you definitely say he was not suggesting to drink bleach?

I didn't. I said he did not say to drink bleach. If you want to believe that those words "suggest" drinking bleach, that is an entirely different argument.

>You're claiming that any attempt to clarify the incoherence is 'lying',

They are not trying to clarify. Read the transcript: a reporter specifically did ask for clarification, and was given it. So no, there is no question that he did not say to drink bleach; every reporter there knows this because of the one reporter who actually asked for clarification. They are not trying to clarify, they are trying to misrepresent. That's called lying.


The problem here is that there is a conspiracy-theory based alternative medicine "treatment" of using bleach to "cure autism", and those bullshit theories get amplified by someone with huge influence and media presence.

https://en.wikipedia.org/wiki/Miracle_Mineral_Supplement


It is being amplified by the media itself, not someone with media presence.


Now that is just doubletalk. He said it. The media is mocking it.

If folks are doing stupid stuff to spite 'the media', then that's just Darwin at work.


He didn't say it, that's the point.


Go listen again? Geez.


Read the transcript. He did not. And a reporter specifically asked for clarification, and got it. There was absolutely no confusion, this is deliberate misrepresentation by "journalists" who value clickbait over truth.


> who did not say anything like "drink bleach"

"We're preparing to launch a submarine offensive".

"We need to do more to address homelessness".

"I'm having a team investigate infrastructure spending bills".

Those sorts of quotes are not anything like "drink bleach".

Within the span of a few seconds, Trump strings together "disinfectant" and "injection" and "sounds interesting". We've already seen someone drink chloroquine phosphate, after days of Trump mentioning hydroxychloroquine multiple times, with lines like "If it were me, in fact, I might do it anyway. I may take it" and "what have you got to lose?"

Did Trump specifically say the words "drink any form of chloroquine on your own"? No. In the midst of a global pandemic, he keeps throwing a bunch of words together which don't make much sense, and which can easily influence people to make bad decisions. His influence in this matter has already been demonstrated.

A danger of multiple people saying "don't do it" is... a certain portion of people will read that as a sign that "they" don't want you to get healthy, and that Trump is "on to something", and they may do this sort of thing anyway.


>Within the span of a few seconds, Trump strings together "disinfectant" and "injection" and "sounds interesting".

So, you are fully and entirely aware that he said nothing of the sort, so much so that you are afraid to actually quote him and just want to pull out individual words and say "these words were near each other!". And none of those words are "drink" or "bleach". This supports my point.

>We've already seen someone drink chloroquine phosphate, after days of Trump mentioning hydroxychloroquine multiple times

And when the woman who poisoned her husband with fish tank cleaner is convicted, is that going to change anything in your mind? Or will you simply dismiss reality as inconvenient and cling to your media created fantasy?


I upvoted because I think rational disagreement is important. Let's just go through the facts we have. Below is some of the White House transcript, and a link to the rest.

The President did not tell Americans to inject bleach. The President did speculate, at a press briefing, about the possibility of injecting disinfectants into the human body. To me it seemed pretty clear that he was talking about a research idea, not about a home treatment.* The President added the caution that doctors should be involved.

Agreed?

*Personal opinion, I don't believe he was clear about the lethality of this idea if somebody is silly enough to try it at home.

Transcript excerpt: "THE PRESIDENT: Right. And then I see the disinfectant, where it knocks it out in a minute. One minute. And is there a way we can do something like that, by injection inside or almost a cleaning. Because you see it gets in the lungs and it does a tremendous number on the lungs. So it would be interesting to check that. So, that, you’re going to have to use medical doctors with. But it sounds — it sounds interesting to me."

https://www.whitehouse.gov/briefings-statements/remarks-pres...


>To me it seemed clear that he was talking about a research idea, not about a home treatment.

Yes, the preceding paragraph makes that clear.

>On the other hand I don't think he made that point 100% clear.

Definitely not, most of what he said is barely coherent nonsense. So if the media wants to report on that, they should report what he said, or even "we have no idea what he is babbling about". But instead the media openly lies about what he said. We have huge numbers of people right here being outraged that Trump would give people the idea that drinking bleach is a good idea, yet it is the media giving people that idea, not Trump. Our media is not simply failing us, they are actively harmful. It is disgraceful.


I agree that the media should report on facts. If they delude us on purpose then they're helping wreck the country.

Unfortunately that applies to the media on both deranged "sides" of the post-truth landscape. When they misreport, they're wrecking the place. They all do it, and the old excuse "well the other guys are doing it" is the opposite of comforting.


Even if Trump didn't say to drink bleach, his statements in context showed complete ignorance of basic facts that even the average 8th grader would know. It's impossible to defend what he said, so people attack the media response to it instead.


>his statements in context showed complete ignorance of basic facts that even the average 8th grader would know

How does that make the media lying ok? The president being a dumbass does not excuse the media lying, especially not with such dangerous lies.


The media didn’t lie. They interpreted his incoherent nonsense uncharitably.


Many media outlets have claimed Trump told people to drink bleach. Unless you can provide a quote of Trump saying that, yes they are absolutely lying.


Whataboutism is apparently not reserved for China criticism


There is no whataboutism. You are creating a strawman, and then complaining about it being fallacious. I am not defending Trump, I am attacking the media. The media are the ones spreading the bleach drinking meme. That is bad. They should not do that. And all I get are these histrionic "omg u luv blumpf fox watcher!1111" responses, nobody actually addresses the issue. If the media did not run insane "don't listen to Trump, drinking bleach is bad!" stories, nobody would think Trump said to drink bleach, because he did not, and because nobody listens to anything he says in the first place, only to what the media says he says.


You have not provided proof that the media said people should drink bleach (let alone of a media frenzy about him saying it). Please apply the same standards to your own claims that you apply to the media.


Google for "Trump bleach". The entire first page is media either saying Trump said to drink or inject bleach or lysol, or the media saying "don't listen to Trump, drinking bleach is bad", which is clearly implying Trump said that. This should be a pretty big red flag, since the word bleach does not appear anywhere in Trump's remarks. You even have them doubling down on it in the face of people pointing out they are liars:

https://reason.com/2020/04/24/its-not-fake-news-trump-did-ac...

There is no way for them to pretend they don't know better here. They are outright saying "our lie isn't a lie" and repeating the lie.


Here's a quote from the article you linked. Can you specify what part of it is inaccurate? Or dig around, of course you don't have to use the quote I picked. (The quote below may be unsympathetic to the President, but I'm asking where it is inaccurate.)

" Did the president recommend that Americans inject themselves with bleach as a COVID-19 cure or prophylactic? Strictly speaking, no. As McEnany emphasized, he said "you're going to have to use medical doctors" for that sort of thing. But he did idly speculate that, since disinfectants kill the COVID-19 virus on surfaces, it was worth investigating whether they might work as a treatment, and he specifically mentioned "injection," which was not only scientifically naive but reckless given the prevalence of quack remedies and wacky ideas about how to ward off the disease. "


And what does the headline say?


" It's Not Fake News: Trump Did Actually Suggest That Injecting Bleach Could Be a Cure for COVID-19 "


You can't be a huckster if you aren't selling. They aren't making a profit on anyone's gullibility, they're just horrified.


> Trump makes these dumb quotes just for free publicity

I wouldn't rule out that this is simply a smokescreen to give journalists something to write about. It came on the same day that the embarrassing psychological mark of 50,000 US citizens killed was reached (if I'm not wrong)... and everybody is talking about bleach-yes / bleach-no.

If that was the real purpose, it was clearly effective. Mission accomplished; journalists confused and running in circles; let's move on.

The other possibility is that Trump is not aware that UV rays cause skin cancer, and so would cause lung cancer really fast. That seems unlikely to me. Not to mention that skin has melanin; human lungs are like a Norwegian albino standing in the middle of the street in Torrevieja at 15:00. No; Trump most probably already knows that sunlight can hurt.

I also assume that Trump already knows that bleach burns human mucous tissues. I would have a hard time thinking otherwise.

In terms of death count, the coronavirus is quickly turning into another Vietnam for the US, a war in which 58k US soldiers were killed.

And it is much worse for Europe, it should be said. We are not laughing about it. Trust me.


I'm surprised there are still people who think he's just a calculated puppeteer who has everyone fooled with his act. If it's an act and he's actually sharply insightful, then I would've expected someone in his inner circle, present or former, to attest to the same. But every former cabinet member describes him as "what you see on TV is what you get behind closed doors".


Puppeteer or puppet reading a script, it does not matter.

But well, maybe it is like you say and the Republican party is letting him run wild without any cordon sanitaire in sight. I don't really know. ¯\_(ツ)_/¯ I'm just astonished by the current cuckoo-land of the free, like the rest of the world is.


Or, he knew most of those things, but has become so senile that he can't help spouting his stream of consciousness during live press briefings. I appreciate your optimism, but even if he was speaking intentional fluff, the approach was childish and poorly informed.


Talking of Russell, I can't imagine how he and Whitehead felt when the huge edifice of the Principia Mathematica simply crumbled away under the onslaught of the incompleteness theorem.


I don't think it did. I'm sure they would have liked it if incompleteness didn't hold, but PM was influential in advancing mathematical logic, which is still going strong. What happened is that the system itself was supplanted by first order logic and set theory.


> I’ve never heard of a comparable situation.

Yet it is hard to believe scientists would be the ones who only ship flawless software. And it is easier to imagine that most often, when one realises a previous publication's result was indeed affected by an error and faces the difficult decision to retract or to pretend they haven't noticed, they opt for the latter route. I've found scientists in general to have less ego than average, as expected from people trained to care only about the facts, but they still operate in the larger framework of an individualistic modern society. It probably takes a scientist who is also a woman, whose education generally encourages a lesser ego to begin with, to admit such an error.

Or a software author, but there the only alternative to admitting guilt is total ridicule, since we are drowning every day in a world of errors.


> Yet it is hard to believe scientists would be the ones who only ship flawless software.

I mean, most of them don't even pin their dependencies. I wouldn't want to touch a lot of the Python code published in the open by scientists.

Thought given to software reproducibility is the last thing I have noticed in repositories submitted alongside papers on arXiv and other publishing venues.

Not even a requirements.txt, or a mention of which versions they used.
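
For what it's worth, getting to a pinned, restorable environment is a tiny amount of work. A rough sketch of the pin-and-restore workflow (the package names and versions are placeholders, not taken from any particular paper):

    # Capture the exact environment the analysis was run in
    python -m venv .venv && source .venv/bin/activate
    pip install numpy pandas          # whatever the paper's code needs (placeholders)
    pip freeze > requirements.txt     # records exact versions, e.g. numpy==1.18.2

    # Anyone re-running the code later restores the same environment
    pip install -r requirements.txt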


I've spent years learning and practicing good software engineering practices. Whenever I read about quality issues in scientific code, I wonder: how are they expected to learn these things?


Physicist here. The lack of any formal training in most programs is a problem. In 2020 it's essentially impossible to do research without being able to write and maintain code, yet there are no real standards of knowledge for the discipline as a whole. I don't know of any undergrad programs requiring physics majors to take even an intro to CS course. Most incoming grad students have never used version control, and probably about half of professors have never used it, let alone want to start using it. Larger collaborations tend to be more rigorous with coding practices because they have dedicated software engineers who enforce order and show people the way. Small groups with a few students and a professor are basically the wild west as far as writing software, and vary wildly.

In practice most grad students get coding experience by working on projects, which is a great way to learn, but it's rare for anyone to look at the code itself afterward if the results are sensible. I don't know of any small group with a formal code review process. I think we could make relatively inexpensive improvements in science by bringing in more dedicated software people who can educate others and help with the software engineering aspects. It's not unusual for a lab with half a dozen groups and 50+ scientists to have no software specialists, and 1 or 2 IT people whose responsibilities are purely infrastructure oriented.


I don't think you need to worry about more than a few principles -

1. Reproducibility

This is very important for scientists so I expect them to know about package management and containers. You don't need to know everything about it.

Virtual environments, Docker, and a good package manager like Poetry (which is just an abstraction over virtualenv) shouldn't take more than a few weeks to pick up; a container example is sketched at the end of this comment. Once you are done with the basics, you can learn more as you build.

2. Git and documentation

Add another few weeks.

They don't need to write good code. They just need to document it, which they should already be good at. Optimization, scalability, distribution, and other concerns can be handled by engineers, but without documentation the work becomes a lot harder.
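
To make the container part of point 1 concrete, here is a minimal sketch of a Dockerfile for a paper's analysis code (the entry-point script name is made up, and the image tag is just an example of pinning the interpreter):

    FROM python:3.8-slim                  # pin the interpreter version
    WORKDIR /analysis
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt   # pinned dependencies from the repo
    COPY . .
    CMD ["python", "run_analysis.py"]     # hypothetical entry-point script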


Every engineering team that has picked up a code base has at some point had a meeting where someone starts ranting about how the code they are looking after stinks and how it would be better to rewrite it or move to some alternative.

This seems to be true if the code is the product of one person working for a few weeks or 20,000 people who work for 10 years.

A few years ago I had a conversation where the team insisted to me that the extensive documentation and instructions for a package were worthless because "good code is self-documenting"; the leader of the pack had shiny eyes and banged the table several times.

Be cautious, be kind and look for value in the code that you are using - not just problems. Also, consider: if you can pick up a code base (written by someone who knows the science) would it not be more profitable to repair, document and improve rather than demanding that the scientists learn both your skills and your way of doing things? If you conclude the latter, would you (hand on heart) be able to empirically demonstrate that your way of doing things is the right way?


I disagree with "They don't need to write good code".

Reproducibility is a check that you got the right answer. As you describe it, it's more like a crappy programmer who relies on the testing team to catch his errors rather than being responsible for his own.

> and other things can be handled by engineers

Due to lack of funding and low pay, I think most of them don't have access to software domain experts (aka seasoned programmers).


This is the core question. Can you teach good software engineering, or does it come from years of working with more experienced people/making mistakes and learning?

I have met a few "born SREs" who had little or no training but knew exactly what to do in production outage situations. But I haven't met anybody who was a "Great Software Engineer" out of the box.

And scientists aren't going to take a lot of time out of their education to get that day-to-day programming experience. I was lucky- I worked in a theory lab, next to a team of good software developers, and worked/learned from them over a period of years.

I think what we need are more research engineers, people who do software engineering for scientists, so the scientists don't have to become software engineering experts. The scientists also need to get a bit better at asking for the right tools.


> individualistic modern society.

I'm sure you don't really mean that premodern societies were more selfless and humble, but that's how your writing can be interpreted. Family name, fatalism, and class-based society put even stronger incentives to skew the truth than individualism does.


I happen to believe that a few generations ago people used to care a bit more about whatever group they thought they belonged to (family, clan, institution, corporation...) and a bit less about standing out with their own individual journey.

I do not believe this was any better for the truth in general, as indeed an in-group/out-group worldview can be a strong incentive for all sorts of lies. Except in those specific cases where one has to pick between a truth that's in the best interest of the scientific community and a falsehood that's in the best interest of the ego.


> Yet it is hard to believe scientists would be the ones who only ship flawless software.

The job of a scientist is really not to ship software, that's what a team of engineers would do.


> The job of a scientist is really not to ship software, that's what a team of engineers would do.

I think that this is the real problem - in academia there is this idea that learning good practices is like a 'dirty' thing that is not required, when in fact it would speed up the work and make it more reliable. If you look at chemistry or medicine, there researchers have good practices for managing the lab and respect them.


> in academia there is this idea that learning good practices is like a 'dirty' thing that is not required

I think you got me wrong. Shipping quality software is not 'dirty', but it requires a specialised focus. One cannot do everything by oneself - science and engineering are complementary skills. In your example of chemistry, the chemist who designs a molecule does not spend time shipping the molecule to the world.


Except that it wouldn't speed things up at all. Academia writes run-once code, whose spec changes fourteen times in one week. Their use case is orthogonal to industry's.

Have you considered that maybe the academics actually know what they are doing?


Lol, I spent 5 years in academia, and I have a PhD in CS - I know what I'm talking about. Code specs change in academia just as in industry; I was able to write unit tests and document my code in academia too. And I know that in medicine and chemistry times to publish are much longer, but that is not connected to the fact that they know how to properly use a microscope, clean the lab, and keep an inventory.

If you don't write unit tests, how do you hedge against the possibility of having bugs in your code?
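
Not that every lab needs a full test suite, but even a couple of pytest-style sanity checks are cheap. A sketch with made-up names (analysis.compute_effect is a hypothetical stand-in for whatever turns raw trial data into the reported effect):

    # test_analysis.py -- illustrative only
    import numpy as np
    from analysis import compute_effect   # hypothetical function under test

    def test_no_effect_on_identical_conditions():
        # Two identical conditions should produce an effect of ~0.
        trials = np.ones((48, 10))
        assert abs(compute_effect(trials, trials)) < 1e-6

    def test_known_effect_is_recovered():
        # A synthetic dataset where condition B is exactly 1.0 higher than
        # condition A should come back as an effect of roughly 1.0.
        a = np.zeros((48, 10))
        b = np.ones((48, 10))
        assert abs(compute_effect(a, b) - 1.0) < 1e-6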


Most scientists have no training in computer science, much less software engineering, but still sometimes need to program to build experiments. They've largely taught themselves. You are not the norm.

I've taught dozens of grad students enough programming to get the job done and it would have been a total waste of time to make the code that robust. They need experimental results next week, with only one computer ever expected to run the code, not a product demo.

The software isn't their research project, it's a nuisance that they have to deal with. Accordingly they neither want to nor have time to do it perfectly. I cannot blame them.

That said, there should be a system to encourage actual trained programmers to get involved, including coauthorship and consideration in tenure decisions. The current system is bad; I'm just saying it's not the scientists' fault here. This is just literally not their domain of interest or expertise, and I would rather they focus on the thing they're uniquely good at.


I agree - the message I wanted to communicate is the same :) I never thought the problem was the scientists ;)


The authors of HMMER know what they are doing. That's an extremely rare situation.


> if you look at chemistry or medicine, there researchers have good practices for managing the lab and respect them.

Their studies / experiments last years.

In CS/ML/Applied Math you sometimes have to write an experiment with a deadline next week. Excuse me if, when I'm scrambling for a deadline at 3am, my mind isn't on TDD and I'm not neatly packaging everything in a Docker container.


Hey, I feel you - and I understand the pressure - I have been in that situation. The point is that this:

> you sometimes have to write an experiment with a deadline next week.

shouldn't happen. And yes, at the moment it is like this - sometimes you will have to hack. But if the whole community starts to push for proper practices, instead of just saying "it is as it is", there will be fewer papers, with more quality.


I think this can be considered a scientific version of a blameless postmortem.

There is no shame in making mistakes, and being honest about it should take us further as a society.


The other side of this is that actual science happened. Admitting error is science being done right. The search for the truth is fundamental and detecting when it wasn't found is contributing to the greater truth.

Questions around methodology abound but at its core this is science walking tall. Not hobbling along loaded down with sugar-coated lies about to collapse into a coma. That this is seen as an exception or extraordinary is quite illuminating in itself.


I really hope those grad students that she mentioned didn't get a couple years added on to their degrees as a result of this. I mean good on her for finding the error, but I can't imagine what it would be like to be told "cancel your thesis, I made a mistake and your work is now invalid."


Yeah, I was disheartened that she didn’t mention what the fallout for them was. The one that was about to defend was probably the most affected out of anyone in this story. The author ended up keeping her grant and got tenure. What happened to the students?


Luckily, they were master's students, so the damage will be limited to a semester or so at most (in the US, a master's thesis generally isn't the most important part of the degree). If it were a PhD thesis... I don't want to think about that.


Our professor loves to tell the story of how one of his friends in grad school based his thesis on a series of experiments to be run by a Mars probe. One morning my professor ran into him and he was completely panicked. My professor asked what was wrong, and the guy replied, "My thesis is a debris field on Mars."

So, it happens


I don’t know. It took me a few semesters to write mine and it was the main graduation requirement. This kind of situation would’ve set me back at least a year and cost me a ton of tuition money. It would’ve been economically and emotionally devastating for me.


It's great that she did the right thing and had the paper retracted, but this is still terrible on so many levels.

Maybe with a strong effect for every single subject a little more skepticism would have been warranted in the first place? Some manual spot checking if possible, or using a minimal independent implementation of the analysis code?

Who knows if she'd gotten her grant, her assistant professorship without the publication of this incorrect finding. Who knows who didn't get any of that because they were a bit too careful in their work.


> Who knows if she'd gotten her grant, her assistant professorship without the publication of this incorrect finding. Who knows who didn't get any of that because they were a bit too careful in their work.

On the other hand, if she hadn't wasted her time on this useless study she might have done more useful studies and her career would be better than it is right now. She might have gotten even better grants if she hadn't made this error, and maybe fewer other people would have gotten grants. Maybe those other people being more careful helped them rather than hurt them.

I don't see how speculating like this is very useful.


I think that underplays the difficulty of the work being done here. Or maybe it just underestimates the amount of luck involved. The downside is clearly greater than the upside. Why would you even imply the opposite?


I don't fully understand the relation between the difficulty of the work, the luck involved, and upside vs downside. I don't even know what you mean by the "downside", is that the downside of making the mistake, or the downside of not making the mistake?

I'm saying the downside of making the mistake (for her) might be larger than the upside of making the mistake. I don't think that's obviously wrong.

But my real main point is that speculation of this kind (in either direction) isn't very productive.


You're right that there is an opportunity cost here, and that it may be underestimated by most people, but you just can't give people much imaginary credit for things they might have done.


That was sort of my point, we shouldn't be giving imaginary credit to various people like throwaway285524 was doing. Like I said, I'm not very happy speculating in either direction.


> Maybe with a strong effect for every single subject a little more skepticism would have been warranted in the first place?

It is easy to think along these lines in hindsight[1], but it is much harder when there are no known mistakes. Obviously they had some plausible hypothesis, which predicted and explained the results. The stronger the result is, the better for the hypothesis.

I mean, it was possible and maybe wise to suspect a bug because the results were too good. But that is hard from a cognitive standpoint. She describes the bug as hard to find even after she found that the results did not reproduce. It was even harder to find this bug when there was less reason to believe that there was a bug.

[1] https://en.wikipedia.org/wiki/Hindsight_bias


A control group, using the same program but with a constant-radius circle, should have revealed the error quite quickly, no?


I'm not sure; I didn't fully understand the bug involved. Possibly such a control group would show the dependent variable as a constant plus some random noise, which is totally expected behaviour for a variable.


> It's great that she did the right thing and had the paper retracted

I'm sorry but you seem to have skipped reading the main part of the article. The paper was not retracted:

"The editor and publisher were understanding and ultimately opted not to retract the paper but to instead publish a revised version of the article, linked to from the original paper, with the results section updated to reflect the true (opposite) results."


Remember when scientists in Italy were sent to prison for not predicting an earthquake correctly [0]?

I know this isn't really what the article is about, but scientists are allowed to be wrong unless and until politics is involved.

[0] https://www.scientificamerican.com/article/italian-scientist...


I live in the city struck by the earthquake and I can tell you that what the article states is not correct. The scientists went on trial not for a wrong prediction but for downplaying the possible risk after thousands of smaller earthquakes were recorded in the area.


While they may be scientists by training, predicting earthquakes is an application and not science itself.

Like don't be wrong about that. Similar to how you should not be wrong about planes falling out of the sky, bridges collapsing, or medicines killing all the patients.


I wonder to what extent this has affected their public policy around healthcare. Were doctors afraid of warning about severity, for fear that they might get it wrong after their advice was used to close schools/businesses?


We still trust software too much. I suspect that most software-controlled experiments are going to have errors like this! We should require that every experiment have at least two clean-room implementations of its logic, and a battery of smoke tests for common mistakes.
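
For instance, a smoke test can be as small as feeding the experiment code a degenerate input and checking that no spurious variation appears. This is purely a sketch; generate_stimulus and its radius argument are hypothetical, echoing the constant-radius control suggested elsewhere in the thread:

    from experiment import generate_stimulus   # hypothetical experiment module

    def test_constant_condition_is_actually_constant():
        # With the manipulated parameter held fixed, every generated stimulus
        # should be identical; a bug that leaks the condition (or a timer, or
        # trial order) into the stimulus shows up here immediately.
        stimuli = [generate_stimulus(radius=10) for _ in range(100)]
        assert all(s == stimuli[0] for s in stimuli)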


We certainly trust scientists' software too much :-)



The most shocking part of this article was the revelation that the original article was not retracted. The conclusion and data were entirely wrong, yet the paper still stands. That's really damning for the journal at least, and likely speaks to a pretty bad culture in the wider field.

Kudos to the author for doing the right thing, but the fact that there seems to be no way to remove a paper that is blatantly false, because retractions are reserved for deliberate misconduct, is horrifying. Not only does this set up long-term fucked-up incentives (no downside to fraud if you paint it as a whoopsie), it also harms all the work that cited the paper, and anyone doing a literature review won't realize that the ground other papers were standing on has dissolved away.


>The editor and publisher were understanding and ultimately opted not to retract the paper but to instead publish a revised version of the article, linked to from the original paper, with the results section updated to reflect the true (opposite) results.

I don't see what the problem is.


The problem is that the old copy is still floating around and by replacing it instead of retracting it, it’s not clear which papers are referencing the bullshit one and which ones are referencing the real results.

Not retracting hid the turd from bibliometric tools that could have easily notified you of poisoned papers.


If anyone is curious like I was to read the actual paper and look at the source code, you can find them here:

https://osf.io/b94yx/

The paper authors made a mistake, fine. But the scientific process and peer review process should have caught it. It didn't. The author caught it accidentally and then luckily decided to come forward (bravo!). This begs the question of robustness of the whole scientific publishing process. I hope they adopt the practice of doing a blameless RCA and improve the scientific and academic peer review process.


>This begs the question of robustness of the whole scientific publishing process

It raises the question. Begging the question is an unrelated logical fallacy. Unfortunately, there have been a ton of examples of the peer-review process being essentially useless: things like people deliberately putting things in to test whether the paper is even being read, and none of the reviewers noticing.


"This begs the question" is same as "This raises the question" – What is the logical fallacy in how it is phrased?


>"This begs the question" is same as "This raises the question"

No it isn't, that is a common error in modern English. Raising the question means bringing a question into focus. Begging the question means to assume the conclusion is correct in the premise.

https://en.wikipedia.org/wiki/Begging_the_question


Thanks for pointing out that wiki link. It led me to another link:

https://www.merriam-webster.com/words-at-play/beg-the-questi...

which seems to indicate my usage is quite acceptable in modern English.


Normalizing errors in language is how language degrades. We no longer have a word for literally, because people use literally to mean figuratively. We are losing the ability to talk about begging the question, because people think it means raising the question. The fact that English has degraded is not a reason to give up and let it get worse.


As a prescriptivist, surely you do not wish to assign "begging the question" that meaning, then, since it comes wholly from a mistranslation from Latin: https://languagelog.ldc.upenn.edu/nll/?p=2290

That information is also present in the Wikipedia link.


No, I do not wish to assign that meaning, that meaning was already assigned a long time ago. And you don't need to be a prescriptivist to want a language that is capable of communicating our thoughts with each other.


Haha, no, it's not a pejorative. It's literally one of two schools of thought: descriptivism vs. prescriptivism. What you're describing is prescriptivism.


I didn't suggest it was pejorative, and I know what it is. There are not two schools of thought; that's a false dichotomy. That is like saying "there are two schools of thought, Catholic and Protestant". You can want to prevent the decline of language without insisting on trying to bring back already-lost definitions like a prescriptivist.


Well done by the author, and thanks for sharing, especially given the immense mental pressure. I think it is great that the journal was cooperative about this. It would be interesting to see a single journal implement a very easy undo button for peer-reviewed research and see how that plays out over a series of years compared to the current model. Although very rigid, the scientific ecosystem is very robust, and we now see friction being removed with efficient pre-print servers. I remember some early covid papers were withdrawn from bioRxiv at the authors' request.


What a nightmare. I bet this raises the hair on the back of the necks of a lot of other researchers. So much time and momentum can be invested into a research track like this.


Is this a nightmare? I read a story of someone who had a very reasonable, strong emotional response to the error, but ultimately got credit for coming clean and republished their results with new data. (And a different conclusion.)

This is exactly how I'd expect something like this to work -- the author isn't a bad person because they made an error. The co-authors aren't bad people because they failed to catch it. Software and science are hard, mistakes are going to happen.

If anything, I think the researchers learned valuable lessons, and are better researchers as a result. They have an anecdote they can share with more junior researchers about this frightening thing that happened to them, and use that to grow more people.

We should celebrate people who take the time to handle their mistakes properly and share the lessons openly.


I don't think she's a bad person, and I appreciate her response to the situation.

But building a year (years?) around a project, attending multiple conferences for it, bringing up a couple students on a research idea, then finding it's all a software bug? That's a nightmare.


Kudos for writing an article about that. Most scientists just don't care. Reproducible results are still considered overrated.


For me this story illuminates pretty clearly the fact that free-market ideals and academia don't mesh that well.


Is there a convenient place to input a web URL to get at the article text? I saw someone post one a few days ago ...


Guessing you're thinking of https://archive.today?

That gives you a web page where you can give the full URL to the paywalled one, and it'll give back an "https://archive.xx/address" URL where you can see the full thing.

eg: https://archive.vn/0ikco in this instance


I use DuckDuckGo's iOS browser. If you use its Clear Data functionality, most sites will display the article (because they think you're a first-time visitor).


You're thinking of Outline.com


Yes! This is what I was looking for.



I couldn't get it to work with the author's post, but this is the first time I am hearing of this site. Thank you for sharing!


Now if only politicians learned a thing or two from this scientist. Not possible, however: scientists talk to brains, politicians talk to bellies and appeal to instincts, which requires them to appear strong by showing self-confidence and never admitting any errors.


I'm surprised if updating papers to correct mistakes is not the common path. Especially with software being a crucial part of it. I mean, how many software products do we know of that shipped flawlessly in v1.0?


So this error was in the code that actually ran the experiment, not the analysis. The experiment was effectively doomed from the beginning. The mistake amounts to failing to calibrate and test your experimental apparatus, which is a little hard to forgive, since it would have taken hardly any time or effort. Perhaps I’m being harsh but scientific enquiry is too important to be satisfied with conducting unrigorous experiments then apologising, which of course one could do ad infinitum.


I agree. I write code all the time and I've never had a single bug. She was careless


> The data set was gorgeous — every single one of the 96 participants showed the effect.

Ideally, that would have been an uh-oh moment.
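
A back-of-the-envelope calculation shows why; the per-participant probabilities below are made up purely for illustration:

    # Chance that all 96 participants show the effect, if each shows it
    # independently with probability p (illustrative values only):
    for p in (0.5, 0.8, 0.95):
        print(p, p ** 96)
    # 0.5  -> ~1.3e-29
    # 0.8  -> ~4.9e-10
    # 0.95 -> ~0.0073

Even if the true effect were present in 95% of people, a clean sweep of 96 out of 96 would turn up less than 1% of the time.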


Pro tip - when "every single one of the 96 participants show[s] the effect", you have a technical artifact. This is scientific common sense.


Not really. If you are counting fingers and toes you would not be surprised if all 96 participants had 10 of each. It is all about context.


It is indeed about context, and the context is front-line research in an established field - so the chance that your effect is as prevalent as the standard number of fingers and toes in humans and yet undiscovered is what should give you pause.


But you would be surprised if they all had 11; 10 is your null hypothesis. It's not context, it's proper controls and the scientific method. This was very, very careless.


Your rebuttal stands but not your point, for the comment you were replying to didn't properly contextualize their argument in a scientific experiment. It's more like serving participants canned vegetable soup, but warming it up before serving it to the non-control group. Here you'd reasonably expect the non-control group to be universally more favorable.

They did have a null hypothesis and a control group. The problem was that the non-control group differed from the control in a way they were careless about, one that the scientific method does not catch (but that peer review and replication ought to have caught), e.g. if the favorable results were better explained by subjects seeing soup taken out of a pot vs. from the can rather than by the temperature itself.


> If you are counting fingers and toes you would not be surprised if all 96 participants had 10 of each.

It is absolutely possible to find people with an abnormal number of fingers or toes.

https://en.wikipedia.org/wiki/Oligodactyly

https://en.wikipedia.org/wiki/Polydactyly

If 100% of the studied humans have 10 fingers and 10 toes, something's wrong with the study. In this case, the problem is the extremely small sample size: it simply doesn't represent the general human population. In the article author's case, a programming error caused these results.

0% and 100% are always suspect no matter the context they appear in.


You're nitpicking the example, rather than responding to the underlying point.


Whenever my results seem too good, I do a whole new round of checks above my usual. That said, I've had some real ones with effects that large... it's just that usually the effect is known already and not of immediate interest. Large effect sizes in the psychological sciences usually mean it has been discovered already :)


Since when does medium require sign-in to read? Anyone have a different link to the post?


Disable JavaScript or use a text browser like Lynx. I've put the article in this paste: https://pastebin.com/raw/pe6dfUtJ


Clear your medium cookies. It's a "lots of visits" thing.


This could possibly be gamed/abused. Make a great discovery - get grants and tenure - only to retract the finding - and keep the money.


She said this never happens. It happens all the time. It's just that she was the first scientist to take the honest route and do a retraction rather than letting it slide. I'm impressed with her integrity, but surprised by her naivete.


She is not the first scientist to fess up to a mistake. Heck, one of my colleagues went through the same thing last year, and he also did the right thing, and other scientists appreciated his open honesty about the matter, even though his mistake meant the retraction of a high-impact article.


While I applaud Julia for retracting the paper, I do wonder if she handed back the NIH grant she was given off the back of this false result. Some researcher just missed out on an NIH grant that would have been funded if not for this error.


The purpose of this grant was to run follow-up studies and to work on productionizing the intervention. The purpose of follow-up studies is to detect whether the result extends. The follow-up work detected that the result was due to an error in the experiment setup. The grant worked.

Actually, encountering this I think I suddenly understand why people get upset about Kickstarter projects not delivering. It is a misunderstanding of a probabilistic situation.


If the NIH grant scheme were not a zero-sum game, then this would be reasonable. A scientist with real results missed out on a grant that they would have got if Julia had not made this error.

There is a fundamental issue here that is not being discussed: the science funding system incentivises people to rush results out and not check whether they are real. If Julia had been more careful she would have found the results were false, not published, and not received a grant. So many of the problems with the lack of reproducibility in science are the direct result of people rushing to get out "novel" data and not checking whether it is real, because if they do, they don't get funded.


This is just an optimization problem about where you place the incentives. The follow-up study funding is not exclusively for catching false rejections of the null hypothesis due to chance. It's also there to allow for some amount of error in experimental design or practice. No one wants 0% experimental design/practice error, because achieving that would reduce the total experiments done per unit time, which is also a thing we want to go up. No one wants only certain results, because we want novel results per unit time to go up too. It's just multivariate non-linear optimization.

Yes, someone whose conclusion was not due to error in programming may have gone without a grant. That's perfectly all right. We optimize policy in the aggregate.

You can make an argument that says that we're not correctly placed on the manifold but no single situation will be convincing for that and you'll need to make some case that a policy change to move to some other point in this space will yield a positive total improvement.


True, but I would personally place more emphasis on avoiding false data being published and less on novelty. If the consequences of publishing something wrong were greater, and the costs of taking the time to do things right were less, then we might see more science we can trust being published.


Not when people's livelihoods are on the line. You would simply get people to not report.

What you are asking for is the equivalent of saying "firing programmers when a bug is found will result in better software"

In science it will result in much less interesting science being done, because this will disincentivize risk.

The big problem in science funding is that it already incentivizes low-risk projects (you should have preliminary results that prove it works...), so let's not make it worse.


I am arguing that the current funding structure encourages poor quality science. Right now it is better to pump out papers as quickly as possible rather than taking the time to check the data is not flawed.

From a software perspective it is like paying developers by the number of lines of code delivered and telling them to not worry about bugs. Not an approach that is likely to deliver quality software.

What we need in science is to find a way that people have the time and incentives to publish results that are as accurate as possible.

Somewhat off topic, I do wonder if we would get better software if we fired programmers when bugs were found. I suspect we would not get much code written, but the code that was written would be very high quality.


Every indication is this was an honest mistake that was discovered due to the grant. The costs associated with the experiment were still there. Taking back grants in this situation would disincentivize scientific honesty.


I wasn’t suggesting the grant be taken back, but that she voluntarily hand it back.


Unless she is independently wealthy there would be a strong incentive for her to not admit the error if there was an expectation to hand the money back. Where is it supposed to come from?


There's no way to avoid such incentives here. Being allowed to keep the grant creates an incentive to publish incorrect results in the first place. "Gosh this seems too good to be true, but let's wait till the system has invested another million dollars in my career before investigating the process too closely".


This is exactly what the current funding system encourages. If you want to be funded you are almost forced to pump out the papers as fast as you can with the bare minimum of checking.


I am not suggesting she hand back the money already spent, just that she might think about handing back what is left.

I would hand back a grant that was given due to a false result that occurred because of my error. Julia should be praised for doing the right thing with the paper, I just think she should think about going further.


I don't think so. I am not returning my salary when I screw up at work, and I have made bad decisions that cost hundreds of thousands. As long as it comes from an honest effort, mistakes are part of the game.


I think you missed the above poster's point. Big grants provide funding for multiple years. A grant can get cancelled rather than continued; I'm rather surprised it wasn't. They might also cease making payments that haven't been made yet in the current year. I know people this has happened to, purely due to govt shutdowns and such, unrelated to anyone screwing up.

It would be more like losing the remaining payments of a multi-year contract because it gets broken. Or in your analogy, like losing your job.


Some days at work, when I write buggy code, I go to my boss and return my paycheck. No! Wait, that never happens.

Sometimes there are going to be mistakes.


It is not quite the same thing. If your boss had sold your code to a customer who paid based on the buggy result, would the customer be unreasonable in seeking a refund? Would your boss be ethical in offering the customer a refund?


But she did not sell the NIH a product with guarantees; that's not how science works. If you make definite promises about the outcomes of your research, I argue you are not doing science.

Apart from that, many great discoveries were made based on initial mistakes.


Most software has some bugs. No one asks for a refund. They apply the same reasonable expectation that the people who funded the study did. Whoops mistakes happen.


This is in no way related to the mistake in the article. Mistakes can and do happen.

But if you sold me software to solve a problem and I paid you for it, you wouldn't refund me if the software didn't solve my original problem? I feel like that's a scam, but I could be wrong. I didn't get what I was promised and hence I am entitled to a refund.

If you got a laptop with a broken keyboard, or the privacy-friendly app you paid extra for turned out to be Facebook, would you not want your money back?

You can't just use a mistake as an excuse if you promised "specific" things, as written in your implicit/explicit contract.


If you want to construct a story — which we both agree strays from the article — where I agreed to provide a specific service and if I failed the agreed remedy was to refund the payment, then, yes I agree to refund you with a cherry on top.

I think you are correct to want a remedy. A bug fix, a refund, a workaround, an apology, a million dollars, or a withdrawal of the study, depending on circumstances.

In my world, most of the implicit contracts are "shit happens and we try to fix it". Yours might be "refund me".


You sound like someone who has never bought software before. You don't get your money back. You might get a bug fix, but the contract probably says it is sold as-is.


False advertising is a crime. I don't know where you live, but that alone will stop the above problem. And you can get refunds regardless on most major platforms of software distribution available to the masses.

Many countries have pro-consumer laws which protect them from exactly this.


Meanwhile, you destroy lives, as post-docs and research associates funded by that grant suddenly lose their jobs and have to scramble to find new ones. Again, you appear to want to disincentivize honesty.

That this researcher was honest makes their future research twice as productive in my eyes. Not only are they less likely to make that kind of mistake again, you can probably trust them to not lie about their research.


Wouldn't intent matter, with false advertising? If the advertiser fully believed in what they were advertising, only to later discover it didn't do what they said it did, is that false advertising?


If the software doesn’t do what it is advertised to do you do get your money back.



