Science has a problem with the efficacy of its self-correction. While erroneous science, if important enough, does get corrected in the very long term, in the short and medium term papers continue to be cited even when the science they describe has been debunked, falsified or even retracted.
The problem seems to be (a) that severe criticism or retraction notices are not easily detectable on the page where a paper is read (a problem compounded when copies of the paper circulate on pre-print servers etc.), and (b) that papers gain more readers (and hence citations) the more they are cited, so once a paper starts being cited a lot, its citation rate becomes self-sustaining.
- (Schneider et al. 2020) report on a paper that was retracted because of falsified clinical trial data, but which has continued to be cited in the 11 years since - mostly (in 96% of citing articles) with citations that give no indication of its retraction or weaknesses. This is not so surprising given that the paper's main page gives no indication that it is retracted (https://journal.chestnet.org/article/S0012-3692(15)49623-0/).
- The case I know personally is (Riolo et al. 2001), which was heavily criticised (e.g. Roberts & Sherratt 2002; Edmonds & Hales 2003). The original model is very brittle, and it is clear the authors did not understand their own model or results: changing a "<=" to a "<" in the formulation makes the effect they report disappear, because the former forces agents with zero tolerance to cooperate with clones of themselves and hence to proliferate. Despite this, the original paper has been cited over 800 times according to Google Scholar.
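To make the "<=" versus "<" point concrete, here is a minimal sketch (not the authors' published code; the function and parameter names are my own) of the tag-based donation rule in Riolo et al. (2001), where agent A donates to agent B when the difference between their tags is within A's tolerance:

```python
def donates(tag_a: float, tolerance_a: float, tag_b: float,
            strict: bool = False) -> bool:
    """Return True if agent A donates to agent B.

    With the original '<=' rule, an agent with tolerance 0 still donates
    to an exact clone of itself (tag difference of exactly 0). Switching
    to a strict '<' removes those clone-to-clone donations, and with
    them the cooperation effect the paper reports.
    """
    diff = abs(tag_a - tag_b)
    return diff < tolerance_a if strict else diff <= tolerance_a

# Two clones (identical tags) with zero tolerance:
print(donates(0.5, 0.0, 0.5))               # '<=' rule: True  (clones cooperate)
print(donates(0.5, 0.0, 0.5, strict=True))  # '<'  rule: False (cooperation disappears)
```

The one-character change matters because reproduction creates exact clones: under "<=", zero-tolerance agents form self-reinforcing cooperative clusters with their own copies, which is what drives the reported result.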
(Schneider et al. 2020) also review some of the literature reporting other cases. Retraction Watch documents retractions by journals and even maintains a "leaderboard" of the 10 most cited retracted papers (https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/), with one paper cited 1,146 times after it was retracted.
That good science gets cited and thereby attracts more readers is clearly a good thing, but when the reverse happens the communication of its (lack of) quality does not work well. Firstly, many researchers lazily cite papers without reading them, basing their citations on the citations of others. Secondly, when a paper has merely been severely criticised (rather than retracted), this is hard to discover without reading many of the papers that cite it - a very time-consuming process. The only way the correction of bad science works is if the paper that criticises such research comes to be cited many more times than the original.
Some mechanism is needed whereby the quality of papers is communicated post-review. The review process can only stop some bad science from being published, because completely checking research is infeasible (unless doing so is a core part of one's own research).
Edmonds, B. and Hales, D. (2003) Replication, Replication and Replication: Some Hard Lessons from Model Alignment. Journal of Artificial Societies and Social Simulation 6(4), 11. http://jasss.soc.surrey.ac.uk/6/4/11.html
Riolo, R.L., Cohen, M.D. and Axelrod, R. (2001) Evolution of cooperation without reciprocity. Nature 414, 441–443. https://www.nature.com/articles/35106555
Schneider, J., Ye, D., Hill, A.M. et al. (2020) Continued post-retraction citation of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data. Scientometrics 125, 2877–2913. https://doi.org/10.1007/s11192-020-03631-1
Roberts, G. and Sherratt, T.N. (2002) Does similarity breed cooperation? Nature 418, 499–500. https://doi.org/10.1038/418499b