Science, when conducted rigorously, is the best means humans have to explore and explain our reality. The key word is rigorously. Without rigor, our understanding of reality is skewed because it rests on falsehoods presented as facts. So when I see something like this, I get concerned. The Economist published a lengthy article detailing critical problems in the current state of science. I heartily recommend that you RTWT.

Various factors contribute to the problem. Statistical mistakes are widespread. The peer reviewers who evaluate papers before journals commit to publishing them are much worse at spotting mistakes than they or others appreciate. Professional pressure, competition and ambition push scientists to publish more quickly than would be wise. A career structure which lays great stress on publishing copious papers exacerbates all these problems. “There is no cost to getting things wrong,” says Brian Nosek, a psychologist at the University of Virginia who has taken an interest in his discipline’s persistent errors. “The cost is not getting them published.”

Two of the biggest issues facing science are the current process of peer review and the undesirability of performing replication experiments. As it stands, peer review misses so many errors that bad papers routinely slip through to publication.

…in a classic 1998 study Fiona Godlee, editor of the prestigious British Medical Journal, sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the BMJ’s regular reviewers. Not one picked out all the mistakes. On average, they reported fewer than two; some did not spot any.
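A quick back-of-the-envelope calculation shows how porous this makes review. Here is a minimal sketch in Python, assuming (naively, and purely for illustration) that each reviewer spots each planted error independently with the same probability, inferred from the "fewer than two of eight" figure above:

```python
# Assumed per-error detection rate: reviewers found ~2 of 8 errors on average.
p = 2 / 8  # ~25% chance a reviewer spots any given error (illustrative assumption)

# Chance that a single reviewer catches all 8 planted errors
all_eight = p ** 8
print(f"one reviewer catches all 8: {all_eight:.6f}")  # ~0.000015

# Chance that a given error slips past a typical panel of 3 reviewers
missed_by_panel = (1 - p) ** 3
print(f"an error survives 3 reviewers: {missed_by_panel:.3f}")  # ~0.422
```

Even under this toy model, a specific flaw has better-than-even odds of surviving a three-reviewer panel, which is consistent with the fact that not one of 200-plus reviewers caught everything.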

Replication, which is supposed to be the main corrective or confirmatory agent in the scientific method, is disdained.

Journals, thirsty for novelty, show little interest in it; though minimum-threshold journals could change this, they have yet to do so in a big way. Most academic researchers would rather spend time on work that is more likely to enhance their careers. This is especially true of junior researchers, who are aware that overzealous replication can be seen as an implicit challenge to authority. Often, only people with an axe to grind pursue replications with vigour—a state of affairs which makes people wary of having their work replicated.

To me, this is exactly where voluntary associations could come into play. I would love to see something like Underwriters Laboratories established for the express purpose of replicating and validating experiments. Or multiple organizations that could give experiments a “stamp of approval.” I don’t know if this is possible in the current environment, but I would rather a non-profit of some type take this on than wait for one of the myriad government agencies champing at the bit for a chance to regulate science.