Almost all researchers use statistical methods, but few receive intensive training in them. The result has been a wave of statistical malpractice and botched numbers that has coursed through even the most prestigious scientific journals.

Fierce competition among journals for the next ground-breaking study, combined with “too-good-to-be-true” data from corner-cutting researchers, is eroding trust in the reliability of published scientific research.

Here is an excerpt from a report published in Study International News:

Meanwhile, Statistics Done Wrong, described as “a guide to the most popular statistical errors and slip-ups committed by scientists every day, in the lab and in peer-reviewed journals”, notes that “statistical errors are rife” and that they are prevalent in “vast swaths of the published literature, casting doubt on the findings of thousands of papers”.

This may partly be due to a lack of adequate training in statistics. The report notes that “few undergraduate science degrees or medical schools require courses in statistics and experimental design – and some introductory statistics courses skip over issues of statistical power and multiple inference.

“This is seen as acceptable despite the paramount role of data and statistical analysis in the pursuit of modern science; we wouldn’t accept doctors who have no experience with prescription medication, so why do we accept scientists with no training in statistics? Scientists need formal statistical training and advice.”
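To make the “multiple inference” problem concrete, here is a minimal illustrative sketch (not from the report): it simulates a study with no real effect, runs 20 independent significance tests, and shows how often at least one comes out “significant” at p < 0.05 purely by chance. The group sizes, number of tests, and threshold are arbitrary choices for the example.

```python
# Illustrative sketch: why uncorrected multiple inference inflates false positives.
# Both groups are drawn from the same distribution, so every "significant"
# result is a false alarm.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations = 2000
n_tests = 20          # e.g. 20 outcomes measured in one study
alpha = 0.05

false_alarm_runs = 0
for _ in range(n_simulations):
    hit = False
    for _ in range(n_tests):
        group_a = rng.normal(0, 1, size=30)
        group_b = rng.normal(0, 1, size=30)
        _, p = stats.ttest_ind(group_a, group_b)
        if p < alpha:
            hit = True
            break
    false_alarm_runs += hit

print(f"Chance of at least one false positive across {n_tests} tests: "
      f"{false_alarm_runs / n_simulations:.2f}")  # roughly 1 - 0.95**20 ≈ 0.64
```

With 20 uncorrected tests, the odds of reporting at least one spurious finding are close to two in three, which is the kind of error the report argues untrained researchers routinely overlook.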