Deborah G. Mayo, a philosopher of science known for her work on error statistics, expounds on the “myth of objectivity” in this excerpt from a long article on errorstatistics.com:
We invariably sully methods of inquiry by the entry of background beliefs and personal judgments in their specification and interpretation. The real issue is not that a human is doing the measuring; the issue is whether that which is being measured is something we can reliably use to solve some problem of inquiry. An inference done by machine, untouched by human hands, wouldn’t make it objective in any interesting sense. There are three distinct requirements for an objective procedure of inquiry:
Relevance: It should be relevant to learning about what is being measured; having an uncontroversial way to measure something is not enough to make it relevant to solving a knowledge-based problem of inquiry.
Reliably capable: It should not routinely declare the problem solved when it is not (or solved incorrectly); it should be capable of reliably controlling reports of erroneous solutions to problems.
Able to learn from error: If the problem is not solved (or poorly solved) at a given point, the method should set the stage for pinpointing why.
Yes, there are numerous choices in collecting, analyzing, modeling, and drawing inferences from data, and there is often disagreement about how they should be made, and about their relevance for scientific claims. Why suppose that this introduces subjectivity into an account, or worse, means that all accounts are in the same boat as regards subjective factors? It need not, and they are not. An account of inference shows itself to be objective precisely in how it steps up to the plate in handling potential threats to objectivity.