Researcher Michael Betancourt proposes the need for a ‘principled’ workflow based on Bayesian inference.

Betancourt describes his principled workflow model in the following excerpt. Those interested in Betancourt’s full study can find it on GitHub:

What properties characterize a principled probabilistic model? First and foremost, the model should be consistent with our domain expertise, capturing not necessarily all of our knowledge of the experimental design but just enough to avoid unreasonable inferences. Such a model then allows us to critique the experimental design itself, including whether or not our computational tools will be sufficient to fit prospective data and whether or not our experimental design admits answers to relevant questions. The utility of these analyses, however, relies on the model being rich enough to learn the structure of the true data generating process beyond our domain expertise, and sometimes in spite of it.

In order to validate a given model we need to ask four questions.
Question One: Domain Expertise Consistency
Are the assumptions inherent in our model consistent with the relevant elements of our domain expertise?
Question Two: Computational Faithfulness
Are our computational tools sufficient to accurately fit the model?
Question Three: Model Sensitivity
How do we expect our inferences to perform over the distribution of reasonable observations?
Question Four: Practical Model Validity
Is our model rich enough to capture the relevant structure of the true data generating process?
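Question One is often addressed with a prior predictive check: draw parameters from the prior, simulate observations from those draws, and ask whether the simulated data look plausible given domain expertise. The following is a minimal sketch, not taken from Betancourt's study; the Poisson model, log-normal prior, and the threshold of 100 counts are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: counts y ~ Poisson(lambda), lambda ~ LogNormal(0, 1).
# Draw rates from the prior, then simulate a dataset for each draw.
n_draws, n_obs = 1000, 50
lam = rng.lognormal(mean=0.0, sigma=1.0, size=n_draws)
y_sim = rng.poisson(lam=lam[:, None], size=(n_draws, n_obs))

# Summarize the prior predictive distribution of the sample maximum.
# If domain expertise says counts above 100 are implausible, a large
# tail probability here flags a prior in need of revision.
max_counts = y_sim.max(axis=1)
frac_extreme = np.mean(max_counts > 100)
print(f"P(max count > 100) = {frac_extreme:.3f}")
```

The summary statistic and threshold are exactly where domain expertise enters: they encode what a reasonable observation is expected to look like before any data are seen.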

The first three questions are asked before our measurement process resolves to a particular observation, considering instead the robustness of the model to the possible observations. Even if our observation is immediately available, however, these questions are important in understanding how to safely utilize resulting inferences. The final question is asked once the observed data is available and the resulting inferences have been constructed.
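Question Two, computational faithfulness, can be probed with simulation-based calibration: repeatedly draw a ground-truth parameter from the prior, simulate data from it, fit the model, and record the rank of the truth among the posterior draws; if the fit is faithful, those ranks are uniform. The sketch below is a simplified illustration, not from the source, using a conjugate normal-normal model so that exact posterior draws stand in for the samples an MCMC fit would produce:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conjugate model: mu ~ Normal(0, 1), y_i ~ Normal(mu, 1).
n_sims, n_obs, n_post = 500, 20, 99

ranks = []
for _ in range(n_sims):
    mu_true = rng.normal(0.0, 1.0)            # ground truth from the prior
    y = rng.normal(mu_true, 1.0, size=n_obs)  # simulated observation
    # Closed-form conjugate posterior replaces an actual sampler here.
    post_var = 1.0 / (1.0 + n_obs)
    post_mean = post_var * y.sum()
    post_draws = rng.normal(post_mean, np.sqrt(post_var), size=n_post)
    # Rank of the true parameter among the posterior draws.
    ranks.append(int(np.sum(post_draws < mu_true)))

# Faithful computation implies ranks uniform on {0, ..., n_post}; deviations
# (skew, U-shapes) indicate bias or miscalibrated posterior width.
ranks = np.array(ranks)
print(ranks.mean())
```

In practice the closed-form posterior would be replaced by the actual fitting procedure under scrutiny, and the rank histogram inspected rather than just its mean.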