Faced with much uncertainty at the start of a machine learning (ML) project, tech gurus Emmanuel Ameisen of Insight Data and Adam Coates of Khosla Ventures advise ML engineers to follow an ML Engineering Loop that cycles through phases of analysis, approach selection, implementation, and measurement. By doing this, Ameisen and Coates say, engineers can rapidly discover the best models and respond quickly to changing requirements.
The earliest solution does not need to be perfect or elegant; at the outset, it just needs to be simple and quickly testable. This allows ML teams to advance more quickly, narrow the field of candidate solutions, and cut through the uncertainty. Further iterations of the ML Engineering Loop will not only refine the solution but will also keep the project on track when the inevitable setbacks take place.
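The loop described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the candidate list, the toy dataset, the `accuracy` function, and the target value are all invented for the example.

```python
# A minimal sketch of the ML Engineering Loop: cycle through analysis,
# approach selection, implementation, and measurement until the success
# metric is met. Everything below is illustrative, not from the article.

def engineering_loop(candidates, evaluate, target, max_iters=10):
    """Iterate: analyze current results, select the next approach,
    implement it, and measure it against the success metric."""
    best_name, best_score = None, float("-inf")
    for _ in range(max_iters):
        # Analysis: stop once the current best meets the target.
        if best_score >= target or not candidates:
            break
        # Approach selection: take the next candidate approach.
        name, build = candidates.pop(0)
        # Implementation: build the model (a stand-in callable here).
        model = build()
        # Measurement: score it with the project's success metric.
        score = evaluate(model)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy usage: a tiny labeled dataset and two candidate "models".
data = [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0)]

def accuracy(model):
    return sum(model(x) == y for x, y in data) / len(data)

candidates = [
    ("majority_baseline", lambda: (lambda x: 0)),
    ("threshold_rule", lambda: (lambda x: int(x > 0.5))),
]
best = engineering_loop(candidates, accuracy, target=0.9)
```

Note that the loop starts from a trivially simple baseline, exactly in the spirit of shipping a quick, testable first solution before refining it.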
For a lengthier exposition on the ML Engineering Loop, here is a short excerpt and a link to the article in Insight:
Success for an ML team often means delivering a highly performing model within given constraints — for example, one that achieves high prediction accuracy, while subject to constraints on memory usage, inference time, and fairness. Performance is defined by whichever metric is most relevant to the success of your end product, whether that be accuracy, speed, diversity of outputs, etc.
When you are just starting to scope out a new project, you should accurately define success criteria, which you will then translate to model metrics. In product terms, what level of performance would a service need to be useful?
The purpose of the ML Engineering Loop is to give the development process a repeatable mental framework, simplifying decision making so it focuses on the most important next steps. As practitioners gain experience, the process becomes second nature, and growing expertise enables rapid shifts between analysis and implementation without hesitation. That said, this framework is still immensely valuable for even the most experienced engineers when uncertainty increases: for example, when a model unexpectedly fails to meet requirements, when the team's goals are suddenly altered (e.g., the test set is changed to reflect changes in product needs), or when progress stalls just short of the goal.
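The excerpt's idea of delivering the best-performing model subject to constraints can be sketched concretely. The helper names, metric names, and numbers below are assumptions for illustration, not from the article:

```python
# A hedged sketch of "maximize the success metric subject to
# constraints": pick the most accurate model among those that fit
# within latency and memory limits. All names/values are invented.

def meets_constraints(metrics, constraints):
    """True if every constrained metric stays within its limit."""
    return all(metrics[name] <= limit for name, limit in constraints.items())

def pick_best_model(results, constraints, objective="accuracy"):
    """Among models satisfying all constraints, return the one with the
    highest objective metric, or None if none qualify."""
    feasible = [r for r in results if meets_constraints(r["metrics"], constraints)]
    if not feasible:
        return None
    return max(feasible, key=lambda r: r["metrics"][objective])

results = [
    {"name": "small_net",
     "metrics": {"accuracy": 0.91, "latency_ms": 8, "memory_mb": 40}},
    {"name": "big_net",
     "metrics": {"accuracy": 0.95, "latency_ms": 35, "memory_mb": 900}},
]
constraints = {"latency_ms": 20, "memory_mb": 100}  # product requirements
winner = pick_best_model(results, constraints)
```

Here the more accurate model loses because it violates the product-level constraints, which is precisely why success criteria should be defined in product terms before being translated into model metrics.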