As a way of presenting data, top-box scoring is easy to understand, especially for a non-technical audience. But it is also a severely flawed measure, one that can lead users to poor decisions erroneously regarded as data-driven.
Ken Faro and Elie Ohana, writing in Quirk's Media, describe in detail why top-box scoring should not be the primary way to communicate results to clients:
The real question for market researchers is: Why do they bin consumers (top-box vs. not-top-box) when the construct we are measuring (e.g., ad-liking or purchase intent) varies along a continuum from low to high? Why take a construct such as “overall liking of the ad,” measured continuously on a seven-point Likert scale, and break it into “like it” (box 7) vs. “dislike it” (boxes 1-6)? From a conceptual standpoint, the statistic we are using doesn’t fit the phenomenon we’re studying.
Even if we put aside the argument that “psychological constructs vary along a continuum and therefore we should measure them on a continuum,” we run into a second methodological problem: By using top-box scoring we remove measures of individual difference in favor of counting “similar” people. That is, measuring a trait allows us to see how different people have varying levels of a given trait – it’s about observing how individuals are different. This is vastly different from the practice of top-box scoring, which is used for the purpose of calculating percentages of similarly grouped individuals. For example, if 56 percent of people indicate they are a Democrat on a survey, it is assumed these people have similar political beliefs and are a discernible group when compared to others who indicate that they are Republican.
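The information loss the authors describe is easy to demonstrate. Below is a minimal sketch with hypothetical data (the samples and names are illustrative, not from the article): two ads rated on a seven-point Likert scale that produce identical top-box scores yet very different averages, because binning discards how the remaining respondents differ.

```python
# Hypothetical ad-liking ratings on a 7-point Likert scale.
ad_a = [7, 7, 1, 1, 1]   # polarized: a few fans, mostly strong dislike
ad_b = [7, 7, 6, 6, 6]   # uniformly favorable responses

def top_box(ratings, box=7):
    """Share of respondents in the top box (rating equal to `box`)."""
    return sum(r == box for r in ratings) / len(ratings)

def mean(ratings):
    """Average rating, retaining individual differences on the continuum."""
    return sum(ratings) / len(ratings)

print(top_box(ad_a), top_box(ad_b))  # 0.4 0.4 -> indistinguishable by top box
print(mean(ad_a), mean(ad_b))        # 3.4 6.4 -> clearly different on average
```

Both ads score 40 percent top box, yet their average liking differs by three full scale points; a report built on top-box percentages alone would treat them as equivalent.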
In the years both authors have been practicing market research, one dominant reason has surfaced for why people request that we shift our reports from averages to top-box scores: Stakeholders say, “Averages are too hard to understand. Top box is more intuitive.” That we give in to this reasoning suggests one of two things. One, researchers do not have a good conceptual understanding of which statistic should be reported and why, and consequently find it acceptable to report top-box scores in all cases. Or two, researchers sometimes lack the ability to articulate how to interpret averages effectively.
As market researchers, it is our job to understand the advantages and disadvantages of both averages and top-box scores. It is also our job to understand them well enough to explain both statistics to technical and non-technical audiences alike. An inability to communicate effectively about the measures that are most appropriate for our clients is a serious problem for our industry.