New research suggests that when crowdsourcing an answer, it may be more fruitful to pay attention not to the most popular response but to the “surprisingly popular” one. Wharton researchers still believe in the “wisdom of the crowd” but insist that the right sort of questions should be asked.
Marketing professor John McCoy of Wharton explains why his new method for crowdsourcing leads to more accurate results in this interview with Knowledge@Wharton:
If you think about doing something like majority vote, what you’re doing is just taking the most popular answer, the most frequent answer that people give. We say instead that the right thing to do is take what we call the surprisingly popular answer. The idea is that you want to ask the crowd for two pieces of information — not just for their own answer, but also for their predictions about the answers of other people in the crowd. Then, taking the surprisingly popular answer means looking at both the actual vote frequency and the predicted vote frequency, and choosing the answer whose actual vote frequency exceeds its predicted vote frequency.
I can give an example. Consider a simple factual question like: Is Philadelphia the capital of Pennsylvania? The correct answer is no; the capital is Harrisburg. But many people think it is, because Philadelphia is a large, populous city that most people know about. When you ask that question of a crowd of people, as we did with MIT students, only about a third of the crowd gets the correct answer. We can also, though, look at the crowd’s predictions about what people in the crowd will do. If you ask everybody in the crowd to predict what fraction of people will answer no, the crowd thinks that only 23% of people will answer no. So, our method says to select no, the correct answer, because even though it’s not the most popular — only 33% of people endorsed it — it is the surprisingly popular answer: the actual 33% is well above the predicted 23%.
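As a rough illustration of the selection rule described above, the computation fits in a few lines of Python. This is a minimal sketch rather than the researchers’ code; the function name and data layout are invented for the example.

```python
from collections import defaultdict

def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual vote share most exceeds its predicted share.

    votes       -- list of each respondent's own answer, e.g. ["no", "yes", ...]
    predictions -- dict mapping each answer to the crowd's average predicted
                   share of respondents giving that answer (values sum to ~1)
    """
    n = len(votes)
    actual = defaultdict(float)
    for v in votes:
        actual[v] += 1 / n  # actual vote share for each answer
    # Choose the answer with the largest (actual - predicted) gap.
    return max(actual, key=lambda a: actual[a] - predictions.get(a, 0.0))

# The Philadelphia example from the interview: 33% answer "no" (correct),
# but the crowd predicts only 23% will say "no".
votes = ["no"] * 33 + ["yes"] * 67
predictions = {"no": 0.23, "yes": 0.77}
print(surprisingly_popular(votes, predictions))  # -> "no"
```

Note that for a two-option question the actual and predicted shares each sum to one, so the two answers’ gaps are mirror images of each other; only one answer can be surprisingly popular unless the shares match exactly.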
[…]
Some companies at the moment are doing things like simple majority votes or weighting votes by competence. Other companies are using things like prediction markets. So, here’s a very simple method where you just ask a group of employees for their own answers to some questions — some market forecast, say — and you ask them to make predictions about their colleagues’ answers. Then you simply combine these two pieces of information.
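To make the “combine these two pieces of information” step concrete, here is a hypothetical sketch of how that might look for a batch of internal yes/no forecast questions. The questions and numbers are invented for illustration; the binary shortcut relies on the mirror-image property noted above.

```python
def surprisingly_popular_binary(actual_yes, predicted_yes):
    """For a yes/no question, 'yes' is surprisingly popular exactly when its
    actual share exceeds its predicted share (the 'no' gap is the mirror image)."""
    return "yes" if actual_yes > predicted_yes else "no"

# Hypothetical internal forecasts:
# (question, actual 'yes' share, mean predicted 'yes' share)
forecasts = [
    ("Will product X ship this quarter?", 0.40, 0.55),
    ("Will the Q3 sales target be met?",  0.60, 0.45),
]
for question, actual_yes, predicted_yes in forecasts:
    print(question, "->", surprisingly_popular_binary(actual_yes, predicted_yes))
```

In this sketch the first question resolves to “no” and the second to “yes,” even though a bare majority answered “no” on neither: what matters is whether each answer beat the crowd’s expectation of it.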
One of the nice things [about this method] is that it’s got far greater scope than a lot of the things that companies are currently doing. If I’m a big company using prediction markets, I’m very limited in the kinds of questions that I can ask because, for instance, I’ve got to be able to pay people off based on their answers — did this event happen or not happen? Did this product sell or not sell? With our method, you don’t need anything like that. All you need are answers and predictions, and you can apply it immediately.