Carnegie Mellon University

Yan Huang

November 27, 2018

Public Feedback Late in Crowdsourcing Contest Garners Better Results

Traditionally, when a company without dedicated graphic designers needed a logo, it had to pay a freelancer or agency to create one. But a growing trend for businesses seeking occasional professional assistance is crowdsourcing: The company solicits solutions via an online contest and pays the winning entrants for the selected submissions. By engaging multiple sources, the business is more likely to find high-quality options, and it pays only for work it has already seen and approved.

Yan Huang, Assistant Professor of Business Technologies, and her former colleagues at the University of Michigan’s Ross School of Business are studying how providing information to contest entrants at various points during a campaign can garner more successful solutions. In one paper — titled “The Role of Feedback in Dynamic Crowdsourcing Contests: A Structural Empirical Analysis” — the authors find that when companies provide publicly available ratings of entries, the resulting product is more successful.

“Providing performance feedback to contest participants in general has a positive impact on the contest outcome and therefore should be encouraged,” Huang said. She and her colleagues studied online logo design contests and built a dynamic structural model to analyze the effect of providing quantitative feedback at various points during the campaigns. One key feature of these contests is that new entrants can join and existing entrants can submit new entries — either revisions of their previous submissions or brand-new designs — at any point during the contest.

The model showed that contest holders may be able to offer rewards about one-third lower if they make ratings available either throughout the entire contest or only during its second half, compared with offering no feedback or providing it only during the first half. Moreover, providing ratings only during the latter part of a contest yields both more entries overall and more high-quality entries than doing so for the contest’s full duration.

“Revealing the performance of a contest participant helps guide that participant’s submission decisions, but can also discourage other potential entrants from joining the contest and other participants from making additional submissions,” Huang said. “The late feedback policy attains the former benefit while mitigating the latter problem, by only giving feedback after many solvers have had a chance to enter.” 

Read more in the fall/winter 2018 Tepper Magazine.