Crowdsourced Innovation: How to Increase and Measure its Success?

Crowdsourcing innovation from employees or customers is increasingly popular. In recent weeks, crowdsourcing has been one of the frontline responses to the coronavirus outbreak. For example, on April 1, 2020, NASA announced that it would use its crowdsourcing platform (NASA@work) to “collect creative ideas about new ways to address the COVID-19 crisis and the various problems it presents”. Many other firms, NGOs, and governmental agencies have announced similar COVID-19-related initiatives, including Roche Canada (April 2, 2020), Nesta, the UK’s National Endowment for Science, Technology and the Arts (April 14, 2020), and the British Government (April 17), alongside many smaller initiatives on platforms like Kickstarter and Indiegogo. Yet how should the success of these initiatives be measured?


Success Metrics: The “Volume = Success” Heuristic

When crowdsourcing innovation, innovation executives tend to gauge success through measures such as the “number of ideas” or the “number of participants” (see, e.g., the example below from MyStarbucksIdea, which prominently displays volume-based success metrics).

[Image: MyStarbucksIdea dashboard displaying volume-based success metrics]

While a high number of ideas is certainly laudable, at MTI² we have always believed that the quality of the idea maturation process is a better predictor of success. Yet we were surprised by the lack of rigorous evidence testing this assertion.


The ESE Innovation Tournament: Evidence from the Field

On the occasion of the Erasmus School of Economics centennial, we were encouraged to design a crowdsourcing initiative to discover innovative solutions for the future of the school. That’s when we thought: “This is the right opportunity to test some of our hypotheses regarding engagement and maturation in the field, using a scientifically rigorous experimental approach.” Two of us (Stefan Stremersch and Nuno Camacho), together with colleagues from the University of Maryland and the University of Washington, asked all students of the Erasmus School of Economics to contribute ideas to improve the school by 2030. Students submitted their ideas on a commercial platform (Cognistreamer) and then, over the course of five to six weeks, received online coaching and feedback on how to improve their ideas. The tournament ended with a selection of five finalists who pitched their ideas to a grand jury composed of the Dean, other school administrators, the president and vice-president of the school’s largest student association, and three of the paper’s authors. Two of these ideas were selected for further implementation.


When designing the ESE Innovation Tournament, we decided to test our hypothesis that ideators’ “participation intensity” (i.e., their level of engagement) is a better predictor of idea quality than the mere number of ideators or ideas. In addition, we wanted to test the role of the feedback that moderators provide to ideators, which we expected to be a critical driver of engagement. To test these hypotheses, throughout the tournament we experimentally manipulated the type and timing of the moderator feedback given to each ideator and continuously measured each ideator’s engagement (or participation intensity) by tracking the effort they put into the innovation platform, which we could measure through the ideator’s number of pageviews on the platform and the frequency of updates to their idea. We then replicated this experiment in a “forced participation” experiment in one of the authors’ classes, to rule out alternative explanations. We also conducted a large-scale survey among innovation executives at 1,519 firms. Let us highlight three particularly interesting results from our study.
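
To make this concrete, here is a minimal sketch of how a participation-intensity score could be computed from platform event logs. The log format, event names, and weights below are our own illustrative assumptions; the study tracked pageviews and idea updates on Cognistreamer, not this exact code.

```python
from collections import defaultdict

# Hypothetical event log: one record per ideator action on the platform.
# The event names and weights are illustrative assumptions, not the exact
# operationalization used in the published study.
events = [
    {"ideator": "alice", "type": "pageview"},
    {"ideator": "alice", "type": "idea_update"},
    {"ideator": "bob",   "type": "pageview"},
    {"ideator": "alice", "type": "pageview"},
]

# Weight active idea revisions more heavily than passive pageviews (assumption).
WEIGHTS = {"pageview": 1.0, "idea_update": 3.0}

def participation_intensity(event_log):
    """Aggregate a simple engagement score per ideator."""
    scores = defaultdict(float)
    for event in event_log:
        scores[event["ideator"]] += WEIGHTS.get(event["type"], 0.0)
    return dict(scores)

print(participation_intensity(events))
# {'alice': 5.0, 'bob': 1.0}
```

Note the contrast with volume metrics: a volume-based report would simply count ideators and ideas, while a score like this one distinguishes a drive-by submitter from an ideator who keeps returning to refine their idea.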



Result #1: Engagement > Volume


The study led to very interesting results, which is why it was ultimately published in 2019 in the Journal of Marketing*. The first result confirms that the “volume = success” heuristic may lead firms to focus on the wrong success metrics in crowdsourcing. Instead of obsessing about the volume of ideas submitted, firms should focus on stimulating ideators’ engagement -- that is, how often and how intensively they interact with the tournament’s platform. This finding comes from our large-scale survey of innovation executives at 1,519 firms. The overwhelming message from the survey’s respondents was that participation intensity is the key determinant and indicator of idea quality in innovation tournaments. Yet, although engagement data can be readily obtained from most platforms, firms tend to limit their reporting to the number of ideas and the number of ideators. In other words, they pay far less attention to how intensively ideators engage -- for instance, whether moderator feedback prompts them to revise their ideas.


Result #2: You must be cruel to be kind


Our findings regarding moderator feedback fly in the face of conventional wisdom. Negative feedback with constructive criticism was much more effective at sustaining engagement than positive feedback. Meanwhile, the oft-employed “sandwich approach” -- negative feedback made more palatable by surrounding it with positive praise -- had comparatively little impact. In one experiment, almost four times as many participants who had received negative feedback from a moderator updated their ideas as participants who had received positive feedback.


Result #3: Frontload your constructive criticism


The timing of the criticism also mattered. Early negative feedback increased engagement, whereas late negative feedback didn’t. In one experiment, participants who received negative feedback during the early stages of the tournament were 20 percent more likely to update their ideas than those who received negative feedback in the closing stages.

These findings have important ramifications for companies organizing crowdsourced innovation initiatives. Rather than focusing almost exclusively on the volume of participants and ideas, firms should track and incentivize ideator engagement (or participation intensity). To encourage ideators to remain actively engaged in the tournament, firms should carefully design and deploy a moderator feedback strategy with sufficient room for constructive criticism, especially early in the process.
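
As a thought experiment, the three results above can be folded into a simple feedback-scheduling policy. The stage boundary and the wording of the two feedback modes below are our own illustrative assumptions, distilled from the findings rather than taken from the paper:

```python
def feedback_strategy(week: int, tournament_weeks: int = 6) -> str:
    """Illustrative moderator policy: frontload constructive criticism.

    Assumptions (ours, for illustration): the first half of the tournament
    counts as 'early', and feedback is either constructive criticism or
    plain encouragement -- no 'sandwich' padding, which the study found
    to have comparatively little impact.
    """
    if week <= tournament_weeks // 2:
        # Early negative (constructive) feedback sustained engagement best.
        return "constructive criticism: point out concrete weaknesses to fix"
    # Late in the tournament, criticism no longer boosted engagement.
    return "encouragement: acknowledge progress and help polish the pitch"

for week in range(1, 7):
    print(f"Week {week}: {feedback_strategy(week)}")
```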




* You can read more here: https://www.ama.org/2019/03/20/3-strategies-to-make-innovation-tournaments-more-successful/


If you would like to hear more about the results of this study from us, we are always happy to jump into a quick call. Please send your request to nuno.camacho@mti2.eu.

© 2019 MTI²
