Purpose: Firms increasingly integrate a wide range of actors into the early ideation and concept creation phases of innovation processes, leading to the collection of large numbers of ideas. This creates the challenge of filtering the most promising ideas from the large pool of submissions. Involving external stakeholders in the evaluation and selection of submissions (i.e., open evaluation) might be a viable alternative. This paper provides a state-of-the-art analysis of how such open evaluation systems are designed and structured.
Design/methodology/approach: Since open evaluation is a new phenomenon, an exploratory qualitative research approach is adopted. 122 instances of open evaluation in 90 innovation contest cases (selected out of 400 cases) are examined for their design elements.
Findings: This research reveals that open evaluation systems are configured in many different ways. In total, 32 design elements and their respective parameters are identified and described along the six socio-technical system components of an open evaluation system. This study allows for a comprehensive understanding of what open evaluation is and what factors need to be taken into consideration when designing an open evaluation system.
Practical implications: Scholars and professionals may draw insights on what design choices to make when implementing open evaluation.
Originality/value: The comprehensive analysis performed in this study contributes to research on open and user innovation by examining the concept of open evaluation. In particular, it extends knowledge on design elements of open evaluation systems.
Keywords: Open evaluation, open innovation, innovation contests
The proliferation of innovation contests has fostered community-based idea evaluation as an alternative to expert juries for filtering and selecting new product concepts at the fuzzy front end of corporate innovation. We refer to this phenomenon as open evaluation, as all registered participants can engage in jury activities such as voting, rating, and commenting. While previous research on innovation contests and user engagement includes participant-based evaluation, the investigative focus so far has not been on this phenomenon. Open access to jury activities in open evaluation practice contradicts innovation theory, which recommends careful selection procedures to establish expert juries for assessing new product concepts. Additionally, little is known about the contingency factors that influence the performance and acceptance of open evaluation's results. To address these two questions, concerning the objectives of and contingency factors for open evaluation of new product concepts, this study applies exploratory multiple-case research of open evaluation in nine innovation contests. Data collection encompassed expert interviews and complementary sources of evidence. Results indicate that firms pursue six distinct objectives to support participant-based generation and selection of new concepts. In addition, eight contingency factors influence the performance of open evaluation and the acceptance of its results. Finally, the results show that open evaluation output can efficiently complement jury decisions in filtering and selecting ideas for new product development.