
The More the Merrier? The Effects of Community Feedback on Idea Quality in Innovation Contests

Isabella Seeber, University of Innsbruck, [email protected]

Daniel Zantedeschi, Ohio State University, [email protected]

Anol Bhattacherjee, University of South Florida, [email protected]

Johann Füller, University of Innsbruck, [email protected]

Proceedings of the 50th Hawaii International Conference on System Sciences | 2017
URI: http://hdl.handle.net/10125/41686 | ISBN: 978-0-9981331-0-2 | CC-BY-NC-ND

Abstract

Innovation contests represent a novel and popular approach for organizations to leverage the creativity of the crowd for organizational innovations. In this approach, ideators present their initial ideas to a global community of potential users and solicit their feedback for idea improvement or refinement. However, it is not clear which types of feedback lead to the development of better ideas and which contingent factors moderate these relationships. In this study, we examine the role of community feedback on idea development in online innovation contests by using feedback intervention theory to develop a set of hypotheses relating community feedback and idea quality, and then testing those hypotheses using data from the ZEISS VR ONE innovation contest. Our analysis suggests that task-information feedback leads to improvement in idea quality, while task-learning and task-motivation feedback do not, and that the number of users providing feedback moderates the relationship between feedback and idea quality. Implications of our findings for theory and practice are discussed.

1. Introduction

Innovation contests are web-based competitions in which innovators present design solutions for business challenges on the Internet, and a global community of users responds by voting on these ideas, suggesting opportunities for improvement or implementation, and, in general, contributing their collective input on idea design and development [1]. Innovation contests are becoming popular as organizations seek new ways to leverage the collective creativity of their customers, employees, and user communities (the "crowd") to generate new ideas for product and service innovations [2]. Indeed, these contests represent a popular form of crowdsourcing (of ideas) in the emergent discipline of crowd science [3].

¹ https://www.press.bmwgroup.com/deutschland/article/detail/T0022473DE/

Innovation contests are not an entirely new phenomenon. For example, BMW uses its Motorrad Innovation Contest¹ to solicit customer input in designing parts and accessories for BMW's Ducati and Triumph lines of motorcycles, which led to twin "Boxer" engines and many other designs. Starbucks employs the MyStarbucksIdea portal to source product and service ideas from customers, which generated over 150,000 ideas and the implementation of 277 new innovations between 2008 and 2013, including skinny beverages, new flavors like Hazelnut Macchiato, and splash sticks for protecting clothes from coffee spills [4]. IBM uses Innovation Jam to brainstorm ideas for new technologies; in 2006, 150,000 employees, partners, and clients generated more than 46,000 ideas within 72 hours, leading to ideas such as real-time speech translation in multiple languages and three-dimensional online product demonstrations [5]. The popularity of innovation contests has also led to the development of many third-party websites, such as Innocentive, Freelancer, and 99designs, which host innovation challenges for individuals or organizations, where individuals participate with the prospect of winning monetary or non-monetary rewards.

The growing prominence of online contests and crowdsourced innovations reflects innovation as a social collective process initiated and driven by users, in contrast to the traditional research and development process involving specialized staff and specialized laboratories [6]. Online innovation contest websites provide specific design features that allow idea owners or “ideators” (the person generating the idea) to present their initial ideas and community members (lead users,


community moderators, etc.) to provide feedback and ratings to refine those ideas or evaluate the best ideas for implementation [7, 8]. Community feedback may take the form of constructive criticism, suggestions, requesting further details, or simply encouragement [9, 10]. However, the role of feedback during idea generation is ambiguous, with studies reporting both positive [9, 11, 12] and negative effects [13, 14] on idea quality. Some studies have differentiated between different types of feedback [14] while others have not [15]. Furthermore, the role of contingent factors that may moderate the relationship between community feedback and idea quality remains unexplored.

The aim of this research is to examine which types of community feedback lead to the development of good ideas in online innovation contests, and what contingent factors moderate this relationship. The specific research question of interest to this paper is: What is the role of community feedback on idea development in online innovation contests? To address this question, we draw upon feedback intervention theory [16] to postulate four hypotheses relating feedback type, the number of users providing feedback, and idea quality. These hypotheses are tested using secondary data on community feedback for 113 ideas from the ZEISS VR ONE innovation contest.

2. Feedback Intervention Theory and Hypotheses Development

Related research shows that the majority of ideas generated are of low quality [17, 18]. However, these ideas can be improved using feedback from others [19].

Feedback Intervention Theory (FIT) [16] suggests that different types of feedback trigger different types of cognitive processes on creative tasks such as idea development, which in turn affect task outcomes. Feedback describes information intended to “confirm, add to, overwrite, tune, or restructure information in memory, whether that information is domain knowledge, meta-cognitive knowledge, beliefs about self and task, or cognitive tactics and strategies” [20, p. 5740]. Past studies investigated different types of feedback, such as social feedback [15], community feedback [13], directed feedback, random feedback [14], corrective feedback [21], cognitive feedback [22], and process feedback [23], and their effects on performance outcomes such as idea quality [12] or affective outcomes such as satisfaction [13].

A comprehensive meta-analysis by Kluger and DeNisi [16] shows that the effects of feedback on outcomes are highly volatile. Their study reports that feedback affects task performance when (a) there exists a feedback-standard gap, which (b) draws the attention of the feedback target to the gap, and (c) the target deploys the necessary cognitive resources, enacted through cognitive or affective processes, to address this gap. A feedback-standard gap describes a discrepancy between expected goal accomplishment and currently perceived goal accomplishment, as conveyed through the feedback message [16]. A feedback message can close feedback-standard gaps by triggering affective processes such as increased motivation and engagement, or cognitive processes such as a restructured understanding [24]. However, affective and cognitive processes are only enacted when the respective feedback-standard gaps receive attention. Human attention is limited, and therefore not all feedback-standard gaps can be acted upon. Those feedback-standard gaps that do get acted upon result in improved task performance.

In online contests, an ideator may refine an idea based on feedback received from a user because the refinement can move the ideator closer to the standard or the goal of the idea contest. According to FIT, an ideator takes action if the feedback triggers affective or cognitive processes.

FIT distinguishes between two types of feedback: task-motivation and task-learning feedback. Feedback in the form of praise (e.g., "that's a great idea"), and sometimes even destructive criticism or normative cues, has been associated with triggering affective processes [16]. This kind of feedback is called task-motivation feedback because it drives motivation and engagement. Feedback in the form of corrective or improvement suggestions (e.g., "another interesting solution can be mapping out a hypermarket or shopping mall") is associated with cognitive processes aimed at learning [16]. This type of feedback is called task-learning feedback because it facilitates the acquisition of new knowledge.

2.1. Feedback types and their relation to idea quality

Feedback can have facilitating or hindering effects on task performance. It can negatively affect performance if the feedback draws too much attention to the self and depletes cognitive resources that could otherwise be used for task performance [16]. This is the case when feedback is directed at the behaviors and attributes of the person [13], rather than the idea. Such feedback conveys self-relevant information, which is processed differently than task-related information [25]. Task-motivation feedback that is directed at the task, rather than the self, reinforces task outcomes [15]. In online innovation contests, most community members do not know each other and have little knowledge of each other's personal attributes, such as gender, age, skills, or abilities. Hence, their feedback is more likely to be directed at the task rather than the person, and this task-motivation feedback is likely to


direct attention to the task. This should motivate and encourage idea owners to utilize their cognitive resources to keep developing the idea as well as possible. Therefore, we hypothesize:

H1: Task-motivation feedback is positively associated with idea quality.

Task-learning feedback triggers cognitive processes that restructure one's understanding [24]. Learning occurs when people assimilate information, i.e., add new information to their existing cognitive schema, or accommodate their prior knowledge to new information, i.e., re-arrange, re-organize, or re-define their existing knowledge [26]. For learning to occur, feedback must add new perspectives [7], which is likely in online innovation contests that attract people of different types who are often not experts in the given domain. Such feedback from a diverse audience is likely to transform previous knowledge structures and contribute to a new and improved understanding [26]. Consequently, when ideators receive learning feedback that can improve their knowledge, their ideas should also improve. We therefore hypothesize:

H2: Task-learning feedback is positively associated with idea quality.

2.2. Extensions to feedback intervention theory

In addition to task-motivation feedback and task-learning feedback, other research has investigated the role of feedback that provides details about the task environment [27] with cues aimed at clarification [28]. This type of feedback aims at exchanging existing information between feedback provider and receiver (e.g., "Do you already have an idea which software to use for developing the application?") rather than suggesting new perspectives or providing encouragement. We refer to this type of feedback as task-information feedback. Even when the community issues task-information feedback as generic information-seeking messages (e.g., "Please explain"), such feedback tends to affect idea quality positively [1] by helping ideators understand where their ideas require clarification. Addressing such questions as part of idea refinement draws on a new or improved understanding of the idea itself, rather than on suggested enhancements or refinements. Therefore, we suggest:

H3: Task-information feedback is positively associated with idea quality.

While feedback can improve idea quality, this effect is likely to increase with the number of users providing such feedback. Online innovation contests tend to attract hundreds of people with different cultural backgrounds, abilities and skills, and it is likely that many of them may have suggestions on how to improve the original idea, how to clarify existing ideas, or simply words of encouragement. Past research has emphasized that access to many different perspectives allows users to think divergently and come up with more creative ideas [1]. Ideators seek input from diverse others and wish to see their problem from diverse perspectives [29], and this effect will be greater if the variety of participants providing feedback is high [7]. Therefore, we posit:

H4: The number of unique users moderates the relationship between idea quality and task-motivation feedback (H4a), task-learning feedback (H4b), and task-information feedback (H4c).

3. Methods

To empirically test the above hypotheses, we use publicly available data from the ZEISS VR ONE open innovation contest (https://vronecontest.zeiss.com). This contest was held from December 2, 2014 to February 16, 2015 and hosted on the innovation platform of HYVE AG (https://www.hyvecommunity.net). The challenge was to contribute innovative ideas for virtual reality apps, or completed apps, for the VR ONE headset. We considered only evaluated ideas submitted to the category "innovative ideas for apps," since ideas for completed apps received hardly any feedback. The top two ideas were awarded non-monetary prizes, such as an iPhone 6 and the ZEISS VR ONE headset. Examples of ideas submitted in this category included virtual reality applications to simulate how different animals see the world from their own vantage point, how to tailor a dress, and how to redesign a room before actual refurnishing. During the time frame of our study, 482 ideas were submitted; 149 ideas received feedback and were included in our observations.

Registered community members could post their ideas on HYVE's innovation platform, provide comments on others' ideas (see Figure 1), as well as like, rate, and bookmark these ideas. The system informed participants when new comments were posted on their ideas, when their ideas were bookmarked, or when new responses were posted to previous comments. Participants could use private walls to post their ideas privately and invite selected people to check out their ideas. Participants' activity streams documented all of their submitted ideas with title, excerpt of description, number of comments, number of likes, and average idea quality rating (see Figure 2). Users could browse


through ideas using a drop-down menu to customize the activity stream, for example, to see the most-commented, the most liked, or the best rated ideas first.

3.1. Measures and operationalization

The dependent variable "idea quality" was measured in a binary manner as good (1) or not good (0), as assessed jointly by a team of six HYVE employees who had worked with numerous such ideas in the past. These employees rated each idea in terms of its creativity, feasibility, originality, and market potential.

Figure 1: Example of a discussion thread

Figure 2: Activity stream

Following several rounds of discussion, the team selected twenty ideas for further consideration. These twenty ideas represent "good ideas" and the remaining ideas were designated as "not good".

A comment was considered a feedback message when it was provided by a user other than the ideator. We had a total set of 264 such feedback messages, which we coded into one or more of the three categories of task-motivation, task-learning, and task-information feedback. Each code was binary (1 = yes, 0 = no). Table 1 provides examples of feedback messages and their coding. These categories were non-exclusive in that a feedback message could simultaneously belong to multiple feedback categories. Coding was done by the first author. However, to assess coding accuracy and quality, two authors independently coded a random sample of 10% of the feedback messages at the start of the coding process and reached a Cohen's Kappa [30] for intercoder reliability of 84%. Coding differences between the coders were analyzed and resolved by consensus.
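For illustration only (this is not the authors' actual coding script), a minimal sketch of such an intercoder reliability check, assuming two coders' binary judgments for one feedback category on the same hypothetical sample of messages:

```python
# Minimal sketch of an intercoder reliability check (illustrative only).
# The judgment vectors below are hypothetical, not the study's data.
from sklearn.metrics import cohen_kappa_score

# Binary codes (1 = category present) assigned independently by two coders
# to the same 10% sample of feedback messages, aligned by message.
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(coder_a, coder_b)  # chance-corrected agreement [30]
print(f"Cohen's kappa: {kappa:.2f}")
```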

"Unique users" describes the number of distinct users providing feedback to an idea [7]. This construct was measured as the count of distinct users who contributed feedback to a given idea, excluding multiple (e.g., follow-up) entries from the same person and messages generated by the ideator.
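As a sketch of how such a count could be derived from raw comment data (the table layout and column names here are hypothetical placeholders, not the platform's actual export format):

```python
# Illustrative sketch: distinct feedback providers per idea, excluding the
# ideator's own comments. Column names are hypothetical placeholders.
import pandas as pd

comments = pd.DataFrame({
    "idea_id":      [1, 1, 1, 2, 2],
    "commenter_id": ["u1", "u1", "u2", "u9", "u3"],
    "ideator_id":   ["u7", "u7", "u7", "u9", "u9"],
})

# Keep only comments made by someone other than the idea's owner ...
feedback = comments[comments["commenter_id"] != comments["ideator_id"]]
# ... and count distinct commenters per idea (follow-up comments collapse).
unique_users = feedback.groupby("idea_id")["commenter_id"].nunique()
print(unique_users)  # idea 1 -> 2 unique providers, idea 2 -> 1
```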

We also controlled for the number of days the idea was on the platform, the number of other ideas from the ideator, and the numbers of views, likes, and comments.

Table 1: Codes and Descriptions

Task-motivation feedback: A feedback message that shows solidarity/antagonism, tension/tension release, (dis)agreement, or praise. Examples: "Great topic!", "Very nice idea!"

Task-learning feedback: A feedback message that suggests enhancements, modifications, or new perspectives to the existing idea. Examples: "Still images won't give you the feeling of traveling to past days. That's why this idea will need 3D models of places from past days.", "You can contact some art galleries and present them your product, thus allowing visitors to admire 2 galleries while paying for one"

Task-information feedback: A feedback message that requests clarification of the task environment or implementation, or exchanges information about similar tasks. Examples: "How do you record the videos?", "how would you know if the user is walking in a direction?", "A plane cannot be landed by anyone, not even with AR."

3.2. Data analysis

Given the binary nature of our dependent variable (idea quality), we employed a linear probability model (LPM) to estimate the magnitude of the direct and moderated effects postulated in our hypotheses. The LPM provides an easy way to incorporate and interpret interaction effects, and the obtained estimates have good asymptotic properties [31]. All variables in the regression model were standardized to account for differing scales and to avoid inflating multicollinearity. Correlation analysis (see Table 2) indicates some high bivariate correlations (>0.7), suggesting potential multicollinearity [32]. We examined the variance inflation factor (VIF) to determine whether any of the variables should be dropped from our analysis due to multicollinearity. None of the variables exceeded the recommended threshold of 5, and therefore all variables were retained for further analysis [33]. We used IBM SPSS and Matlab for these analyses.
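The estimation was done in SPSS and Matlab; purely as an illustration of the procedure described above (standardization, interaction terms, LPM via OLS, and a VIF screen), here is a sketch in Python/statsmodels using synthetic data and hypothetical variable names:

```python
# Illustrative sketch of the analysis pipeline (not the authors' SPSS/Matlab
# code): standardize predictors, build interaction terms, fit a linear
# probability model (OLS on a 0/1 outcome), and check VIFs. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 149
df = pd.DataFrame({
    "idea_quality": rng.integers(0, 2, n),  # binary outcome (good / not good)
    "mf": rng.poisson(1.2, n),              # task-motivation feedback count
    "lf": rng.poisson(0.4, n),              # task-learning feedback count
    "tf": rng.poisson(1.2, n),              # task-information feedback count
    "uu": rng.poisson(1.4, n),              # unique users providing feedback
})

# Standardize predictors, then form interaction terms from the z-scores.
for col in ["mf", "lf", "tf", "uu"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()
for col in ["mf", "lf", "tf"]:
    df[col + "_x_uu"] = df[col + "_z"] * df["uu_z"]

X = sm.add_constant(df[["mf_z", "lf_z", "tf_z", "uu_z",
                        "mf_x_uu", "lf_x_uu", "tf_x_uu"]])
lpm = sm.OLS(df["idea_quality"], X).fit()   # LPM: OLS on a binary outcome
print(lpm.summary())

# VIF screen: predictors above a threshold of 5 would be candidates to drop.
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)
```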

4. Results

The results of the LPM analysis are shown in Table 3. We performed a series of nested model analyses to progressively evaluate the incremental contributions of the control variables, the main effects, and the interaction effects, using three models. In Model 1, the control variables number of other ideas, days on platform, number of likes, and number of views were not significant, but the number of comments was significant. This is hardly surprising, as the number of comments is also a proxy for the level of attention an idea receives. In Model 2, the standardized independent variables were added, and in Model 3, the standardized interaction terms were added. Adjusted R-squared progressively increased from Model 1 to Model 2 to Model 3, providing evidence that adding the independent and moderator variables increased the explained variance in the dependent variable. This is also supported by F-tests of model comparison. Overall, the results supported our argument that user feedback improved idea quality and that different types of feedback have differential effects on idea quality.
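Such a nested comparison can be illustrated as follows (again a sketch with synthetic data and hypothetical names, not the authors' analysis); the incremental F-test asks whether adding the interaction terms significantly improves the fit of the main-effects model:

```python
# Illustrative sketch of a nested-model comparison (Model 2 vs. Model 3).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 149
df = pd.DataFrame(rng.standard_normal((n, 4)), columns=["mf", "lf", "tf", "uu"])
df["idea_quality"] = (rng.random(n) < 0.14).astype(int)  # ~14% "good" ideas
for c in ["mf", "lf", "tf"]:
    df[c + "_x_uu"] = df[c] * df["uu"]

restricted = sm.OLS(df["idea_quality"],
                    sm.add_constant(df[["mf", "lf", "tf", "uu"]])).fit()
full = sm.OLS(df["idea_quality"],
              sm.add_constant(df[["mf", "lf", "tf", "uu",
                                  "mf_x_uu", "lf_x_uu", "tf_x_uu"]])).fit()

f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f"Delta adj. R^2 = {full.rsquared_adj - restricted.rsquared_adj:.3f}, "
      f"F = {f_stat:.2f}, p = {p_value:.3f}")
```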

Table 2: Descriptive statistics and correlations

                                 Mean   S.D.   IQ     MF     LF     TF
Idea quality (IQ)                0.14   0.35
Task-motivational feedback (MF)  1.18   1.25   0.32*
Task-learning feedback (LF)      0.42   0.74   0.24*  0.38*
Task-information feedback (TF)   1.24   1.29   0.39*  0.59*  0.40*
Unique users (UU)                1.38   1.41   0.35*  0.82*  0.48*  0.72*

IQ = idea quality, MF = motivational feedback, LF = learning feedback, TF = task-information feedback, UU = unique users. Additional controls are omitted for readability; an extended table is available upon request.

Table 3: Linear Probability Model results: coefficient (standard error)

Dependent variable: Idea quality            Model 1 (controls only)   Model 2 (main effects)   Model 3 (interaction effects)
Intercept                                   -0.010 (0.078)            0.014 (0.078)            0.014 (0.031)
Control variables
  Days on platform                          -0.001 (0.001)            -0.036 (0.027)           -0.001 (0.001)
  Number of other ideas                     0.006 (0.003)             0.050 (0.027)            0.041 (0.003)
  Views                                     0.001 (0.001)             0.002 (0.001)            0.002 (0.001)
  Likes                                     -0.004 (0.015)            -0.010 (0.017)           -0.008 (0.017)
  Comments                                  0.028 (0.012)*            -0.000 (0.020)           -0.000 (0.020)
Independent variables
  Task-motivation feedback                  --                        0.026 (0.038)            0.154 (0.061)*
  Task-learning feedback                    --                        0.023 (0.041)            0.135 (0.091)
  Task-information feedback                 --                        0.078 (0.033)*           -0.054 (0.058)
  Unique users                              --                        0.000 (0.048)            0.010 (0.054)
Interaction effects
  Task-motivation feedback * unique users   --                        --                       -0.047 (0.018)*
  Task-learning feedback * unique users     --                        --                       -0.047 (0.032)
  Task-information feedback * unique users  --                        --                       0.040 (0.028)*
Adjusted R-squared                          0.141                     0.162                    0.206
N                                           149                       149                      149

Model 1: F = 5.87, p < .001, df = 5. Model 2: F = 4.19, p < .001, df = 9. Model 3: F = 5.254, p < .001, df = 12.
Significance levels: * p < 0.05, ** p < 0.01.

Hypotheses 1-3 suggested a positive association between the three feedback types and idea quality. The main effects in Model 2 indicate a significant positive effect for task-information feedback (β = 0.078, p = 0.02), but no significant effect was found for task-motivation or task-learning feedback. Therefore, H1 and H2 were not supported, but H3 was supported.

Hypothesis 4 examined the potential moderating effect of the number of unique users on the relationship between the three types of feedback and idea quality. We found no interaction effect between task-learning feedback and the number of unique users, failing to support H4b. However, the interaction effects were significant for task-motivation feedback (H4a) and task-information feedback (H4c).

To visually examine Hypotheses H4a and H4c, we further investigated the moderating effects of unique users by means of total effect size plots (see Figure 3) and interaction plots (see Figure 4). The total effect sizes in Figure 3 depict the magnitude of the effect that each predictor variable has on the response variable in the linear model when that variable is increased from its minimum to its maximum value. This allows a direct comparison of the effects of the predictors regardless of their underlying scale. Each effect is shown as a circle, and the horizontal bar indicates the confidence interval for the estimated effect. Panel A shows the estimated effects of the predictors without considering interaction effects. It can be seen that task-information feedback has a main effect of about 0.5, compared to the other feedback types with effect magnitudes below 0.2. This suggests that, in the counterfactual, increasing task-information feedback for an idea from 0 to 7 messages generates a 60% increase in the probability that the idea is a good idea. All feedback types have a positive main effect. However, the picture changes when the moderating effect of the number of unique users is considered. In this case, task-motivation and task-learning feedback have more prevalent positive effects than task-information feedback, as shown in Figure 3, Panel B. For instance, the same counterfactual exercise for task-information feedback now delivers a more modest and non-significant increase of 8%; this is due to the negative effect provided by the interaction with the number-of-users term (as also shown in Table 3).


Figure 3: Effect Sizes of models with and without interaction effects

Figure 4: Interaction effect for different types of feedback

We further elaborate on the role of the moderating effects by considering the interaction plots in Figure 4. These plots provide a more detailed picture of the direction of the effect depending on the number of unique users (x-axis) for the minimum, maximum, and average number of feedback messages per feedback type. This visualization is particularly helpful for understanding when sign reversals occur, which are indicative of potential trade-offs between the number of users providing feedback and the effectiveness of the feedback towards idea quality. For example, consider Figure 4, Panel A: a counterfactual exercise in which we vary both the number of users and the number of task-motivation feedback messages. We see that, as the number of users increases, more motivational feedback eventually becomes detrimental. With more than three users, the


interaction effect cancels out the previously positive main effect of task-motivation feedback. Consequently, the probability that an idea is a good idea decreases by 20% when more than six unique users provided motivational feedback messages. A similar interpretation is evident for task-learning feedback, where again more learning feedback and the presence of more than three users lead to a negative effect; note that here, too, the interaction effect becomes negative with more than three unique users. Only for task-information feedback does it appear that more feedback and an increased number of unique users have a positive effect on idea quality. These additional analyses suggest that Hypotheses H4a and H4c can be supported, but only for ideas where fewer than four unique users provided feedback.
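The roughly three-user threshold can also be read off the Model 3 coefficients directly. As a back-of-the-envelope illustration (not a formal result of the analysis, and glossing over the fact that the variables were standardized), the marginal effect of task-motivation feedback (MF) at a given number of unique users (UU) is approximately:

```latex
% Illustrative back-of-the-envelope reading of Table 3, Model 3
% (ignores standardization, so the threshold is only indicative)
\[
  \frac{\partial \widehat{\Pr}(\text{good idea})}{\partial\,\mathrm{MF}}
  \approx \beta_{\mathrm{MF}} + \beta_{\mathrm{MF}\times\mathrm{UU}}\,\mathrm{UU}
  = 0.154 - 0.047\,\mathrm{UU}
\]
```

This expression changes sign near UU ≈ 0.154/0.047 ≈ 3.3, consistent with the roughly three-user threshold described above.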

Discussion and limitations

This study aimed at gaining a better understanding of how different types of feedback and the number of unique users affect idea quality in an innovation contest. Our findings revealed that task-motivation and task-learning feedback have different effects than task-information feedback. We also showed that the number of unique users providing feedback plays an important moderating role, and that ignoring this role may lead to wrong conclusions about the effects of the feedback types. On a general note, it is interesting to see that in an innovation contest with more than 500 active users, only a small number of feedback providers is required to drive good ideas. In fact, our results indicate that a small team of four users (one ideator and three feedback providers) can produce a good idea. From past research on small teams, we know that interaction behavior differs considerably depending on group size [34]. Whereas members of bigger groups influence decisions more through dominant behavior, ideators in smaller groups base their decisions more on the people they interact with [35]. It appears that similar patterns of interaction behavior exist in online innovation contests as well.

It is not too surprising that task-learning feedback, which consists of suggestions for enhancements, modifications, and new perspectives, was not a significant determinant of good ideas. Recent findings from an online co-creation study can provide some explanation [13]. These findings showed that feedback may lower the uniqueness of ideas, as ideas shift after refinement from their extreme positions towards the mainstream. If this also holds true for innovation contests, it could be that task-learning feedback helped refine the idea while trading off its unique value. Moreover, learning new knowledge is challenging for ideators because people, in general, tend to experience difficulties in interpreting the consequences and implications of new alternatives that are outside their domain of expertise, are not able to recognize the relevance of new information, and may lack the cognitive frames within which these new alternatives could be explored [36]. Here, too, we see that "too many cooks spoil the broth": merely attracting more user feedback to an idea is no guarantee that it will become a good idea.

4.1. Implications

Our findings have implications for research, practice, and technology design. This research contributes to Feedback Intervention Theory [16] by providing a refined conceptualization and operationalization of a new type of feedback (task-information feedback) and a new moderator (number of distinct users). While task-motivation feedback and task-learning feedback represent the affective and cognitive processes by which feedback improves task outcomes, we demonstrate the salience and importance of task-information feedback, especially in the context of creative work such as idea development. Moreover, we show that feedback does not always help improve idea quality and that its effect is contingent on the number of unique users in the user community.

Our findings can also inform the training of moderators responsible for driving discussions and motivating idea refinement in innovation contests. Moderators should refrain from providing more task-motivation and task-learning feedback when an idea has already received a lot of praise (task-motivation) and new suggestions for improvement (task-learning) from many other users. This is clearly difficult for moderators to estimate, as they cannot predict how many new users will contribute feedback after them. According to our findings, moderators should therefore adopt a humble facilitator role at the beginning of the contest, characterized by behavior aimed at clarification and information exchange [37], which can have a positive effect on idea quality even as the number of unique users increases.

The moderating effect of the number of unique users also has implications for the design of contest platforms. Our findings suggest that "the more the merrier" does not necessarily apply to users providing feedback on good ideas. Technology could therefore be designed to help manage the number of unique and active users. Many contest platforms help users find interesting ideas by providing functionality that shows the most-commented ideas. We believe that this is one reason why a few ideas received many comments while most ideas received few to no comments. Algorithms that infuse ideas with no feedback into the results page, and rank down those ideas that already have more than three unique feedback providers, could help to get the most out of community feedback.
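As a toy illustration of such a ranking rule (the scoring weights, field names, and threshold handling here are our own hypothetical choices, not an existing platform feature):

```python
# Illustrative sketch of the re-ranking rule suggested above: ideas with no
# feedback are boosted, ideas that already have more than three unique
# feedback providers are ranked down. Weights and field names are hypothetical.
def rank_ideas(ideas):
    """ideas: list of dicts with 'id', 'comments', and 'unique_feedback_users'."""
    def score(idea):
        s = idea["comments"]                    # baseline: popular ideas first
        if idea["unique_feedback_users"] == 0:
            s += 10                             # boost ideas that still need feedback
        elif idea["unique_feedback_users"] > 3:
            s -= 10                             # demote ideas that already had enough
        return s
    return sorted(ideas, key=score, reverse=True)

ideas = [
    {"id": "A", "comments": 12, "unique_feedback_users": 6},
    {"id": "B", "comments": 0,  "unique_feedback_users": 0},
    {"id": "C", "comments": 3,  "unique_feedback_users": 2},
]
print([i["id"] for i in rank_ideas(ideas)])     # -> ['B', 'C', 'A']
```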


4.2. Limitations and directions for future work

There are many possible directions for future research, drawing from the two broad categories of limitations in this study: the nature of community members and the evolution of feedback.

For this study, we relied on community feedback data with no information on the ideators. An implicit selection bias may exist, as users contributed to submitting or developing ideas because of their interest in virtual reality. This may limit the generalizability of the findings beyond the environment studied in this work. Furthermore, not everyone behaves the same way when receiving feedback. The ability to understand a feedback message and act upon it is one important indicator [38] of whether or not feedback gets assimilated, leading to improved idea descriptions. This information could be useful in identifying richer patterns of heterogeneous feedback (even as the presence of other unmeasured confounders could bias the size of the moderation effects) and in recognizing polarization effects due to negative or positive feedback, which are known to be prevalent in online communities. Future research could therefore control for effects that exist due to ideators' ability to process the content of the feedback message.

Moreover, we studied the effects of the types of feedback on idea quality as a black box. In order to better understand feedback effects, a more in-depth examination of the idea generation process that leads to these outcomes is required [39]. It might be that the timing of feedback or the status of feedback providers plays an important role. In addition, we conceptualized each type of feedback as an independent variable, not considering any interaction effects that may exist among feedback types. A well-designed laboratory experiment might help disentangle such potential effects and provide useful insights into the dynamic effects of feedback. We hope that our study sparks more research in the field in order to test for causality and assess the role of both moderators and mediators, such as intermediate outcomes.

Acknowledgements

The research leading to the presented results was partially funded by the Austrian Science Foundation (Erwin-Schrödinger Scholarship of the FWF, Project Nr. J3735-G27).

References

[1] A. C. Bullinger, A. K. Neyer, M. Rass, and K. M. Moeslein, "Community-based innovation contests: where competition meets cooperation," Creativity and Innovation Management, vol. 19 (3), 2010, pp. 290-303.

[2] A. King and K. R. Lakhani, "Using open innovation to identify the best ideas," MIT Sloan Management Review, vol. 55 (1), 2013, p. 41.

[3] J. Prpic and P. Shukla, "Crowd Science: Measurements, Models, and Methods," in 2016 49th Hawaii International Conference on System Sciences (HICSS), 2016, pp. 4365-4374.

[4] Business Wire. (2013, April 13). Starbucks Celebrates Five-Year Anniversary of My Starbucks Idea. Available: http://www.businesswire.com/news/home/20130328006372/en/Starbucks-Celebrates-Five-Year-Anniversary-Starbucks-Idea

[5] O. M. Bjelland and R. C. Wood, "An inside view of IBM's 'Innovation Jam'," MIT Sloan Management Review, vol. 50 (1), 2008, p. 32.

[6] K. J. Boudreau and K. R. Lakhani, "Using the crowd as an innovation partner," Harvard Business Review, vol. 91 (4), 2013, pp. 60-69.

[7] A. Armisen and A. Majchrzak, "Tapping the innovative business potential of innovation contests," Business Horizons, vol. 58 (4), 2015, pp. 389-399.

[8] A. Majchrzak and A. Malhotra, "Towards an information systems perspective and research agenda on crowdsourcing for innovation," The Journal of Strategic Information Systems, vol. 22 (4), 2013, pp. 257-268.

[9] J. Füller, K. Hutter, J. Hautz, and K. Matzler, "User Roles and Contributions in Innovation-Contest Communities," Journal of Management Information Systems, vol. 31 (1), 2014, pp. 273-308.

[10] C. Van Delden, Crowdsourced Innovation: Revolutionizing Open Innovation with Crowdsourcing, innosabi Publishing, Munich, 2014.

[11] S. Adamczyk, J. Haller, A. C. Bullinger, and K. M. Möslein, "Knowing is Silver, Listening is Gold: On the importance and impact of feedback in IT-based innovation contests," in Wirtschaftsinformatik, 2011, p. 97.

[12] R. E. Potter and P. Balthazard, "The Role of Individual Memory and Attention Processes during Electronic Brainstorming," MIS Quarterly, vol. 28 (4), 2004, pp. 621-643.

[13] C. Hildebrand, G. Häubl, A. Herrmann, and J. R. Landwehr, "When social media can be bad for you: Community feedback stifles consumer creativity and reduces satisfaction with self-designed products," Information Systems Research, vol. 24 (1), 2013, pp. 14-29.

[14] J. O. Wooten and K. T. Ulrich, "Idea generation and the role of feedback: Evidence from field experiments with innovation tournaments," Available at SSRN 1838733, 2014.


[15] J. Y. Moon and L. S. Sproull, "The role of feedback in managing the Internet-based volunteer work force," Information Systems Research, vol. 19 (4), 2008, pp. 494-515.

[16] A. N. Kluger and A. DeNisi, "The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory," Psychological bulletin, vol. 119 (2), 1996, p. 254.

[17] K. Girotra, C. Terwiesch, and K. T. Ulrich, "Idea generation and the quality of the best idea," Management Science, vol. 56 (4), 2010, pp. 591-605.

[18] B. A. Reinig, R. O. Briggs, and J. F. Nunamaker, "On the measurement of ideation quality," Journal of Management Information Systems, vol. 23 (4), 2007, pp. 143-161.

[19] J. Lorenz, H. Rauhut, F. Schweitzer, and D. Helbing, "How social influence can undermine the wisdom of crowd effect," Proceedings of the National Academy of Sciences, vol. 108 (22), 2011, pp. 9020-9025.

[20] P. Winne and D. Butler, "Student cognition in learning from teaching," International encyclopedia of education, vol. 2, 1994, pp. 5738-5775.

[21] S.-W. Yeh and J.-J. Lo, "Using online annotations to support error correction and corrective feedback," Computers & Education, vol. 52 (4), 2009, pp. 882-892.

[22] K. Sengupta and T. K. Abdel-Hamid, "Alternative conceptions of feedback in dynamic decision environments: an experimental investigation," Management Science, vol. 39 (4), 1993, pp. 411-428.

[23] S. Geister, U. Konradt, and G. Hertel, "Effects of process feedback on motivation, satisfaction, and performance in virtual teams," Small group research, vol. 37 (5), 2006, pp. 459-489.

[24] J. Hattie and H. Timperley, "The power of feedback," Review of educational research, vol. 77 (1), 2007, pp. 81-112.

[25] T. T. Kircher, C. Senior, M. L. Phillips, P. J. Benson, E. T. Bullmore, M. Brammer, et al., "Towards a functional neuroanatomy of self processing: effects of faces and words," Cognitive Brain Research, vol. 10 (1), 2000, pp. 133-144.

[26] J. Kimmerle, U. Cress, and C. Held, "The interplay between individual and collective knowledge: technologies for organisational learning and knowledge building," Knowledge Management Research & Practice, vol. 8 (1), 2010, pp. 33-44.

[27] W. K. Balzer, L. M. Sulsky, L. B. Hammer, and K. E. Sumner, "Task information, cognitive information, or functional validity information: which components of cognitive feedback affect performance?," Organizational behavior and human decision processes, vol. 53 (1), 1992, pp. 35-54.

[28] B. Tan, K.-K. Wei, and J. Lee-Partridge, "Effects of facilitation and leadership on meeting outcomes in a group support system environment," European Journal of Information Systems, vol. 8 (4), 1999, pp. 233-246.

[29] S. Hasan and R. Koning, "Conversational Peers and Idea Generation: Evidence from a Field Experiment," under submission.

[30] J. Cohen, "A Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement, vol. 20 (1), 1960, pp. 37-46.

[31] J. Wooldridge, "Binary Dependent Variables: The Linear Probability Model," in Introductory Econometrics: A Modern Approach. vol. 5th edition, J. Wooldridge, Ed., ed: MIT Press, 2013, pp. 248-253.

[32] C. F. Mela and P. K. Kopalle, "The impact of collinearity on regression analysis: the asymmetric effect of negative and positive correlations," Applied Economics, vol. 34 (6), 2002, pp. 667-677.

[33] P. A. Rogerson, Statistical methods for geography: a student's guide: Sage Publications, 2010.

[34] P. D. Reynolds, "Comment on 'The Distribution of Participation in Group Discussions' as Related to Group Size," American Sociological Review, 1971, pp. 704-706.

[35] N. Fay, S. Garrod, and J. Carletta, "Group discussion as interactive dialogue or as serial monologue: The influence of group size," Psychological Science, vol. 11 (6), 2000, pp. 481-486.

[36] A. Afuah and C. L. Tucci, "Crowdsourcing as a solution to distant search," Academy of Management Review, vol. 37 (3), 2012, pp. 355-375.

[37] V. K. Clawson, R. P. Bostrom, and R. Anson, "The Role of the Facilitator in Computer-Supported Meetings," Small Group Research, vol. 24 (4), 1993, pp. 547-565.

[38] R. E. Petty and J. T. Cacioppo, The elaboration likelihood model of persuasion: Springer, 1986.

[39] N. H. Lurie and J. M. Swaminathan, "Is timely information always better? The effect of feedback frequency on decision making," Organizational Behavior and Human Decision Processes, vol. 108 (2), 2009, pp. 315-329.
