DOI: https://doi.org/10.7341/20181445 JEL codes: O30, M10, M31

Received 4 July 2017; Revised 11 December 2017, 25 October 2018, 31 October 2018; Accepted 2 November 2018

Regina Lenart-Gansiniec, Ph.D., Jagiellonian University, 30-348 Krakow, ul. Łojasiewicza 4, Poland

Abstract

Crowdsourcing is a relatively new concept and, despite the interest of researchers, still little is known about it. At the same time, one observes difficulties of a cognitive and practical nature. This has become a premise for reflecting on the methodology of research on this subject. The article identifies the existing procedures for studying crowdsourcing, with particular attention to the methodological challenges that researchers of this concept may face. The article was written based on a systematic literature review, whose results enabled the formulation of some methodological guidelines for further research. Research should be conducted taking into account three levels of crowdsourcing: organization, technology, and participant. In addition, a combined quantitative and qualitative approach will enable the expansion of knowledge about crowdsourcing.

Keywords: crowdsourcing, methodology, research procedure, research methods

INTRODUCTION

The notion of crowdsourcing appeared in the subject literature for the first time in 2006, owing to Howe. At first, crowdsourcing gained popularity in management sciences due to its potential for innovative problem solving (Afuah & Tucci, 2013). In subsequent years, researchers saw its benefits related to, inter alia: developing business processes, creating open innovations (Burger-Helmchen & Pénin, 2010), building competitive advantage (Leimeister & Zogaj, 2013), and accessing experience, innovativeness, and information (Aitamurto, Leiponen, & Tee, 2011). Crowdsourcing also enables crisis management, expands an organization’s existing activity, creates an organization’s image, improves communication with the environment, and optimizes the costs of an organization’s activity. For these reasons, it has become a megatrend in economic practice – more and more organizations reach for it on account of its potential business value alone (Leimeister, Huber, Bretschneider, & Krcmar, 2009).

Currently, one may observe a growing interest in crowdsourcing among both scientists and practitioners (West, Salter, Vanhaverbeke, & Chesbrough, 2014). Many reasons can be indicated for this growing attention, which argue for the currently observed, and expected future, intensification of the discourse on crowdsourcing. Brabham (2008) ascertained that crowdsourcing is a new, exciting direction for research, which is of great importance to the whole organization (Colombo, Buganza, Klanner, & Roiser, 2013). It is considered in the literature as a rising phenomenon based on Web 2.0, which draws the attention of both practitioners and scientists. The possibilities and benefits that come from using crowdsourcing constitute the source of its popularity. There is even a conviction that, in the coming years, crowdsourcing will be a dynamic and active area of research (Zhao & Zhu, 2014). Moreover, crowdsourcing is beginning to play a very important role in a number of fields (Tapscott & Williams, 2007). One observes the growing importance of these issues in medical sciences (Callaghan, 2014), technical sciences (Halder, 2014), and management sciences.

Despite the importance and establishment of crowdsourcing in management sciences, it has not yet seen comprehensive and cross-case analyses. The existing scientific output is modest and mainly conceptual in nature. One may therefore ascertain that the research field under consideration is in a phase of early growth – this also concerns the methodology of research. Crowdsourcing may thus be considered a highly topical area of consideration.

The article aims to identify the existing procedures for studying crowdsourcing, with particular attention to the methodological challenges which researchers of this concept may face. In addition, based on other researchers’ recommendations, methodological guidelines for further research were formulated. The article was written based on a systematic literature review. The largest full-text databases, i.e., Ebsco, Elsevier/Springer, Emerald, ProQuest, Scopus, and ISI Web of Science, which include the majority of journals on strategic management, were analyzed. In order to establish the state of knowledge and existing findings, a review of databases in Poland, BazEkon and CEON, was also conducted. In total, 54 publications from the English-language databases and 41 from the Polish-language databases, covering the period 2006-2017, were analyzed.

LITERATURE REVIEW

The concept of crowdsourcing was introduced into the economic literature by J. Howe, an editor of Wired magazine, in June 2006. In his article entitled “The Rise of Crowdsourcing,” he describes various organizations making use of the Internet to establish cooperation with customers and engage them in creating innovations. The definition of crowdsourcing, proposed by Howe after consulting his editorial colleague M. Robinson, appeared one month after the article, on a blog run by the editor (www.crowdsourcing.com). He defined crowdsourcing in the so-called White Book as the “act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively) but is also often undertaken by sole individuals” (Howe, 2006). The second definition, proposed in the so-called “Soundbyte,” considers crowdsourcing as the “application of Open Source principles to fields outside of software” (Howe, 2008). The author considered crowdsourcing a tool or a way that helps organizations acquire a free or inexpensive workforce.

Howe points out that crowdsourcing is a notion which owes its beginnings to Surowiecki. Howe emphasizes the importance of the crowd in crowdsourcing and of the forces that activate it to take action. He assumes that the crowd is distinguished by wisdom and that each of its members possesses knowledge or skills which may become valuable to someone. The basis here is collective intelligence and cooperation, which may contribute to creating value, choosing the best solutions, gathering opinions, and formulating judgments.

A continuator of Howe’s concept is Brabham. Following numerous publications in the years 2008-2012, he proposed the first definition in his 2013 book entitled “Crowdsourcing.” According to Brabham, crowdsourcing is not “just old wine in new bottles.” The author gives examples of open calls for solving difficult problems: creating the Oxford English Dictionary in the 1800s by means of open discussions, and the Alkali Prize for developing an alkali production method, founded in 1775 by Louis XVI. In Brabham’s opinion, these are not examples of crowdsourcing, since crowdsourcing is only present when the organization has a task to be performed and the online community carries it out voluntarily. The result of these actions is mutual benefit for both parties. For Brabham, crowdsourcing is an Internet-dispersed model of problem solving and production, a tool for social participation, a planning tool for governments, and a method of building and processing a significant number of shared resources.

With time, new definitions started to appear which considered the role of the Internet as a characteristic moderator (Quinn & Bederson, 2011). Crowdsourcing became linked with establishing collaboration and relations with virtual communities (Yang, Adamic, & Ackerman, 2008), making use of their wisdom (Surowiecki, 2004) to solve problems (Vukovic, Mariana, & Laredo, 2009), and creating innovative solutions (Sloane, 2011) and open-source software. In crowdsourcing, it is wisdom and collective intelligence that gain importance. The crowd becomes wise, rational, kind, and useful (Gloor & Cooper, 2007; Wexler, 2011). Most authors acknowledge that a crowd is a general, usually undefined, large group of people – an online public (Kleemann, Voß, & Rieder, 2008) – often called users, consumers, clients, voluntary users, or online communities (Chanal & Caron-Fasan, 2008; Whitla, 2009). It is accepted that the crowd in crowdsourcing constitutes a group of amateurs composed of students, young graduates, scientists, or organization members (Schenk & Guittard, 2009). Other authors point to network employees (Heer & Bostock, 2010), emphasizing their education and intelligence.

The definitions of crowdsourcing define a particular conceptual framework (Table 1), including the common features of crowdsourcing, i.e.: (1) the crowd (who forms it? what is it supposed to do? what does it get in return?); (2) the initiator (who is it? what does it get in return from the crowd?); (3) the process (the type of process, the way of joining the crowd, the way of mediation between the organization and the crowd).

The indicated levels correspond with the fact that crowdsourcing is a complex and multidimensional concept. Nevertheless, there are discrepancies when it comes to their number. Some authors indicate five levels (Leicht, Durward, Blohm, & Leimeister, 2015): organization, intermediary, system, user, and application and evaluation. Hetmank (2013), in turn, identified four levels, named differently: organization, technology, process, and human-centric. Zhao and Zhu (2014), based on their research, combined the findings of other researchers and indicated three levels of crowdsourcing: organization, participant, and system. These findings are reflected in the work of other authors (Vukovic et al., 2009; Zogaj, Bretschneider, & Leimeister, 2014). The organization level, then, refers to the premises for an organization’s involvement in crowdsourcing (Schenk & Guittard, 2009), the identification of critical success factors, the consequences of involvement (Sims & Crossland, 2010), the possible benefits to be achieved (Stol & Fitzgerald, 2014), and the conditions and barriers of implementation.

Table 1. Crowdsourcing levels

Author/authors: Brabham, Sanchez & Bartholomew, 2009; Chen, 2016; Oomen & Aroyo, 2011; Seltzer & Mahmoudi, 2012; Stiver et al., 2014; Bayus, 2012; Basto, Flavin & Patino, 2010; Dunn & Hedges, 2012; Budhathoki & Haythornthwaite, 2012; Lönn & Uppström, 2013; Sinha, 2008; Crossan & Apaydin, 2010; Schenk & Guittard, 2009; Sharma, 2010; Hsueh et al., 2009; Poetz & Schreier, 2009; Wiggins & Crowston, 2011
Level: Organizational
Thematic scope: Crowd capital, participational management, managing innovations
Analysis level: Participant (motivation, behavior); Organization (acceptance, implementation, coordination, management, quality, evaluation, adaptation to the organization’s mission and goals, establishing collaboration with the crowd, effective use, barriers, success factors, quality of acquired information)

Author/authors: Minner, Holleran, Roberts & Conrad, 2015; Green, 2016; Estermann, 2016
Level: Technical
Thematic scope: Software, technical functions, user interface, user accreditation, user profiles, search history, mechanisms of payment for ideas
Analysis level: System (incentive mechanisms, technology, efficiency, technological problems in designing crowdsourcing systems)

Author/authors: Mergel, 2015; Agapie, Teevan & Monroy-Hernandez, 2015; Hudson-Smith et al., 2009; Schwarz, 2016; Cullina, Conboy & Morgan, 2015; Hiltunen, 2011; Bott & Young, 2012; Aitamurto & Landemore, 2013; Byren, 2013
Level: Process/system
Thematic scope: Structures, typologies, organizational processes, submitting, distributing, and accepting the crowd’s ideas, specifying and dividing the crowd’s tasks, interactions between the organization and the crowd
Analysis level: Participant (motivation, behaviors); Organization (coordination, task type)

Author/authors: Chesbrough & Crowther, 2006; Chesbrough, 2011; Huston & Sakkab, 2006
Level: Individual
Thematic scope: Employee attitudes
Analysis level: Internal resistance to external knowledge

Author/authors: Gregg, 2010; Leimeister, 2010; Brabham, 2008, 2010; Lakhani et al., 2007; Yang et al., 2008
Level: Virtual communities
Thematic scope: Motivation, behaviors
Analysis level: Participant (motivation in different contexts of crowdsourcing, factors impacting motivation); Organization (the crowd as a partner, division of the virtual community)

A question appears at this point about the initiator of crowdsourcing: who is it, and what benefits does it obtain (Burger-Helmchen & Pénin, 2010)? Assuming the participant level means that, owing to the accumulation of knowledge and skills, virtual communities are able to solve problems, design new products and services, aggregate large amounts of data, or collect funds for a given goal. Crowdsourcing at the system level is a socio-technical system which supports interactions and communication between people and the organization. Apart from the technical or IT aspects, the issue arises of how the virtual community integrates its ideas with the organization’s specificity (Steiger, Albuquerque, & Zipf, 2012). Others think that understanding the motivational mechanisms of crowdsourcing (Archak & Sundararajan, 2009; DiPalantino & Vojnovic, 2009; Horton & Chilton, 2010; Wilcox, 2000) may contribute to the greater involvement of virtual communities (Zhao & Zhu, 2012).

RESEARCH METHODS

The review of research on crowdsourcing was conducted based on the results of a systematic literature review. One of the main reasons for using this methodology is the need for methodological rigor, which is essential if the rule of continuity is to be fulfilled. As opposed to traditional literature reviews, a systematic literature review avoids the dangers stemming from subjectivism, the lack of a systematic approach, and bias. According to its methodology, the entire procedure includes three stages: (1) selecting databases and a collection of publications; (2) selecting the publications and developing a database; (3) bibliometric analysis, content analysis, and verification of the usefulness of the obtained results for further research.

The first stage was the choice of the subject of research, which involved specifying the collection of publications to be analyzed. The basis at this point was selecting the databases. The analysis covered the largest full-text databases, which include the majority of journals dealing with strategic management, i.e., Ebsco, Elsevier/Springer, Emerald, ProQuest, Scopus, and ISI Web of Science. In order to establish the state of knowledge and existing findings, a review of the Polish databases BazEkon and CEON was also carried out. They were selected owing to their integrity and completeness. Several databases were used simultaneously because of their diverse coverage and the resources and sources they gather. The principal issue in defining the collection of publications is the choice of keywords connected with the subject of the research, in order to identify scientific articles potentially significant from the point of view of the analyzed problem. In each of the databases mentioned above, the inclusion criterion was the keyword “crowdsourcing” appearing in the abstract, title, or keywords. The base of publications obtained in this way was further analyzed and selected in the next stages. As a result of searching the chosen databases, over 46,000 publications were obtained from the English-language databases and 388 from the Polish-language databases.

The second stage was based on imposing limitations and selecting publications according to the “snowball” procedure. The following limitations were imposed on the identified articles: full text, reviewed publications, and the area of management sciences. Publications related to IT, social, technical, mathematical, and medical sciences, and the humanities were excluded from the collection. Duplicate publications, books, dissertations, and book chapters were eliminated. Full-text articles published in journals and so-called proceedings were included.

The third stage is the basis for identifying areas for further research exploration that are valuable from a cognitive point of view and important for the development of management theory. At this stage, the usefulness of the obtained publications for the realization of the research aims was verified. Publications which did not strictly concern crowdsourcing, but instead treated it as a secondary subject, were discarded. Only those publications whose leading object of analysis was crowdsourcing, with the term placed in the title and keywords, were deemed important from a research point of view. As a result, a literature base was obtained of 54 publications selected from the English-language databases and 41 publications selected from the Polish-language databases. Next, this total of 95 publications was further analyzed using bibliometric techniques, including frequency, the number of publications, and citations. At this stage, a content analysis was also carried out, which established and evaluated the findings of other researchers and organized the research findings. The results of this systematic literature review are presented in the second part of this article.
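To illustrate the selection logic of stages two and three, the screening criteria can be expressed as a short filtering routine. The following is a minimal sketch under assumed record fields (title, keywords, discipline, full_text, peer_reviewed); it is not the tooling actually used in this review.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Disciplines excluded in stage two of the review.
EXCLUDED_DISCIPLINES = {"IT", "social", "technical", "mathematical", "medical", "humanities"}

@dataclass(frozen=True)
class Publication:
    title: str
    keywords: Tuple[str, ...]
    discipline: str
    full_text: bool
    peer_reviewed: bool

def passes_screening(pub: Publication) -> bool:
    """Apply the stage-two and stage-three inclusion criteria."""
    # Stage two: full-text, peer-reviewed work in management sciences.
    if not (pub.full_text and pub.peer_reviewed):
        return False
    if pub.discipline in EXCLUDED_DISCIPLINES:
        return False
    # Stage three: crowdsourcing must be the leading subject, i.e.,
    # present in both the title and the author-supplied keywords.
    return ("crowdsourcing" in pub.title.lower()
            and any("crowdsourcing" in k.lower() for k in pub.keywords))

def screen(pubs: List[Publication]) -> List[Publication]:
    """Deduplicate by title, then keep only publications meeting the criteria."""
    seen, kept = set(), []
    for pub in pubs:
        key = pub.title.lower()
        if key not in seen:
            seen.add(key)
            if passes_screening(pub):
                kept.append(pub)
    return kept
```

Applied to the roughly 46,000 English-language and 388 Polish-language records, filters of this kind reduce the base to the 54 and 41 publications analyzed further.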

ANALYSIS

Interest in the subject of crowdsourcing started in 2006 with the first publication by Howe, entitled “The Rise of Crowdsourcing.” Since then, crowdsourcing has attracted the attention of researchers. Most publications still refer to Howe and to the continuator of his concept, Brabham (2008). J. Howe’s publication deserves to be called a seminal study and, therefore, the leading one, constituting an inspiration for further scientific studies (according to Google Scholar, as of 30.10.2016 it had 3,276 citations). The conducted analysis of the number of publications devoted to crowdsourcing shows that this subject enjoys researchers’ interest. The trend line fitted to the publications contained in the English-language databases has R²=0.668, which indicates a growing tendency in the number of publications. In the case of the Polish-language databases, the trend figure is R²=0.133. This result shows that the number of publications over the last ten years has risen only slightly; it is difficult to consider this result spectacular.
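The quoted trend figures are coefficients of determination (R²) for a straight line fitted to the annual publication counts. A minimal sketch of that computation, assuming the yearly counts have already been tallied from the databases (the counts below are placeholders, not the actual series):

```python
def trend_r_squared(counts):
    """R^2 of an ordinary least-squares line fitted to annual counts.

    `counts` holds one publication count per year, in chronological
    order (here assumed to vary from year to year).
    """
    n = len(counts)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    # OLS slope and intercept from the usual closed-form solution.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R^2 = 1 - SS_residual / SS_total.
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, counts))
    ss_tot = sum((y - mean_y) ** 2 for y in counts)
    return 1 - ss_res / ss_tot

# Hypothetical yearly counts for 2006-2017:
print(round(trend_r_squared([1, 2, 2, 4, 3, 6, 5, 8, 9, 8, 11, 12]), 3))
```

An R² of 0.668 means a rising line fits the English-language series reasonably well, whereas 0.133 means a linear trend explains little of the year-to-year variation in the Polish series.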

Based on a frequency analysis, it was found that most of the publications which qualified for analysis were of a theoretical and review nature (22 publications from the foreign databases; 41 of the 43 analyzed publications from the Polish databases). They were reviews of definitions or of the current state of knowledge on crowdsourcing. The remaining publications were articles presenting the results of original empirical research, in particular case studies or descriptions of events. This statement also concerns the national publications: most of them, apart from a theoretical layer constituting a literature review, included descriptions of good practices or quoted data from the Central Statistical Office of Poland.

It is pointed out in the literature that the measurement of crowdsourcing is a great challenge for the researcher (Cullina et al., 2015) – some authors count this particular term among the most difficult to measure (Hirth et al., 2015). Despite the difficulties, it is an important and significant issue. As Afuah and Tucci (2012) ascertain, studying crowdsourcing is promising and may be a source of theoretical, empirical, and scientific knowledge. Measurement will enable organizations to obtain various advantages (Malone, Laubacher, & Dellarocas, 2010) but, above all, it will contribute to understanding this phenomenon (Wilson, 2015). However, despite the recommendations and the need for research in this regard, the existing research output displays a lack of conceptual coherence, an insufficiently holistic perspective on crowdsourcing (which hinders comparisons), a diversity of methodological approaches, and inaccuracy and inadequacy in the measurement methods used. In the author’s opinion, identifying these inaccuracies and limitations should be one of the key guidelines taken into account when formulating methodological indications and recommendations for future research projects.

The first problem is the nature of the studied phenomenon itself. The differences between authors result from a number of difficulties with the adopted definitions and conceptualization. In addition, a significant part of the existing measurement proposals does not refer to the notion’s conceptualization at all: they are limited to the choice of variables, without reflecting the notion’s multidimensionality or ambiguous nature. The importance of the process of arriving at an understanding of the term – identifying and defining how a given term is understood – should be emphasized in the research procedure (Babbie, 2008). Conceptualization, which is discussed here, is a process of agreeing on the meaning of terms. Its result is giving meaning to a term by means of indicators, that is, marks of the presence of the studied notion. It constitutes the necessary condition for subsequent operationalization and for implementing empirical research on crowdsourcing. The most often quoted paper related to crowdsourcing defines it as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively) but is also often undertaken by sole individuals” (Howe, 2008). In his work, Howe points out the affinity between outsourcing, peer-production, and crowdsourcing.

Nonetheless, Howe’s approach seems to have some constraints. The author indicates the similarity of crowdsourcing to outsourcing and peer-production. In outsourcing, a supplier selected by the organization carries out specific actions in accordance with requirements and an agreement. Peer-production assumes the decentralization of tasks, a large dispersion of the team, an independent choice of tasks based on a self-assessment of skills and interests, and treating the products or services being created as common goods available to a wider circle of recipients. In crowdsourcing, by contrast, we are dealing with a crowd that is difficult to specify or define, and concluding agreements may prove impossible. Crowdsourcing should therefore be considered a broader term: the crowd can also focus its actions on other activities. In the author’s opinion, Howe’s concept may be assumed as the principal definition, but at the same time the connection of crowdsourcing with outsourcing and peer-production should be rejected.

The next issue is the need for a holistic look at crowdsourcing – this is required by the high complexity of the term. What is more, methodological rigor requires development according to the rule of continuity, namely taking previous studies into consideration. Research on crowdsourcing is not made easier by its multidimensionality and many-sidedness (Cullina et al., 2015). Zhao and Zhu (2014), based on a review of 55 articles from the years 2006-2011 devoted to crowdsourcing, ascertain that future research should take into account three perspectives or levels: organization (acceptance, implementation, management, quality, evaluation), technology (incentive mechanisms, technological issues), and participant (crowd motivation, organization employees’ behaviors). However, in the existing research, a simplified approach has been used, limited to one level selected by the researchers. Among the thirty-two research projects (foreign databases) that refer to crowdsourcing, twenty-five take into account the participant level, especially virtual communities; five the technological level; and two the organizational one. In the case of domestic databases, two research projects consider the virtual community level. Such limitations omit the holistic view of crowdsourcing and even the relationships between the levels. The organizational level cannot exist without the technological level. In turn, the participant level may be a result of the organizational level. Instead of studying each level separately, future research should expand the scope of study by introducing new measurement scales as well as new mediation and moderation variables, as illustrated below. Only on this basis may one conduct a detailed study of the elements and their mutual relations.
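To illustrate what such variables add (a hypothetical specification, not one proposed in the reviewed studies), a moderation hypothesis linking two levels could be written as a regression with an interaction term:

Y = β0 + β1·X + β2·M + β3·(X × M) + ε

where Y is, for instance, a crowdsourcing outcome at the organizational level, X a technology-level predictor, and M a participant-level moderator such as crowd motivation; a significant β3 would mean that the effect of X on Y depends on M. Mediation would instead decompose the effect of X on Y into a direct path and an indirect path running through an intervening variable.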

The second problem concerns the methodological approach used by researchers. The existing research on crowdsourcing is conducted according to constructivist assumptions. Constructivism assumes that the nature of social reality is subjective and that it exists only by virtue of agreement between people. From this point of view, reality is changeable rather than durable. Organizational cognition results from the constantly occurring activity of the organization’s members in the reality which surrounds them. Organizations should be treated as systems of knowledge composed of the knowledge of their members and the social interactions between them (Charreire-Petit & Huault, 2008). Such an approach decreases the coherence of the empirical research conducted in the literature and reveals a peculiar research gap, consisting in the lack of an overall view of crowdsourcing.

The third problem is the research tool. There is still no unequivocal standpoint on the method of measuring crowdsourcing (Exel, Dias, & Fruijtier, 2011). The existing measurement tools differ from study to study, and researchers tend either to develop their own instruments or to adapt existing ones. Scientists have attempted to study crowdsourcing through examples of good practices and case studies (Brabham, 2008; Leimeister et al., 2009; Huang, White, & Dumais, 2011; Jain et al., 2011; Hutter, Hautz, Füller, Mueller, & Matzler, 2011; Yang, Ackerman, & Adamic, 2011; Zheng et al., 2011; Hung, Lai, & Cho, 2014; Munro, 2012; Rotman, 2012; Mason & Suri, 2012; Shao, Shi, Xu, & Liu, 2012; Sun, Wang, Yin, & Che, 2012; Tokarchuk, Cuel, & Zamarian, 2012; Mortara, Ford, & Jaeger, 2013). In addition, experiments were conducted (Horton, 2011; Blohm, Leimeister, & Krcmar, 2013; Morris, Dontcheva, & Gerber, 2012; Franke, Lettl, Roiser, & Tuertscher, 2013; Kazai, Kamps, & Milic-Frayling, 2013). Researchers also used survey questionnaires and interviews (Sun et al., 2012). Nevertheless, most papers were conceptual publications. The empirical ones focused on task design (Zheng et al., 2011), motivation problems (Leimeister et al., 2009; Frey, Lüthje, & Haag, 2011; Morris et al., 2012; Zheng et al., 2011; Zhao & Zhu, 2014), systems (Satzger et al., 2013; Hong & Pavlou, 2012), task coordination (Skopik, Schall, & Dustdar, 2010; Schall, 2012; Satzger et al., 2013), and control of work quality and results (Satzger et al., 2013; Müller, 2010; Ryu & Lease, 2011; Xu et al., 2012).

As a result, one observes a multitude of approaches, each with its limitations. In order to indicate them, two purposefully chosen articles were analyzed: one representing quantitative studies and the other qualitative studies. In the quantitative approach, researchers search for a model of statistically proven relationships between variables. An example is the research on crowdsourcing conducted by Yejun Xu, Enrique Ribeiro-Soriano, and Gonzalez-Garcia in 2013. The research sample included biotechnology and telecommunications enterprises operating in the Chinese market. A questionnaire survey composed of 8 questions was conducted, with a sample of 393 enterprises (201 from the biotechnology industry and 192 from the telecommunications market). The hypotheses studied concerned the relationships between crowdsourcing and innovative competencies, key competencies in J. Schumpeter’s sense, and the continuous improvement of competences. Given the lack of earlier research studies, and by the same token of measurement tools, the authors used the Delphi technique. They invited 24 experts: company managers, biotechnology and telecommunications industry specialists, and professors who deal with research on crowdsourcing. The experts received a proposed survey questionnaire composed of 14 items, which were finally reduced to 8. A five-point Likert-type scale was used, from 1 – “much worse” to 5 – “much better.” The obtained results confirm the relationships between crowdsourcing and the dependent variables distinguished in the study. These empirical results have their limitations. Firstly, the test sample includes just two industries in China and is therefore potentially affected by factors specific to these types of entities; in addition, the markets considered are ones in which expenditure on research and development is very high. Secondly, the respondents were top-level managers responsible for the whole enterprise, which could have skewed their perception of crowdsourcing toward innovation or cost optimization. Thirdly, limiting the measurement of crowdsourcing to 8 items may constitute another limitation: asking respondents whether they possess a crowdsourcing platform and whether they have security systems protecting against data leakage seems inadequate. Fourthly, it needs to be borne in mind that the quantitative approach carries specific threats: among others, it does not reveal the best combinations and the most effective strategies, and hypothesis testing searches for existing models of dependencies and formulates their practical implications only to a limited extent.
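To make this type of design concrete, relationships between averaged Likert items are typically tested with correlation or regression across the sampled firms. The following is a minimal sketch with hypothetical per-firm scores; it does not reproduce the actual instrument or estimation used by Xu et al.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between two equally long lists of scores."""
    mx, my = mean(xs), mean(ys)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical data: per-firm means of the 8 crowdsourcing items and of
# the innovative-competence items, each on the 1-5 Likert scale.
crowdsourcing_scores = [3.4, 2.1, 4.0, 3.8, 2.9, 4.4, 3.1]
innovation_scores = [3.6, 2.5, 4.2, 3.5, 3.0, 4.1, 3.3]

print(f"r = {pearson_r(crowdsourcing_scores, innovation_scores):.2f}")
```

A positive correlation of this kind is the sort of statistical relationship between crowdsourcing and a dependent variable that such designs test.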

It is pointed out in the literature that a case study may be used for identifying motivation in crowdsourcing, complementing internal competencies, acquiring ideas, solving problems, assessing the impact on business models, obtaining benefits for the organization and its clients, knowledge production, and collaboration with various entities (Flyvbjerg, 2004; Sarker, Xiao, & Beaulieu, 2013; Yin, 2013). One example is the study by Schlagwein and Bjørn-Andersen conducted at LEGO, covering the LEGO Cuusoo platform. In the years 2010-2014, a total of 19 in-depth interviews were conducted with managers, along with 25 informal personal or online discussions with internal and external stakeholders. The study aimed to identify the importance of crowdsourcing for organizational learning. The authors ascertained that the need to explain research issues and complex cause-and-effect relations, the researcher’s interest in a contemporary phenomenon and its context, the blurred borders between the two, the lack of a possibility to influence them, and the need to evaluate the studied phenomena all support the choice of a case study as the research method. The case study may thus constitute an answer to the arising problems of measuring crowdsourcing: the early stage of knowledge development, the need to identify the phenomenon in a given context, unclear borders between the phenomenon and its context, developing the existing theory, explaining phenomena that have not yet been identified, analyzing organizational behaviors, and testing theory and understanding the circumstances of events and processes without manipulating their course. Moreover, the case study is a useful research method when testing hypotheses, particularly hypotheses that suppose the existence of necessary and sufficient conditions. This means an orientation toward preparing the actions of the decision-maker and studying the issues connected with the context of a given phenomenon and the behaviors of the people participating in it.

In conclusion, a synthesis of the above considerations and the identified methodological challenges enables the formulation of some methodological guidelines for future research on crowdsourcing:

  1. The measurement tool should cover all three crowdsourcing levels, i.e., organizational, technological, and participant.
  2. A combined quantitative-qualitative approach may make it possible to achieve both theory-testing and theory-building goals. For instance, a multiple case study would respond to the recommendations of other researchers identified in the systematic literature review. The quantitative-qualitative approach is recommended by Brabham and will enable the expansion of knowledge on crowdsourcing.
  3. Research should be conducted taking into account the constructivist paradigm. Such an approach increases the coherence of the empirical studies conducted in the literature and fills a specific research gap: the lack of a comprehensive view of crowdsourcing.

CONCLUSION

Based on the conducted systematic literature review a few methodological guidelines for future studies on crowdsourcing may be proposed.

Firstly, crowdsourcing is a relatively new concept. The focus on theoretical considerations means that, in both theoretical and practical respects, there is still “terminological chaos,” that theoretical approaches still strongly predominate, and that many areas remain completely untouched or poorly clarified in the literature. Most domestic publications focused on an analysis of best practices, also showing the benefits coming from crowdsourcing and the possibilities of its use.

Secondly, the crowdsourcing analysis presented in this paper points to convictions generally accepted in the literature about the impact of crowdsourcing on innovativeness or competences. Nevertheless, despite the recommendations included in the subject literature, crowdsourcing is not formulated holistically. The empirical study of these relations focuses only on individual levels: organization, technology, and participant. Thus, the identified relations hold only for the one chosen level. An answer to these problems may be a combined quantitative and qualitative approach: on the one hand, it will enable the achievement of theory-testing goals and, on the other, theory-building ones.

Thirdly, the multitude of crowdsourcing definitions and interpretations does not facilitate the development of adequate measurement tools. Nevertheless, there is a need to analyze the existing methods of measuring crowdsourcing, to eliminate the limitations and inaccuracies of the present measurement methodologies, and, on this basis, to develop an appropriate measurement method that takes into account the goals of other researchers’ work. This is important because a proper definition and, subsequently, operationalization constitute the basis for conducting a proper measurement of this interesting, though difficult, concept.

Acknowledgments

This project was financed from funds provided by the National Science Centre, Poland awarded on the basis of decision number DEC-2016/21/D/HS4/01791.

References

Afuah, A., & Tucci, C. L. (2013). Value capture and crowdsourcing. Academy of Management Review, 38(3), 457-460.

Aitamurto, T., Leiponen, A., & Tee, R. (2011). The promise of idea crowdsourcing: Benefits, contexts, limitations. (White paper June 2011). Retrieved from https://www.researchgate.net/profile/Tanja_Aitamurto/publication/257926136_The_Promise_of_Idea_Crowdsourcing-Benefits_Contexts_Limitations/links/5698979208ae34f3cf1f5c58.pdf

Archak, N., & Sundararajan, A. (2009). Optimal design of crowdsourcing contests. Proceedings of the Thirtieth International Conference on Information Systems, Phoenix. Retrieved from https://pdfs.semanticscholar.org/7d3f/aa010f20beccee99a043ebc60595e8a48bc9.pdf

Blohm, I., Leimeister, J. M., & Krcmar, H. (2013). Crowdsourcing: How to benefit from (too) many great ideas. MIS Quarterly Executive, 12(4), 199-211.

Brabham, D. C. (2008). Crowdsourcing as a model for problem solving: An introduction and cases, convergence. The International Journal of Research into New Media Technologies, 14(1), 75-90.

Burger-Helmchen, T., & Pénin, J. (2010). The limits of crowdsourcing inventive activities: What do transaction cost theory and the evolutionary theories of the firm teach us? Workshop on Open Source Innovation. France: Strasbourg. Retrieved from http://www.academia.edu/download/5874711/tbh_jp_crowdsouring_2010_eng.pdf

Chanal, V., & Caron-Fasan, M. L. (2008). How to invent a new business model based on crowdsourcing: The Crowdspirit ® case. Conférence de l’Association Internationale de Management Stratégique, May 2008, Sophia-Antipolis. Retrieved from https://hal.archives-ouvertes.fr/halshs-00486794/

Charreire-Petit, S., & Huault, I. (2008). From practice-based knowledge to the practice of research: Revisiting constructivist research works on knowledge. Management Learning, 39(1), 73-91.

Colombo, G., Buganza, T., Klanner, I. M., & Roiser, S. (2013). Crowdsourcing intermediaries and problem typologies: An explorative study. International Journal of Innovation Management, 17(2), 1-24.

Cullina, E., Conboy, K., & Morgan, L. (2015). Measuring the crowd – A preliminary taxonomy of crowdsourcing metrics. Proceedings of the 11th International Symposium on Open Collaboration, OpenSym, ACM. Retrieved from http://www.opensym.org/os2015/proceedings-files/p200-cullina.pdf

Czakon, W. (2006). Łabędzie Poppera – case studies w badaniach nauk o zarządzaniu. Przegląd Organizacji, 9, 9-13.

Czakon, W. (2011). Podstawy Metodologii Badań w Naukach o Zarządzaniu. Warszawa: Wolters Kluwer.

DiPalantino, D., & Vojnovic, M. (2009). Crowdsourcing and all-pay auctions. Proceedings of the 10th ACM International Conference on Electronic Commerce. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.483.9230&rep=rep1&type=pdf

Dul, J., & Hak, T. (2008). Case Study Research Methodology in Business Research. Oxford: Butterworth-Heinemann.

Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from case studies: Opportunities and challenge. Academy of Management Journal, 50(1), 25-32.

Flyvbjerg, B. (2004). Five misunderstandings about case-study research, In C. Seale, G. Gobo, J. F. Gubrium, & D. Silverman (Eds.), Qualitative Research Practice. CA: Sage Thousand Oaks.

Franke, N., Lettl, C., Roiser, S., & Tuertscher, P. (2014). Does God play dice? Randomness vs. deterministic explanations of idea originality in crowdsourcing. Proceedings of the 35th DRUID Celebration Conference. Retrieved from https://journals.aom.org/doi/abs/10.5465/ambpp.2014.235

Frey, K., Lüthje, C., & Haag, S. (2011). Whom should firms attract to open innovation platforms? The role of knowledge diversity and motivation. Long Range Planning, 44(5), 397-420.

Gloor, P. A., & Cooper, M. S. (2007). The new principles of a swarm business. MIT Sloan Management Review, 48(3), 81-84.

Halder, B. (2014). Evolution of crowdsourcing: Potential data protection, privacy and security concerns under the new Media Age. Revista Democracia Digital e Governo Eletronico, 1(10), 377–393.

Heer, J., & Bostock, M. (2010). Crowdsourcing graphical perception: Using mechanical turk to assess visualization design. Proceedings of ACM Conference on Human Factors in Computing Systems. Retrieved from http://www.cs.kent.edu/~javed/class-P2P12F/papers-2012/PAPER2012-2010-MTurk-CHI.pdf

Hetmank, L. (2013). Components and functions of crowdsourcing systems - A systematic literature review. Proceedings of the 11th International Conference on Wirtschaftsinformatik. Retrieved from https://pdfs.semanticscholar.org/3ba7/609d5d6e2794f648228515f4739e9d1a3622.pdf

Hong, Y., & Pavlou, P. A. (2014). Product fit uncertainty in online markets: Nature, effects, and antecedents. Information Systems Research, 25(2), 328-344.

Horton, J., & Chilton, L. (2010). The labor economics of paid crowdsourcing. Proceedings of the 11th ACM conference on Electronic commerce (pp. 209-218). Retrieved from https://arxiv.org/pdf/1001.0627

Horton, J. J. (2011). The condition of the Turking class: Are online employers fair and honest? Economics Letters, 111(1), 10-12.

Howe, J. (2006). The rise of crowdsourcing. Wired Magazine, 14(6), 1-4.

Howe, J. (2008). Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. New York: Three Rivers Press.

Huang, J., White, R. W., & Dumais, S. (2011). No clicks, no problem: Using cursor movements to understand and improve search. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1225-1234). Retrieved from https://dl.acm.org/citation.cfm?id=1979125

Hutter, K., Hautz, J., Füller, J., Mueller, J., & Matzler, K. (2011). Communication: The tension between competition and collaboration in community-based design contests. Creativity and Innovation Management, 20(1), 3-21.

Jain, R. (2010). Investigation of governance mechanisms for crowdsourcing initiatives. AMCIS 2010 Proceedings, Paper 557. Retrieved from http://www.virtual-communities.net/mediawiki/images/f/fd/Jain.pdf

Kazai, G., Kamps, J., & Milic-Frayling, N. (2011). Worker types and personality traits in crowdsourcing relevance labels. Proceedings of the 20th ACM International Conference on Information and Knowledge Management (pp. 1941-1944). Retrieved from https://dl.acm.org/citation.cfm?id=2063860

Kleemann, F., Voß, G. G., & Rieder, K. (2008). Un(der)paid innovators: The commercial utilization of consumer work through crowdsourcing. Science, Technology & Innovation Studies, 4(1), 5-26.

Lai, H.-M., & Chen, T. T. (2014). Knowledge sharing in interest online communities: A comparison of posters and lurkers. Computers in Human Behavior, 35, 295-306.

Leicht, N., Durward, D., Blohm, I., & Leimeister, J. M. (2015). Crowdsourcing in software development: A state-of-the-art analysis. Proceedings of 8th Bled eConference. Retrieved from https://domino.fov.uni-mb.si/proceedings.nsf/Proceedings/B31112FAB95D7A1FC1257E5B004BDC42/$File/2_Leicht.pdf

Leimeister, J. M., & Zogaj, S. (2013). Neue Arbeitsorganization durch Crowdsourcing. Eine Literaturstudie. Düsseldorf: Hans-Böckler-Stiftung.

Leimeister, J. M., Huber, M., Bretschneider, U., & Krcmar, H. (2009). Leveraging crowdsourcing: Activation-supporting components for IT-based ideas competition. Journal of Management Information Systems, 26(1), 197-224.

Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon’s mechanical turk. Behavior Research Methods, 44(1), 1-23.

Morris, R. R., Dontcheva, M., & Gerber, E. M. (2012). Priming for better performance in microtask crowdsourcing environments. Internet Computing, 16(5), 13-19.

Mortara, L., Ford, S. J., & Jaeger, M. (2013). Idea competitions under scrutiny: Acquisition, intelligence or public relations mechanism? Technological Forecasting and Social Change, 80(8), 1563-1578.

Munro, R. (2012). Crowdsourcing and the crisis-affected community: Lessons learned and looking forward from Mission 4636. Journal of Information Retrieval, 16(2), 210-266.

Pizło, W. (2009). Studium przypadku jako metoda badawcza w naukach ekonomicznych. Roczniki Naukowe Stowarzyszenia Ekonomistów Rolnictwa i Agrobiznesu, 11(5), 246-251.

Quinn, A. J., & Bederson, B. B. (2011). Human computation: A survey and taxonomy of a growing field. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘11) (pp. 1403-1412). Retrieved from http://crowdsourcing-class.org/readings/downloads/intro/QuinnAndBederson.pdf

Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., & Jacobs, D. (2012). Dynamic changes in motivation in collaborative citizen-science projects. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, 217. Retrieved from http://www.cs.umd.edu/hcil/trs/2011-28/2011-28.pdf

Ryu, H., & Lease, M. (2011). Crowdworker filtering with support vector machine. Proceedings of the American Society for Information Science and Technology, 48(1), 1-4.

Sarker, S., Xiao, X., & Beaulieu, T. (2013). Qualitative studies in information systems: A critical review and some guiding principles. MIS Quarterly, 37(4), 3-18.

Satzger, B., Psaier, H., Schall, D., & Dustdar, S. (2013). Auction-based crowdsourcing supporting skill management. Information Systems, 38(4), 547-560.

Schall, D. (2012). Service-Oriented Crowdsourcing - Architecture, Protocols and Algorithms. Springer Briefs in Computer Science. New York: Springer.

Schenk, E., & Guittard, C. (2009). Crowdsourcing: What can be outsourced to the crowd, and why? Journal of Innovation Economics, 1(7), 93-107.

Schlagwein, D., & Bjørn-Andersen, N. (2014). Organizational learning with crowdsourcing: The revelatory case of LEGO. Journal of the Association for Information Systems, 15(11), 754-778.

Shao, B., Shi, L., Xu, B., & Liu, L. (2012). Factors affecting participation of solvers in crowdsourcing: An empirical study from China. Electronic Markets, 22(2), 73-82.

Sims, J., & Crossland, C. (2010). Partners or Pariahs? Firm engagement with open innovation communities. Academy of Management Annual Meeting (AOMM’10). Canada: Montreal.

Skopik, F., Schall, D., & Dustdar, S. (2010). Modeling and mining of dynamic trust in complex service-oriented systems. Information Systems, 35(7), 735-757.

Sloane, P. (2011). A Guide to Open Innovation and Crowdsourcing: Advice from Leading Experts. UK: Kogan Page Publishers.

Stake, R. (1995). The Art of Case Study Research. CA: Sage, Thousand Oaks.

Steiger, E., Albuquerque, J. P., & Zipf, A. (2015). Twitter as a location-based social network – An advanced systematic literature review on spatiotemporal analyses of Twitter data. Transactions in GIS, 19(6), 809-834.

Stol, K., & Fitzgerald, B. (2014). Two’s company, three’s a crowd: A case study of crowdsourcing software development. Proceedings of the 36th International Conference on Software Engineering (pp. 187-198). Retrieved from https://ulir.ul.ie/bitstream/handle/10344/3982/fitzgerald_2014_company.pdf?sequence=2

Sun, Y., Wang, N., Yin, C. X., & Che, T. (2012). Investigating the non-linear relationships in the expectancy theory: The case of crowdsourcing marketplace. AMCIS Proceedings, Paper 6. Retrieved from https://pdfs.semanticscholar.org/655c/493805a2bfe83b63657780f18ed91f4397e9.pdf

Surowiecki, J. (2004). The Wisdom of Crowds: Why the Many are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations. New York: Doubleday.

Tapscott, D., & Williams, A. D. (2007). Wikinomics: How Mass Collaboration Changes Everything. New York: Portfolio, Penguin.

Tokarchuk, O., Cuel, R., & Zamarian, M. (2012). Analyzing crowd labor and designing incentives for humans in the loop. Internet Computing, IEEE, 16, 45-51.

Vukovic, M., Mariana, L., & Laredo, J. (2009). People cloud for the globally integrated enterprise. In: D. Asit, F. Gittler, & F. Tourmani (Eds.), Service-Oriented Computing. Berlin: Springer.

West, J., Salter, A., Vanhaverbeke, W., & Chesbrough, H. (2014). Open innovation: The next decade. Research Policy, 43(5), 805-811.

Wexler, M. N. (2011). Reconfiguring the sociology of the crowd: Exploring crowdsourcing. International Journal of Sociology and Social Policy, 31(1/2), 6-20.

Whitla, P. (2009). Crowdsourcing and its application in marketing activities. Contemporary Management Research, 5(1), 15-28.

Wilcox, R.T. (2000). Experts and amateurs: The role of experience in Internet auctions. Marketing Letters, 11(4), 363-374.

Xu, Y., Ribeiro-Soriano, E. D., & Gonzalez-Garcia, J. (2015). Crowdsourcing, innovation and firm performance. Management Decision, 53(6), 1158-1169.

Yang, J., Ackerman, M. S., & Adamic, L. A. (2011). Virtual gifts and guanxi: Supporting social exchange in a Chinese online community. Proceeding of the ACM Conference on Computer Supported Cooperative Work (pp. 45-54). Retrieved from https://socialworldsresearch.org/sites/default/files/Yang-VirtualPoints-CSCW11.final_.pdf

Yang, J., Adamic, L.A., & Ackerman, M.S. (2008). Crowdsourcing and knowledge sharing: Strategic user behavior on taskcn. Proceedings of ACM Electronic Commerce (pp. 246-255). Retrieved from http://web.eecs.umich.edu/~ackerm/pub/08b49/yang-witkey-ec.final.pdf

Yin, R. K. (2014). Case Study Research: Design and Methods (5th ed.). Thousand Oaks, CA, USA: Sage Publications.

Zhao, Y., & Zhu, Q. (2014). Evaluation on crowdsourcing research: Current status and future direction. Information Systems Frontiers, 16(3), 417-434.

Zheng, H., Li, D., & Hou, W. (2011). Task design, motivation, and participation in crowdsourcing contests. International Journal of Electronic Commerce, 15(4), 57-88.

Zogaj, S., Bretschneider, U., & Leimeister, J. M. (2014). Managing crowdsourced software testing: A case study based insight on the challenges of a crowdsourcing intermediary. Journal of Business Economics, 84(3), 375-405.

Abstrakt

Crowdsourcing is a relatively new concept and, despite researchers’ interest, still little is known about it. At the same time, difficulties of a cognitive and practical nature are observed. This became a premise for reflecting on the methodology of research on this concept. The subject of the article is the identification of the existing procedures for studying crowdsourcing, with particular attention to the methodological challenges that researchers of this concept may face. The article is based on a systematic literature review, whose results made it possible to formulate some methodological guidelines for further research. Research should be conducted taking into account three levels of crowdsourcing: organization, technology, and participation. Additionally, a quantitative-qualitative approach will allow knowledge about crowdsourcing to be expanded.

Słowa kluczowe: crowdsourcing, methodology, research procedure, research methods

Biographical note

Regina Lenart-Gansiniec, Ph.D., is an Assistant Professor at Jagiellonian University, Institute of Public Affairs, and an expert in open innovation, knowledge management, clusters, and public management for the Ministry of Economic Development (Poland) and the Ministry of Economy (Poland). Her research interests include open innovation, crowdsourcing, knowledge management, and organizational learning in public organizations. She is an author of publications on knowledge management, crowdsourcing, and open innovation, and has participated in several research projects.