Gartner’s hype cycle and citizen science

Google Trends ‘Citizen Science’ (July 2013)

The term ‘Citizen Science’ is clearly gaining recognition and use. It is now mentioned in radio and television broadcasts, on social media channels, and at conferences and workshops. Some of the clearer signs of the growing attention include discussion of citizen science in policy-oriented conferences such as UNESCO’s World Summit on Information Society (WSIS+10) review meeting discussion papers (see page ), the Eye on Earth users conference (see the talks here), and the launch of the European Citizen Science Association at the recent EU Green Week conference.

There are more academic conferences and publications covering citizen science, a Google Plus community dedicated to citizen science with 1,400 members, a clear upward trend in Google searches, and so on.

Another aspect of the expanding world of citizen science is the emerging questions, from those who are involved in such projects or who study them, about the efficacy of the term. As is very common with general terms, some reflections on the accuracy of the term are coming to the fore: Rick Bonney and colleagues suggest using ‘Public Participation in Scientific Research‘ (significantly, Bonney was the first to use ‘Citizen Science’ in 1995); Francois Grey coined ‘Citizen Cyberscience’ to describe projects that depend on the Internet; recently Chris Lintott discussed some doubts about the term in the context of Zooniverse; and Katherine Mathieson asks whether Citizen Science is just a passing fad. In our own group, there are also questions about the correct terminology, with Cindy Regalado suggesting a focus on ‘Publicly Initiated Scientific Research (PIScR)‘, and discussion on the meaning of ‘Extreme Citizen Science‘.

Gartner Hype Cycle

One way to explore what is going on is to consider the evolution of the ‘hype’ around citizen science through Gartner’s Hype Cycle, which can be seen as a way to think about how technologies are adopted in a world of rapid communication and inflated expectations. Leaving aside Gartner’s own hype, the story that the model tells is that once a new approach (technology) emerges – because it has become possible, or because someone has reconfigured existing elements and claimed that it’s a new thing (e.g. Web 2.0) – it goes through rapid growth in attention and publicity. This continues until it reaches the ‘peak of inflated expectations’, where the expectations of the technology are unrealistic (e.g. that it will revolutionise the way we use our fridges). A slump must follow, as more and more failures come to light and the promises are not fulfilled. At this stage, the disillusionment is so deep that even the useful aspects of the technology are forgotten. However, if it survives this stage, then after the realisation of what is actually possible, the technology is integrated into everyday life and practices and is used productively.

So does the hype cycle apply to citizen science?

If we look at Gartner’s cycle from last September, crowdsourcing is near the ‘peak of inflated expectations’, and some descriptions of citizen science as scientific crowdsourcing clearly match the same mindset.

Gartner 2012 Hype Cycle

There is growing evidence of academic researchers entering citizen science out of opportunism, without paying attention to the commitment and work that is required to carry out such projects. With some, it seems they decided they could join in because someone around them knows how to make a smartphone app or a website that will work like Galaxy Zoo (failing to notice all the social aspects that Arfon Smith highlights in his talks). When you look at the emerging projects, you can start guessing which will succeed or fail from the expertise and approach of the people behind them.

Another cause for concern is the expectation, which I have noticed in the more policy-oriented events, that citizen science can solve all sorts of issues – from raising awareness to behaviour change with limited professional involvement – or that it will reduce the resources needed for activities such as environmental monitoring, without an understanding that significant sustained investment is required: a community coordinator, technical support and other aspects are needed here just as much. This concern is heightened by statements that promote citizen science as a mechanism to reduce the costs of research, creating a source of free labour, etc.

On the other hand, it can be argued that the hype cycle doesn’t apply to citizen science because of its history. Citizen science has existed for many years, as Caren Cooper describes in her blog posts. Therefore, conceptualising it as a new technology is wrong, as there are already mechanisms, practices and institutions to support it.

In addition, and unlike the technologies on Gartner’s chart, the academic projects within which citizen science happens benefit from access to what is sometimes termed patient capital, without expectations of quick returns on investment. Even with the increasing expectations of research funding bodies for explanations of how the research will lead to an impact on wider society, they do not expect the impact to be immediate (5–10 years is usually fine), and funding comes in chunks that cover 3–5 years, which provides the breathing space to overcome the ‘trough of disillusionment’ that is likely to hit the technology sector regarding crowdsourcing.

And yet, I would guess that citizen science will suffer some examples of disillusionment from badly designed and executed projects. To get these projects right you need a combination of domain knowledge in the specific scientific discipline, science communication to tell the story in an accessible way, technical ability to build mobile and web infrastructure, understanding of user interaction and user experience to build engaging interfaces, and community management ability to nurture and develop your communities – and we can add further skills to the list (e.g. if you want gamification elements, you need experts in games, not amateurish attempts). In short, it needs to be taken seriously, with careful consideration and design. This is not a call for gatekeepers, but rather a recognition that the successful projects and groups are saying similar things.

Which brings us back to the issue of the definition of citizen science and its terminology. I have been following terminology arguments in my own discipline for over 20 years. I have seen people arguing about a data storage format for GIS and whether it should be raster or vector (answer: it doesn’t matter). Or arguing whether GIS is a tool or a science. Or, unhappy with ‘Geographic Information Science’, resolutely calling it geoinformation, geoinformatics, etc. Even in the minute sub-discipline that deals with participation and computerised maps, there are arguments about Public Participation GIS (PPGIS) versus Participatory GIS (PGIS). Most recently, we have been debating the right term for mass contribution of geographic information: volunteered geographic information (VGI), crowdsourced geographic information, or user-generated geographic information.

It’s not that terminology and precision in definition are not useful – on the contrary. However, I’ve noticed that in most cases the more inclusive and, importantly, vague, broad-church definition wins the day. Broad terminologies, especially when they are evocative (such as citizen science), are especially powerful. They convey a good message and are therefore useful. As long as we don’t try to force a canonical definition, and allow people to decide what they include in the term and to express clearly why what they are doing falls within citizen science, it should be fine. Some broad principles are useful and will help all those who are committed to working in this area to sail through the hype cycle safely.