Citizen Science in the Research Excellence Framework impact studies

In the UK, every five years or so, there is a complex and expensive process that evaluates the work of academics in research institutions across the country and rates them in terms of quality (see the infographics). The last round of this process was called the ‘Research Excellence Framework’, or REF for short. You don’t need to look far to find complaints about it, the measures that are used, the methodology, and so on. I think that a lot of this criticism is justified, but this post is not about the process of the REF, but about the outcomes.

The REF required universities to demonstrate their wider societal impact – beyond teaching, publishing academic papers, or sharing research results. Societal impact covers many aspects: while academics and evaluators are fixated on economic outcomes, impacts also include influencing policy, health and wellbeing, and engaging the public in scientific research. Writing impact case studies was a major task for the academics who were selected to write them (about 1 in 10), and universities invested money and effort in picking out the best examples that they could find. When looking at these examples, we need to remember that they were submitted in 2013, so they cover the work done by universities up to that point.

According to a study that looked at these impact descriptions, out of the 6,975 cases, 447 (6.4%) are classified as ‘public engagement’ of all forms (e.g. a lecture). Searching the full database of impact case studies, about 731 use the term ‘public engagement’, 260 use the term ‘participatory’, about 60 include ‘public participation’, and 33 include ‘citizen science’, with a few more that do not use the term but are about it. While this is a tiny group (0.5%), it is still interesting to see what projects are included.
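For clarity, here is the arithmetic behind the percentages above (my own calculation from the quoted counts, against the 6,975 total):

\[
\frac{447}{6975} \approx 6.4\%, \qquad \frac{33}{6975} \approx 0.47\% \approx 0.5\%
\]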

It is not surprising to find that ecological projects such as Conker Tree Science, invasive species and the ladybird survey, The Black Squirrel Project, or observing ants and spiders, grassland fungi, stag beetles, birds, and amphibians were included. As expected, the Open Air Laboratories (OPAL) project is noted by the Universities of Central Lancashire, Birmingham, UCL, and the Open University (but is surprisingly missing from the university that coordinated the effort – Imperial College). There are also apps such as the WildKnowledge recording app or the iSpot app.

Related environmental projects include monitoring peatland erosion or community volcanology. Community archaeology and involvement in archaeological excavations can also be considered outdoor activities.

Volunteer thinking in the form of Zooniverse appeared several times, from the Universities of Oxford, Portsmouth, and Sussex, while volunteer computing in the form of ClimatePrediction.net is noted by two departments of the University of Oxford (physics and computing). There are other astronomy projects, such as Aurora Watch or Gravitational Waves.

Other examples include our participatory mapping activities, while UCL Anthropology highlighted indigenous mapping activities; DIY biology and DNA testing are also mentioned, and there are even projects in the digital humanities – the Oxyrhynchus papyri or The Reading Experience Database.

What can we make out of this? I’d like to suggest a few points:

The 30 or so projects that are described in the case studies offer a good overview of the areas where citizen science is active – ecology, volunteer thinking, and volunteer computing. The traditional areas in which public participation in science never stopped – astronomy, archaeology, or nature observation – are well represented. The major citizen science projects (OPAL, Zooniverse) also appear and, as expected, they are ‘claimed’ by more than one unit or university.

More specialised citizen science, such as participatory mapping, digital humanities, or DIY biology, is not missing either.

On the downside, this is a very small number of cases, and some known projects are not listed (e.g. Imperial College not claiming OPAL). I guess that, as with many evaluation exercises, those being evaluated tend to be conservative and use terms that the evaluators will be familiar with. Maybe over the next five years citizen science will become more common, so we will see more of it in the next round.