Giving time – randomised experiments on volunteering and citizen social science

As the event blurb explained: “the Giving Time experiments were led by a team from four UK universities, who wanted to know whether sharing information about how others have volunteered could help to improve volunteering… this was about giving time – and whether volunteers can be nudged. The methodology was randomised controlled trials (RCTs) in real-life field settings involving university student volunteers, Parish Councils, National Trust volunteers, and housing association residents. The research was funded by the Economic and Social Research Council (ESRC).” The discussion of RCTs and citizen science in the same event was bound to generate interesting points.

In the first session, Prof Peter John (UCL) discussed the research challenges of large-scale RCTs with volunteers and volunteering organisations. Peter covered the principles of randomised controlled trials (RCTs) – using randomness when trying something: the assumption is that two randomly allocated groups will behave the same if left alone, so you intervene with only one group and observe the results. Start with a baseline, randomly allocate to programme and control groups, and then compare the outcomes. Because the outcomes are tied to random allocation, the comparison gives an unbiased estimate of the intervention's impact. A key distinguishing feature of RCTs is the need to deliver the intervention and the research at the same time. He suggested a ten-step process: assess whether an RCT is a good fit; recruit the partner organisations in which the work will be carried out; select a site; decide the treatment; specify the control; calculate the sample size; develop the procedure for random allocation; collect data on the subjects; prepare the research plans; and assess the ethical principles. Things that can go wrong include: loss of subjects – people drop out along the way; failed randomisation – in deciding who will be included in the process; treatment not given or modified; interference between treatment and control – when the groups meet; unavoidable confounds – when something comes along in the media or policy changes; poor-quality data – unclear what the data mean and what is going on with it; loss of cooperation with partners; and unexpected logistical challenges.
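The random-allocation and comparison steps Peter described can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not the Giving Time protocol itself – the subject IDs, seed, and group sizes are made up for the example.

```python
import random
import statistics

def randomise(subject_ids, seed=2015):
    """Shuffle the subject list and split it in half: treatment and control."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def difference_in_means(treatment_outcomes, control_outcomes):
    """With random allocation, this difference is an unbiased impact estimate."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical cohort of 100 volunteers identified by ID.
subjects = list(range(100))
treatment, control = randomise(subjects)

# Every subject is allocated exactly once, and the groups are equal-sized.
assert sorted(treatment + control) == subjects
assert len(treatment) == len(control) == 50
```

After the trial, the recorded outcomes (e.g. hours volunteered) for each group would be passed to `difference_in_means` to estimate the intervention's effect.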
Giving Time was the first set of RCTs on volunteering – volunteering is more complex than giving money, and the question is whether behavioural methods can influence the process. Working with the volunteering sector was challenging, as organisations don't have detailed records of volunteers that can be used to develop RCTs, although there was willingness to participate in experiments and it was quite interesting to work with such organisations. There was a high level of attrition among those staying in the study – just getting volunteers to volunteer, from expressing interest to actually doing something. Is it possible to make participation easier and get better-quality data? RCTs required changes in organisational practices – if the interventions are information-based, they are not hugely costly. It is possible to design trials that are sensitive to organisational practice and can feed quickly into decision-making. There are also data-protection issues, so it is important to have a clear data-sharing agreement.

Against this background, the second session, Towards ‘Extreme Citizen Social Science’ – or volunteering as a tool for both social action and enquiry, explored a contrasting approach. The session description set out the challenge: “For many, the scale of engagement with volunteers undertaken through Giving Time brings to mind related questions about the role of citizens in formal research – and then of course Citizen Science – or perhaps ‘Citizen Social Science’? At the same time we see the emergence of “Extreme Citizen Science” aimed at stimulating debate and challenging power relationships through citizen involvement in large scale scientific investigations. Extreme citizen science often starts from natural and physical sciences and has citizen researchers working with formal researchers to define the central research questions, and methods of investigation. But what is the potential for Extreme Citizen Social Science – characterised by being large scale, focused on social science questions, exploiting digital technology, having a high degree of participant control, and orientated towards influencing policy?”

Liz Richardson (Manchester Uni) gave her view of the citizen social science approach. She does a lot of participatory research, where you need to explore with participants what it is acceptable to do with them. We can solve problems in a better way if we have conversations based on a wide knowledge base in science – e.g. a rough guide to spotting bad science. Liz compared her experience to early memories of the RSPB Big Garden Birdwatch – the natural-sciences version of citizen science, part of whose appeal is access to back gardens and wide-area research. She also reflected on her participation in Zooniverse and the confusion about what the science is there – e.g. why do scientists ask which direction a wildebeest is looking? There are different classifications of levels of engagement in citizen science, such as Haklay 2013 and a version in the book Community Research for Participation – from low participation to high. A basic example of citizen social science is the BBC's 2011 class survey – just giving and sharing information, closer to crowdsourcing. Another, more complex example is Christian Nold's emotional maps, where people responded to arousal measurements – part of an evolution in visualising information and sharing mapping. The app MapLocal is used in local planning and in sharing information by community members. Groups can also collect data and analyse it themselves – they then work with social scientists on how to make sense of the data they collected (work carried out with the White Rock Trust in Hastings). It's not research done alone but research that is integrated and leads to change – it's community consultation. One example is the Participatory Chinatown game in Boston, and the Morris Justice Project is an example of community-led action research with support from academics.

I gave a presentation about extreme citizen science, positioning it within a social science context (similar to my talk for the Institute for Global Prosperity), with some pointers to the underlying social theory – especially how the approach that we take contrasts with some behaviour-change approaches that take methodological individualism for granted.

Jemma Mouland (Family Mosaic) gave the provider's point of view. She is head of research at a large social housing provider with about 45,000 tenants. They have done a project with Liz, and she explained it from the provider's perspective. Family Mosaic is looking at community involvement and decision-making – what affects residents in their daily life, and where does the housing provider come in? How can they work more collaboratively with the residents? They ran a citizen science project around the meaning of community, through the Giving Time project. They sent an email to recruit people to become citizen scientists: of the 8,000 people who received the message, 82 were interested and 13 were eventually involved. They provided material for carrying out workshops but didn't instruct participants in how to carry out the research – that led to 50 responses, although participants were asked to get at least 3, so some moved well beyond that. They also got the citizen scientists to analyse the data, and the residents interpreted the data that they had gathered. The results from the survey: different definitions of community, with an active minority, and barriers that include time and articulating the benefits (‘why should I do it?’). The residents felt it was great, but they weren't sure about doing it again – acting on behalf of the provider can be an issue, as can the feeling that all familiar contacts had been used up. The issue of skills is also interesting – they gave very little information, and it could be effective to train people more. For Family Mosaic the data was not ground-breaking, but it proved that collaboration can work and has potential – it gave evidence that this can work for the organisation.

So, *can* volunteers be nudged? Professor Gerry Stoker (Southampton Uni) turned the spotlight on the future of nudge techniques. The reasons for the lack of success of the interventions were the use of the wrong tool and the significant difference between donating money and donating time. Nudge comes with a set of ideas drawing on behavioural economics – we use short-cuts and tricks to make decisions, and we do what others do – and government followed this as a way to influence and work with people and change their behaviour. There are multiple doubts about nudge. First, nudge assumes fast thinking, but giving time happens in slow-thinking mode – donating money is closer to type 1 (fast thinking) and volunteering closer to type 2 (slow thinking). Second, humans are not just cognitive misers – there are degrees of fast and slow thinking. Almost all nudging techniques are about compliance. Nudge is also naive and overly promotional, and runs into problems when the topic is controversial. The individual focus misses the social: changing people's minds requires persuasion. Complexity also makes clear answers harder to find – internal and external validity, and very complex models of causality. There are ironic politics to nudge and experiments – they are allowed space only at the margins of policy-making. We need to recognise that nudge is one tool among others, to be used in combination with structural or institutional change and wider strategies of behaviour change – which is not to say that other techniques are without their own problems and issues.

Discussion – we need methodologies that are responsive to the local situation and context. A key question is how you nudge communities, rather than working at the individual level.

The final talk before the panel discussion was Volunteers will save us – volunteering as a panacea, presented by Dr Justin Davis-Smith (National Council for Voluntary Organisations). On the state of volunteering in 2015: volunteering can enable social transformations – e.g. ex-offenders being released into volunteering roles, which helps avoid re-offending. Another success is involving people who are far from the job market, who gain employable skills through volunteering. Research has also shown that volunteers have better mental health and wellbeing, while not volunteering has a negative impact on wellbeing. Some volunteering can be based on prescription (e.g. Green Gyms). Volunteers are engaged in public services, for example as special constables. Social capital is also improved through volunteering. The replacement value is £40bn, and since the other impacts of volunteering are not quantified, the full value is estimated at £200bn. So will volunteers save us?
However, while volunteering is cost-effective, it is not without cost and requires investment, which is difficult to secure. The discussion about engaging volunteers in public services pits volunteers against paid labour, instead of framing it as co-production. There is also an unhealthy dynamic with paid staff if volunteering is seen only as a cost-saving measure. The vast majority of volunteering effort comes from a small core of people (work on the ‘civic core’ by the Third Sector Research Centre was mentioned). The search for a panacea is therefore complex. Over 15 years of effort on different forms of volunteering, there has been only a 5% change in the amount of volunteering people report. Some of the nudge mechanisms didn't work – there is a lot of evidence that campaigns on volunteering don't work well, and people react negatively to them. A barrier to volunteering is lack of time, and the concern that getting involved will demand more and more of it. Reflecting on these time constraints, micro-volunteering can work.

The final panel explored issues of co-production of research and the opportunities to work with volunteering organisations to start the process – many social service providers do want access to research but find it difficult to get started.