UCL Fossil Fuel Divestment debate

UCL organised a debate about fossil fuel divestment, with 7 knowledgeable speakers (all professors) raising arguments for and against the suggestion that UCL should divest from fossil fuels and sell its £21 million invested in the industry. In the room and on the panel there were more people who supported the motion than opposed it. Interestingly, by the end of the discussion more people had switched to support divestment. I took notes of the positions and of what people mentioned, as a way to map the different views that were expressed. So here are my notes, a tl;dr point for each argument, and something about my own view at the end of this longish post.

Anthony Costello opened the debate, noting that research from UCL provided evidence to justify the Guardian ‘keep it in the ground’ campaign. The aim of the debate was to explore different views and to gauge the general views of those who attended.

Richard Horton, the editor of the Lancet, opened with some comments and officially chaired the debate – there is a movement around divestment from fossil fuels that is growing very rapidly across society and in different places. Universities have a special role in society – they are creators of knowledge about public policy issues, but they are also a moral space within society, where positions can be taken. Some UK universities have decided to divest – Glasgow, SOAS. Others haven’t – e.g. Oxford. It is appropriate to ask what UCL should do, as it is leading on considering the impacts of climate change on society at large – e.g. the risks to health.

Chris Rapley opened by noting that we are the first humans to breathe an atmosphere with a basic composition of 400 ppm CO2 – above the levels of the last 800,000 years. A 40% rise is the same increment as between an ice age and an interglacial. The change is taking place 100 times faster than anything natural. The conclusion is that it is unwise to increase above 2°C from pre-industrial levels, and we have very little left to burn: 80% of coal and 50% of oil are unburnable, and we don’t have a solution for carbon capture and storage yet. The first reason to divest is that it’s prudent – fossil fuel is the energy of the past, and renewables are the future. The valuations are a bubble, so it is best to put the money elsewhere. Second, we need to be put on a trajectory away from fossil fuels by this December – and the issues played out in Paris will not be ratified until 2020. We need to connect the trajectory that we are currently on with the future one, so we do it properly. The CEOs of BP and Shell suggest business as usual, and the recent budget gave £1.2 billion to North Sea oil, so the government is not following its own statements. UCL, as a radical thinker, needs to make a gesture in the right direction. We are all part of a web of a carbon-intensive world, and we need to manage the transition.

My TL;DR Rapley: the science shows the need to act, combined with the need to change trajectory.

Jane Holder argued for divestment from the point of view of a teacher of environmental law, focusing on its meaning for teaching and learning – the movement over the past 20-plus years to increase environmental education at university level, and helping students to deal with contested knowledge, uncertainty and environmental issues. UCL has done a lot of work over the years – changes to the estate and the curriculum – and from this perspective the UCL campaign makes a connection between the estate, the curriculum and the university’s finances. There are linkages between environmental education and the learning and teaching of UCL. She noted the significance of the informal curriculum – the intangible ways in which an institution instils values in students – from the publicness of university buildings to the way staff are treated. Secondly, there are broader changes in universities: since tuition fees, students are viewed as consumers, not citizens of the university community. The divestment campaign allows students to act as citizens and not consumers. The university is a site for environmental activity and the roles should be combined.

My TL;DR Holder: there are teaching and learning imperatives, and finance is part of them.

Anthony Finkelstein argued that this is not a question of expertise; his starting point is accepting the need for change – but he expects change in energy sources to happen through technological advances. The speed and extent of change depend on complex feedback systems. Generally, he adopts a precautionary view. However, fossil fuels will be part of our future because of their properties – we need to deal with consumption, not production. Consumption sits within a political context, and the condemnation of fossil fuels is about political failure. UCL should invest according to regulatory requirements. Ownership of assets can be used to exert influence. His concern is about research and UCL strategy – it’s hypocritical to use money from fossil fuel companies for research but not to invest in them; it sends the wrong message. A lot of research in engineering is supported by fossil fuel companies – which also raises the issue of academic freedom: it is not right to question the ethics of people you disagree with. (See the full argument on Anthony’s blog.)

My TL;DR Finkelstein: deal with consumption, not production; we use a lot of funding from fossil fuel companies and there is a risk to academic freedom.

Hugh Montgomery – fossil fuels have helped humanity, but it needs to stop. The energy gain to the planet is equivalent to 5 Hiroshima bombs a second. 7% of the CO2 will stay for 100,000 years. Health impacts will come from all sorts of directions – and these concerns are shared by military bodies and the WHO, not only by ‘extreme left wing’ organisations. Even to stay within 2°C we have 27 years – or more like 19 – and if we are to act, we have to keep 2/3 of fossil fuels in the ground. There is over-exposure to stranded assets. Why divestment? It’s not a ‘rabid anti-capitalist agenda’ – we should change market forces. The aim of divestment is to force fossil fuel companies to go through transformative change. UCL should do what is right – not only because of the amount of money, but as a statement. The stigmatisation will be significant to fossil fuel companies.

My TL;DR Montgomery: it’s not only left wing politics; even if you are fairly conservative in outlook, this makes sense.

Jane Rendell stated that she is concerned about the environment and stood down from the Bartlett research vice-dean position because of BHP Billiton funding; she leads on ethics and the built environment in the Bartlett. In her view, investment in fossil fuels is not compatible with UCL’s research strategy of the judicious application of knowledge to improve humanity. The investment is incompatible with UCL’s own ethical guidelines and its environmental strategy. It is also incompatible with UCL’s own research about the need to leave fossil fuels in the ground. The most profound change will come from breaking down the practices of finance – it is not acceptable for fund managers to hide behind claims in order to ignore their responsibility to everyone else. The only counter-argument is shareholder engagement, and there is no evidence for it – as Porritt recently declared, engagement is useless.

My TL;DR Rendell: incompatibility with UCL policies, and there is no point in engagement. 

Alan Penn – universities should concentrate on their place in society: they are relatively new institutions, important in generating knowledge and passing it to future generations, and in the ability to critically question the world. We are all invested in these companies – we benefit from taxes on North Sea oil and through pension funds. He argued that money is just transactional property and therefore doesn’t hold values, and that people should invest and force companies to change through engagement.

My TL;DR Penn: don’t mix money with values, and if you want change, buy controlling stake in shares.

After the discussion (with Anthony Finkelstein having to defend his position more than anyone else), there was more support for divestment, although most of the room started from that point of view.

Finally, my view – I started my university studies in October 1991, and as I was getting interested in environment and society issues in the second year of my combined Computer Science and Geography degree, the Earth Summit in Rio (June 1992) was all the rage. The people who taught me had been at the summit – which also explains how I got interested in Principle 10 of the Rio Declaration, which is central to my work. This biographical note is to point out that the Earth Summit was also the starting point for the Framework Convention on Climate Change, which opens with

The Parties to this Convention,
Acknowledging that change in the Earth’s climate and its adverse effects are a common concern of humankind,
Concerned that human activities have been substantially increasing the atmospheric concentrations of greenhouse gases, that these increases enhance the natural greenhouse effect, and that this will result on average in an additional warming of the Earth’s surface and atmosphere and may adversely affect natural ecosystems and humankind …

So for the past 23 years, I’ve been watching from the sidelines as decision makers fail to get to the heart of the matter, and fail to act despite being told about the urgency. The science was clear then. Had the actions of governments and industries started in 1992, we could all have been well on the route to sustainability (at least in energy consumption). It was absolutely clear that the necessary technologies were already around. I therefore find the argument for shareholder engagement unrealistic at this stage, nor do I see the link between investment, where you don’t have control over the actions of the company, and careful decisions about which research projects to carry out in collaboration and under which conditions. This is why I have supported the call for UCL to divest.


I still need to find the time to write the academic paper following my blog post about the role of my research area in fossil fuel exploration.

Collective Behavior of In-group Favoritism

We have just had a paper accepted in Advances in Complex Systems entitled "The Effect of In-group Favoritism on the Collective Behavior of Individuals' Opinions." In the paper we develop an agent-based model to explore how individuals interact and how collective behaviors emerge (e.g. reaching a consensus or the spreading of opinions). The abstract of the paper is as follows:

Empirical findings from social psychology show that sometimes people show favoritism toward in-group members in order to reach a global consensus, even against individuals' own preferences (e.g., altruistically or deontically). Here we integrate ideas and findings on in-group favoritism, opinion dynamics, and radicalization using an agent-based model entitled cooperative bounded confidence (CBC). We investigate the interplay of homophily, rejection, and in-group cooperation drivers on the formation of opinion clusters and the emergence of extremist, radical opinions. Our model is the first to explicitly explore the effect of in-group favoritism on the macro-level, collective behavior of opinions. We compare our model against the two-dimensional bounded confidence model with rejection mechanism, proposed by Huet et al. (2008), and find that the number of opinion clusters and extremists is reduced in our model. Moreover, results show that group influence can never dominate homophilous and rejecting encounters in the process of opinion cluster formation. We conclude by discussing implications of our model for research on collective behavior of opinions emerging from individuals' interaction.
 Keywords: Opinion dynamics; in-group favoritism; homophily; radicalization; extremism.

Full reference:
Alizadeh, M., Cioffi-Revilla, C. and Crooks, A.T. (2015), The Effect of In-group Favoritism on the Collective Behavior of Individuals' Opinions, Advances in Complex Systems. DOI: 10.1142/S0219525915500022 (pdf)
The code for the model is available from here.
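For readers unfamiliar with bounded confidence dynamics, the family of models the paper builds on, here is a minimal sketch of the classic one-dimensional version (not the CBC model itself – the parameter values and the cluster-counting helper are illustrative choices, not taken from the paper):

```python
import random

def bounded_confidence(n=100, epsilon=0.2, mu=0.5, steps=20000, seed=42):
    """One-dimensional bounded confidence dynamics: at each step two random
    agents compare opinions (values in [0, 1]) and move toward each other
    only if they already agree to within the confidence threshold epsilon."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        if abs(opinions[i] - opinions[j]) < epsilon:
            diff = opinions[j] - opinions[i]
            opinions[i] += mu * diff  # mu=0.5 moves both to the midpoint
            opinions[j] -= mu * diff
    return opinions

def count_clusters(opinions, tol=0.05):
    """Count opinion clusters by grouping sorted opinions that sit
    within tol of their neighbour."""
    clusters = []
    for x in sorted(opinions):
        if not clusters or x - clusters[-1][-1] > tol:
            clusters.append([x])
        else:
            clusters[-1].append(x)
    return len(clusters)
```

With a small epsilon the population typically fragments into several clusters; as epsilon grows, the clusters merge toward consensus – the macro-level behavior whose interaction with in-group favoritism the paper studies.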

Postgraduate Bursary Eligibility Checker


View the site here.

HEFCE has released funding, prior to a full loan system being introduced, that provides bursaries to students wishing to pursue postgraduate study. The ways in which this scheme is being operated by different universities are diverse; however, a number of universities include both the Index of Multiple Deprivation and the POLAR classification of young participation rates within their metrics.

The website I built aims to simplify the process of searching these data. There are things that could be improved on the site; however, all the code is provided on GitHub.
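As an illustration of how the two metrics mentioned above might be combined into an eligibility rule, here is a hypothetical check – the cutoffs and the or-logic are invented for the sketch and are not the criteria of any actual university or of the site:

```python
def eligible_for_bursary(imd_decile, polar_quintile,
                         imd_cutoff=4, polar_cutoff=2):
    """Hypothetical rule: eligible if the applicant's home area is in a
    deprived-enough IMD decile (1 = most deprived, 10 = least) OR in a
    low-participation POLAR quintile (1 = lowest participation, 5 = highest).
    Both cutoffs are illustrative assumptions."""
    if not (1 <= imd_decile <= 10 and 1 <= polar_quintile <= 5):
        raise ValueError("IMD deciles run 1-10, POLAR quintiles 1-5")
    return imd_decile <= imd_cutoff or polar_quintile <= polar_cutoff

# eligible_for_bursary(3, 5)  -> True  (deprived area, regardless of POLAR)
# eligible_for_bursary(9, 4)  -> False (neither metric qualifies)
```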

Shaun in the City


A fan of Shaun the Sheep? Well, you are going to be seeing plenty of him in London over the next couple of months. On Saturday, 50 fibreglass Shaun the Sheep statues will be placed in various locations in central London – for “Shaun in the City”. 45 of the 50 are featured on the map above (click it for a larger version). The map shows four trails of 3-5km where you can visit several in quick succession. Each statue is painted differently – from a solid gold-painted one in Canary Wharf to one dressed as a beefeater. Above is part of the official map and there is also a full list of the designs. Some of the sheep, marked with a diamond, are indoors.

In late May, the exhibition packs up for a move to Bristol, and then in the autumn they’ll all be back in Covent Garden for a weekend, to be auctioned off. If this all sounds a little familiar, it follows on from Cow Parade, Elephant Parade, a set of kangaroos, Gromits (in Bristol), giant books and miniature Boris Buses, the last of which have just had their closing auction.

We like the official map which we have reproduced above. It’s an attractive, green-tinted custom-made map from OpenStreetMap data, with vignettes of key tourist sites in central London, and largely clutter-free, with just the key streets and tube and railway stations named, as well as the walking routes and the sheep locations themselves. The cartographer has taken care to curve names on wiggly streets, and to add a watercolour effect to soften some of the lines. It’s great seeing attractive, boutique maps like this created for these sorts of events, which will be hugely popular with Londoners and tourists alike, and it is also a great use of the OpenStreetMap dataset that contributors in London have carefully customised over the years. Now I can go and hunt Shauns on a map I helped create!

Map obtained from the Shaun in the City website. There are also iPhone and Android maps.

OS Open

Ordnance Survey have this week released four new additions to their Open Data product suite. The four, which were announced earlier this month, are collectively branded as OS Open and include OS Open Map Local, which, like Vector Map District (VMD), is a vector dataset containing files for various feature types, such as building polygons and railway stations. The resolution of the buildings in particular is much greater than VMD – surprisingly good, in fact. I had expected the data to be similar in resolution to the (rasterised) OS StreetView but it turns out it’s even more detailed than that. The specimen resolution for OS Open Map Local is 1:3000, which is really quite zoomed in. Two new files in OS Open Map Local are “Important Buildings” (universities, hospitals etc) and “Functional Areas” which outline the land containing such important buildings.

Above: Comparing the building polygon detail in the older Vector Map District (left) and the brand new OS Open Map Local (right). The new data is clearly much higher resolution; however, one anomaly is that roads going under buildings no longer break the buildings – note that the wiggly road in the centre of the left sample, Malet Place, which runs through the university and under a building, doesn’t appear in full on the right. The data is Crown Copyright and Database right OS, 2015.

The other three new products, under the OS Open banner, are OS Open Names, OS Open Rivers and OS Open Roads. The latter two are topological datasets – that is, they are connected node networks, which allow routing to be calculated. OS Open Names is a detailed gazetteer. These latter three products are great as an “official”, “complete” specialised dataset, but they have good equivalents on the OpenStreetMap project. OS Open Map Local is different – it offers spatial data that is generally much higher in accuracy than most building shapes already on OpenStreetMap, including inward facing walls of buildings which are not visible from the street – and so difficult for the amateur mapper to spot. As such, it is a compelling addition to the open data landscape of Great Britain.

The OS also confirmed last week the location for its new Innovation Hub. It is indeed a mile from King’s Cross – specifically, it’s in Clerkenwell, and the hub will be sharing space with the Future Cities Catapult. Conveniently the new space has a presentation theatre and the May Geomob will be taking place there.

OpenStreetMap in GIScience – Experiences, Research, and Applications

OSM in GIScience

A new book is out about OpenStreetMap and Geographic Information Science. The book, edited by Jamal Jokar Arsanjani, Alexander Zipf, Peter Mooney and Marco Helbich, is “OpenStreetMap in GIScience: Experiences, Research, and Applications” and contains 16 chapters on different aspects of OpenStreetMap in GIScience, including 1) Data Management and Quality, 2) Social Context, 3) Network Modeling and Routing, and 4) Land Management and Urban Form.

I’ve contributed a preface to the book, titled “OpenStreetMap Studies and Volunteered Geographical Information”, which, unlike the rest of the book, is freely accessible.

Modelling: Big Questions Remain


Recently I was asked to speculate on what strides have been made in urban and transport modelling during the last 20 years, and what I think models will evolve into over the next 20. The current editorial in EPB summarises my thinking. In many senses, this was prompted by the oft-quoted sentiment that agent-based models of transport, which build on many developments of recent decades including activity time budgeting, discrete choice and the ability of computers to handle very many objects through rapid computation, have not made the world better, but have performed much worse than earlier, more aggregative model structures. For a while there has been the sneaking suspicion that aggregate models, with all their limits in terms of representation, somehow generate more realistic predictions than their micro-dynamic equivalents. Of course there can be no true test, as these model types are so different. However, what is interesting is whether we can generalise in any way from the widest possible model experiences: as we add more detail and attempt to explain more, all other things being equal, are we more likely to get poorer or better predictions from comparable models? The implication is poorer, although the jury is out because the evidence has rarely been assembled. This question remains unresolved, and probably will remain so.

To an extent it might be logically plausible to show that aggregate models perform better if the strong structural constraints that determine how aggregate populations travel are difficult to represent – or rather, difficult to have emerge as the product of many travel decisions – within micro-simulation models. But all of this involves incredibly well-defined, controlled experimentation, and given the exigencies of the very different situations in which different models are built, it may well be impossible to come to any definitive conclusion in this regard. Moreover, the whole question of what good prediction is anyway is raised in this debate, which is more about models and science in human affairs than about specific types of model. Yet at the end of the day, we still have to choose between different models and different predictions, and learn to live with the tensions that are endemic in our field. The bigger question, I think, is whether or not our world is becoming more unpredictable – or rather more uncertain, one might say – for this does and will have important implications for modelling. I have written an editorial about all this in the current edition of Environment and Planning B, which you can download here.

Driving Spatial Analysis Beyond the Limits of Traditional Storage

My presentation at the “Conference on Advanced Spatial Modelling and Analysis” consisted of some thoughts on Big Spatial Data and how to handle it with modern technologies.

It was great to see such a motivated crowd from all generations, and to get to know the research developed by CEG, in topics such as Agent Based Modelling and Neural Networks. It was also great to talk again to Arnaud Banos, from the Complex System Institute of Paris Ile-de-France (ISC-PIF).


New challenges – and opportunities – for transport modellers. What Modelling … – TransportXtra


New challenges – and opportunities – for transport modellers. What Modelling ...
Keynote speaker Dr Ed Manley from the Centre for Advanced Spatial Analysis (CASA) at University College London, remains positive, if cautious. “Many of the recent applications of Big Transportation Data have failed to live up to the much-hyped ...


House Prices – A Borough Cartogram


We really like this simple borough-by-borough cartogram map of the median house price in London. The map appears on the London Datastore but is otherwise uncredited. However, it is the first “in the wild” map that I’ve seen based on the prototype London Squared Map concept, produced by After The Flood and the Future Cities Catapult. The data comes from Land Registry price-paid data. The idea reduces each London borough (the boroughs have approximately equal populations, but are generally smaller in area in the inner city) to a square, then arranges the squares in approximately the correct locations relative to each other – a cartogram. The Thames is added as a cartographical stroke, as it is the standard referencing object for London and helps make the cartogram feel more familiar.

The numbers may surprise some – a £500,000+ “average house price” figure has been popular in the press recently, because it sounds like a huge number and so is worth shouting about – but it’s worth bearing in mind that this is a simple average. The median figures used in the graphic here are a better measure, as 50% of houses sold for less than the figure and 50% for more – the skew caused by a few extremely high-priced houses doesn’t affect the median. There is no upper limit on how much you can sell a house for, but the lower limit is obviously £0. The graphic shows that in some of London’s boroughs the figure is (relatively) low – and remember, 50% of properties in such boroughs sold for lower still.
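The skew effect is easy to demonstrate with a few invented sale prices – one extreme outlier drags the mean far above what a typical buyer pays, while the median stays at the middle sale:

```python
# Five invented sale prices (in £), with one extreme outlier.
prices = [250_000, 300_000, 320_000, 350_000, 5_000_000]

mean_price = sum(prices) / len(prices)  # 1,244,000 - dominated by the £5m sale

def median(values):
    """Middle value of a sorted list (average of the two middle
    values when the list has even length)."""
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# median(prices) -> 320,000: unaffected by how extreme the top sale is.
```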

We like the simple “grid of squares” concept and the addition of the Thames. Cartograms are hard to produce in a way that keeps them recognisable to an audience used to Google Maps, but with this concept that challenge may have been met. The green-to-red ramp is also simple and effective for drawing the eye both to the very cheap boroughs (Barking) and the ridiculously expensive (Kensington & Chelsea). As such, the map does a good job of presenting the data effectively. Mapping London has featured house price maps a couple of times before; we think this map is an excellent addition to those already out there.

Found on the London Data Store via a Google search for London house price data.

Taking the Scenic Route – Quantitatively?

A friend forwarded me this article, which discusses this paper by researchers at the Yahoo Labs office in Barcelona and the University of Turin. The basic idea is that they crowdsourced the prettiness of places in central London, via either/or pairs of photographs, to build up a field of attractiveness, then adjusted a router based on this map to divert people along prettier, happier or quieter routes from A to B, comparing them with the shortest pedestrian routes. The data was augmented with Flickr photographs with associated locations. The article that featured this paper walked the routes and gives some commentary on their success.

Quantitatively building attractive routes is a great idea and one which is only possible with large amounts of user-submitted data – hence the photos. It reminds me of CycleStreets, whose journey planner, for cyclists, not only picks the quickest route, but adds in a quieter (and “best of both worlds”) alternative. Judging locations by their attractiveness also made me think of the (soon to be retired) ScenicOrNot project from MySociety which covered the whole of the UK, but at a much less fine-grained scale – and without the either/or normalisation.
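The re-weighting idea can be sketched with a toy Dijkstra router over a made-up street graph – the edge lengths, beauty scores and blending formula below are all illustrative assumptions, not the paper's actual method:

```python
import heapq

# Toy street graph: each edge carries a length in metres and a crowdsourced
# beauty score in [0, 1] (higher = prettier). All values are invented.
EDGES = [
    ("A", "B", 400, 0.9),
    ("B", "D", 500, 0.8),
    ("A", "C", 300, 0.2),
    ("C", "D", 350, 0.1),
]

def build_graph(edges):
    graph = {}
    for u, v, length, beauty in edges:
        graph.setdefault(u, []).append((v, length, beauty))
        graph.setdefault(v, []).append((u, length, beauty))
    return graph

def route(graph, start, goal, alpha=0.0):
    """Dijkstra over a blended edge cost: alpha=0 gives the pure shortest
    path, while larger alpha discounts prettier edges so the router
    prefers them even at the cost of extra distance."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, beauty in graph[node]:
            if nxt not in seen:
                edge_cost = (1 - alpha) * length + alpha * length * (1 - beauty)
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return None

graph = build_graph(EDGES)
# route(graph, "A", "D")             -> ["A", "C", "D"]  (shortest, 650 m)
# route(graph, "A", "D", alpha=0.8)  -> ["A", "B", "D"]  (longer but prettier)
```

The same mechanism generalises to "quieter" or "happier" routes by swapping the beauty score for any other crowdsourced field.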

In the particular example that the paper uses, the routes are calculated from Euston Square Station, which happens to be just around the corner from work here, to the Tate Modern gallery. It’s a little over 2 miles by the fastest route, and the alternatives calculated are only a little longer:
Above: Figure from http://dx.doi.org/10.1145/2631775.2631799

I really like the concept and hope it gets taken further – for more places and more cities. However, I would contend that local knowledge, for now, still wins the day. The scenic route misses out the Millennium Bridge, which is surely one of the most scenic spots in all of London with its framed views to St Paul’s Cathedral and the Tate Modern itself. The quiet route does go this way, but the route is far from quiet when you consider the hordes of tourists normally near the cathedral and on the bridge. The pretty route goes down Kingsway, which is a pretty ugly, heavily trafficked route, ignoring the nearby Lincoln’s Inn Fields, which is lovely. I think that the following, manually curated 3.0 mile route wins out as a much more beautiful route than the algorithmically calculated one:


Highlights include:

  • Walking through UCL’s Front Quad, through the university campus and down Malet Place.
  • Walking through the Great Hall of the British Museum
  • Bloomsbury Square garden and Lincoln’s Inn Fields
  • Chancery Lane
  • New Street Square (modern but attractive)
  • The statue of Hodge, Dr Johnson’s Cat
  • Wine Office Court, with the Ye Olde Cheshire Cheese Pub
  • Fleet Street and Ludgate Hill, with the famous view to St Pauls
  • The vista from St Paul’s Cathedral, across the Millennium Bridge to the Tate Modern.

Maps in this article are © Google Maps.

Half a Million Journeys on the London Underground

This animation from Will Gallia shows over 500,000 individual journeys on the London Underground network; it’s a 5% sample of Oyster card journeys during a week back in 2009. Will has taken the individual origin/destination data and applied a routing algorithm to determine the likely route. I don’t think it’s perfect (the Underground flows north to London Bridge in the morning peak look a little low) but you can still clearly see people streaming inwards in the morning, and back outwards in the evening. You can also see comparatively underused parts of the network – the Hainault loop of the Central line in the top-right, for instance, and individual bursts of travellers appearing in Chesham and Amersham in the far top-left (the trains there are quite infrequent).

A key innovation is transplanting these journeys back onto the “official” (and non-geographical) Beck-style tube map, so that it looks like the regular tube map that everyone knows, but “buzzing” with – and indeed created solely by – people. His animation slows the clock at one point and “flies in” to part of the map for a more detailed look. Each dot is a single person, undergoing their journey at that moment of the day.

I like the simplicity of the idea – one person, one coloured dot. The directional flows therefore stand out, particularly at the start and end of the day when the network is quieter. During the rush hours it’s pretty intense – but then, tube travel is pretty intense then too! The density of the dots is sufficient that the map itself is fully recreated by the people moving along the network.

Will used openFrameworks to allow thousands of journeys to be animated simultaneously. He’s open-sourced his code; you can see/download it on GitHub in two parts, here and here.

Link directly to the video.
Link to more information about the project.


Terry Pratchett: “No one is actually dead until the ripples they cause in the … – Little Atoms

Terry Pratchett: “No one is actually dead until the ripples they cause in the ...
Little Atoms
The first time I met Terry Pratchett I was in my early teens: he patiently signed every volume of my complete Discworld collection. When I met him years later in my professional capacity at the British Humanist Association, he didn't recall that ...


Volunteer computing, engagement and enthusiasm

This post is about perceptions, engagement and the importance of the ‘participant-observation‘ approach in citizen science research.

It starts with a perception about volunteer computing. The act of participating in a scientific project by downloading and installing software that utilises unused processing cycles of your computer is, for me, part of citizen science. However, in different talks and conversations I have noticed many people dismiss it as ‘not real citizen science’. I suspect that this is because of the assumption that the engagement of the participant is very low – just downloading a piece of software and not much beyond that.

Until a few weeks ago, I was arguing that there are many participants who are much more engaged – joining teams, helping others, attending webinars – while quietly accepting that it might be difficult to justify calling people who ‘just download software’ active citizen scientists.

Until this:

A bit of background – the day after the first Citizen Cyberscience Summit in 2010, I joined IBM World Community Grid as a way to experience volunteer computing on my work desktop and laptops, and later on my smartphone, while contributing the unused processing cycles to scientific projects. Out of over 378,000 participants in the project, I’m in the long tail – ranked 20,585. My top contributions are to FightAIDS@Home and Computing for Clean Water.

I notice the screen saver on my computers, and am pleased to see the IBM World Community Grid on my smartphone in the morning, knowing that it used the time since it was fully charged for some processing. I also notice it when I reinstall a computer, or get a new one, and remember that I need to set it going. I don’t check my ranking, and I don’t log in more than twice a year to adjust the projects that I’m contributing to. So, all in all, I self-diagnosed myself as a passive contributor to volunteer computing.

But then came the downtime of the project on 28th February. There was an advance message, but I had missed it. So, looking at my computer during the afternoon of that day, I noticed the message ‘No Work Available to Process’. After a while, it bothered me enough to check the state of processing on the smartphone, which also wasn’t processing anything. A short while after that, I was searching the internet to find out what was going on with the system, and after discovering that the main site was down, I continued to look around until I found the Twitter message above. Even after discovering that it was all planned, I couldn’t stop looking at the screen saver from time to time, and was relieved when processing resumed.

What surprised me about this episode was how much I cared about it. The lack of processing annoyed me enough to spend over half an hour discovering the reason. From the work of Hilary Geoghegan, I know about technology enthusiasm, but I didn’t expect that I would care about downtime the way I did.

This changed my view of volunteer computing – there must be more people who are engaged in a project and care about it than is usually perceived. This is expressed in the survey that IBM ran in 2013, when 15,627 people cared enough about the World Community Grid project to complete a survey. I guess that I’m not alone…

The final note is about the importance of ‘participant-observation‘. As a researcher, participatory action research is a core methodology that I’ve been using for a long while, and I have advocated it to others, for example as a necessary research method for those who are researching OpenStreetMap. Participant-observation requires you to gain a deeper understanding of the topic that you are researching by actively participating in it, not just analysing interviews or statistics about participation. The episode above provides a demonstration of the importance of this methodology. For over 4 years, my participation in volunteer computing was peripheral, but eventually it provided me with an insight that is important to my understanding of the topic, and of the emotional attachment of those who participate in it.

Print Mega Sale


I am moving house soon, so I want to get as many of my remaining prints to a new home before I have to leave for mine. I am therefore selling them off as cheaply as possible (UK: £16, World: £18, including postage). These prints will not be signed or numbered (as they have been in the past), but they are the same high-quality paper and size (A2) as I have been selling all along here.



Centre for Spatial Demographics Research, University of Liverpool: Centre Symposium and Launch Event

Thursday 11th and Friday 12th June 2015, University of Liverpool

Research on population studies is at a major turning point. Changes not just in population dynamics (for example, in fertility and family formation and migration, and in the social, economic, demographic and ethnic characteristics of neighbourhoods), but also in the ways in which populations are measured and recorded, mean that new approaches to the study of populations are essential. Within this context, the University of Liverpool is establishing a new inter-disciplinary Centre for Spatial Demographics Research, reflecting a recent growth in expertise in spatial population studies within the University.

This is to invite you to a symposium and launch of the Centre. The event will focus on recent research and challenges in spatial demographics research. It will bring together leading researchers in spatial demographics who will present cutting-edge research and engage with key agendas in the field. Within this context, spatial demographics covers any research that falls within the broad headings of population research (quantitative and qualitative), spatial analysis, demography, epidemiology and geographical information science and this event is likely to appeal to any academics, public sector researchers and others who have an interest in spatially-referenced population data and their analysis. Six key research themes provide the focus of the event:

1. Demographic change
2. Small-area estimation
3. Migration and ethnic diversity
4. Geodemographics
5. Long-term change
6. Future opportunities

Each theme will feature talks from a Centre member, an invited external speaker who will reflect on their own work as well as wider issues about the theme, and an invited external speaker on the theme’s policy context. Confirmed speakers include: Professor Tony Champion (Newcastle University), Professor Phil Rees (University of Leeds), Alan Smith (Office for National Statistics), Professor Richard Webber (King’s College London), Professor Li-Chun Zhang (University of Southampton).

This will be an exciting event which will showcase the latest research and debate progress and problems in spatial demographics. Spaces are limited and first-come-first served, so early booking is recommended.

The event will run over two days (starting at 12.30pm on the 11th and closing at 1pm on the 12th). A buffet lunch and coffee and tea will be provided on both days. Attendance is free, but registration is essential.

To register for this event, click here.

Chris Lloyd



Luminocity3D, an “urban density and dynamics explorer”, is an interactive mapping website developed by UCL CASA’s Duncan Smith, which shows a number of urban demographics for London and other UK urban areas, using an innovative hexagonal grid and a 3D effect to emphasise values. The above view shows how the proportion of London’s “Zero Car Households” varies across the capital, with colours showing the proportion, and the height of each hexagonal area showing the residential population of that area. London’s “Zero Car” distribution can easily be seen to be radial, with only a few places beyond the inner city – Barking and Croydon – having notable zero-car populations.

A number of other census and other metrics are included, selectable via a simple dialog on the top right. I would be remiss not to mention my own project, DataShine, which has similar goals – it maps these datasets in a different way, colouring building plots rather than hexagons. What I really like about Luminocity3D’s approach however is its clarity and distinctiveness – it is easy to see the big picture at a glance and the interactive cues on the website make it a compelling tool to explore data with. An interactive chart is included with each map, comparing London’s result (normally an outlier, and easy to spot as it’s the biggest blob on the chart) with other urban areas in the UK.


Screenshots from the Luminocity3D website, which was featured in Nature this week.

Geoweb, crowdsourcing, liability and moral responsibility

Yesterday, Tenille Brown led a Twitter discussion as part of the Geothink consortium. Tenille opened with a question about liability and wrongful acts that can harm others.

If you follow the discussion (search in Twitter for #geothink) you can see how it evolved and which issues were covered.

At one point, I asked the question:

It is always intriguing, and frustrating at the same time, when a discussion on Twitter takes on a life of its own and moves away from the context in which a topic was originally brought up. At the same time, this is the nature of the medium. Here are the answers that came up to this question:

You can see that the only legal expert around said that it’s a tough question, but of course everyone else shared their (lay) view on the basis of moral judgement and their own worldview, not on legality – and that’s also valuable. The reason I brought up the question was that during the discussion, we started exploring the duality in digital technology between ownership and responsibility – or rights and obligations. It seems that technology companies are very quick to emphasise ownership (expressed in strong intellectual property rights arguments) without responsibility for the consequences of technology use (as expressed in EULAs and the general attitude towards users). So the nub of the issue for me was agency. Software does have agency of its own, but that doesn’t mean that it absolves the human agents from responsibility for what it is doing (be they software developers or the companies).

In ethics discussions with engineering students, the cases of the Ford Pinto or the Thiokol O-rings in the Challenger Shuttle disaster come up as useful examples to explore the responsibility of engineers towards their end users. Ethical codes exist for GIS – e.g. the code of ethics of URISA, or the material online about ethics for GIS professionals and in Esri publications. Somehow, the growth of the geoweb took us backwards. The degree to which awareness of ethics is internalised within a discourse of ‘move fast and break things‘, a software/hardware development culture of perpetual beta, a lack of duty of care, and a search for a fast ‘exit’ (and therefore IBG-YBG) makes me wonder about which mechanisms we need to put in place to ensure the reintroduction of strong ethical notions into the geoweb. As some of the responses to my question demonstrate, people will accept the changes in societal behaviour and view them as normal…


Ordnance Survey Open Data – The Next Level of Detail

An encouraging announcement a few days ago from BIS (the Department for Business, Innovation and Skills) and the Ordnance Survey, regarding future Open Data products from the Ordnance Survey (press release here) – two pieces of good news:

  • The OS will be launching a new, detailed set of vector data as Open Data at the end of this month. They are branding it as OS OpenMap, but it looks a lot like a vector version of OS StreetView, which is already available as a raster. The key addition will be “functional polygons” which show the boundaries of school and hospital sites, and more detailed building outlines. OS Vector Map District, which is part of the existing Open Data release, is already pretty good for building outlines – it forms the core part of DataShine and this print, to name just two pieces of my work that have used the building footprints. With OpenMap, potentially both of these could benefit, and we might even get attribute information about building types. What we do get is the inclusion of unique building identifiers – potentially this could allow a crowd-sourced building classification exercise if the attribute information isn’t there. OpenMap also includes a detailed and topological (i.e. joined up) water network, and an enhanced gazetteer (placename database).
  • The other announcement relates to the establishment of an innovation hub in London – an incubator for geo-related startups. The OS are being cagey about exactly where it will be, saying just that it will be on the outskirts of the Knowledge Quarter, which is defined as being within a mile of King’s Cross. UCL’s about a mile away. So maybe it will be very close to here?

p.s. The Ordnance Survey have also recently rebranded themselves as just “OS”. Like University College London rebranding itself as “UCL” a few years ago, and ESRI calling itself Esri (and pronouncing it like a word), it will be interesting to see if it sticks. OS for me stands for “open source” and is also very close to OSM (OpenStreetMap), so possible confusion may follow. It does however mean a shorter attribution line for when I use OS data in my web maps.

Visit the new oobrien.com Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Manchester – Languages and Jobs

Many of my visualisations have focused on London – there is an advantage to being in the city and surrounded by the data, which means that London is often the “default” city to map. However, I’ve created Manchester versions of a couple of my popular maps, Ward Words and Ward Work. Logistical and time reasons mean that I present these as images rather than interactive websites, although I used the existing London-centric website with Manchester data. A bonus is that, by presenting these as images, I can use LSOAs, which are more detailed than wards – although there are too many of them for my interactive version to be very usable.

I’m only showing the top result, and the way the categories are grouped can therefore significantly influence what is shown. For example, if I grouped certain categories together, even ones which don’t appear on the map itself, then the grouped category would likely appear in many places, because it would be more likely to be the top result. It would therefore be easy to produce a version of this graphic that showed a very different emphasis.
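As a toy illustration of this grouping effect – the counts below are hypothetical, not the actual census figures – the category that “wins” an area can flip purely because two related categories are merged:

```python
# Hypothetical occupation counts for a single area (illustrative only).
counts = {"Sales Assistant": 40, "Nurse": 30, "Care Worker": 25}

# Ungrouped: "Sales Assistant" is the top result shown on the map.
top_ungrouped = max(counts, key=counts.get)

# Merge the two health-related categories and recompute the top result.
grouped = {
    "Sales Assistant": counts["Sales Assistant"],
    "Health & Care": counts["Nurse"] + counts["Care Worker"],  # 55
}
top_grouped = max(grouped, key=grouped.get)

print(top_ungrouped)  # Sales Assistant
print(top_grouped)    # Health & Care
```

The underlying counts are identical in both cases; only the grouping changed, yet a map of “top result per area” would now tell a different story.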

The maps were created using open, aggregated data (QS204EW and QS606EW) from the ONS which is under the Open Government Licence, and the background map is from HERE maps. Enjoy!

Languages second-most commonly spoken in each LSOA in the Greater Manchester area (click for a larger version):
second_languages_manchester

N.B. Where the second language is spoken by less than 2% of the population, I simply show it as a grey circle. LSOAs have a typical population of around 1,500, so the smallest non-grey circles represent around 30 speakers of that language.

The most popular occupation by (home) LSOA (again, click for a larger version):
manchester_occupation_adorned-1

I’ve used grey here for the “Sales Assistant” occupational group, as this is the dominant occupation in large urban areas.

My interactive (London only I’m afraid) version is here – change the metric on the top left for other datasets.

Visit the new oobrien.com Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Seeing Red: 15 Ways the Boris Bikes of London Could be Better


I’ll be gradually tweaking this article to add/amend sources, clarifications and develop some of my arguments.

A big announcement for the “Boris Bikes” today, aka Barclays Cycle Hire. London’s bikeshare system, the second largest in the western world after Paris’s Velib and nearly five years old, will be rebranded as Santander Cycles, and the bikes will have a new, bright red branding – Santander’s corporate colour, and conveniently also London’s most famous colour. As well as the Santander logo, it looks like the “Santa Bikes” will have outlines of London’s icons – the above publicity photo shows the Tower of London and the Orbit, while another includes the Shard and Tower Bridge. A nice touch to remind people that these are London’s bikes.

It’s great that London’s system can attract “big” sponsors – £7m a year with the new deal – but another document that I spotted today reveals that, despite the sponsorship, London’s system runs at a large operating loss – this is all the more puzzling because other systems can (almost) cover their operating costs – including Washington DC’s which is both similar to London’s in some ways (a good core density, same bike/dock equipment) and different (coverage into the suburbs, rider incentives); and Paris’s, which has a very different funding model, and its own set of advantages (coverage throughout the city) and disadvantages (little incentive to expand/intensify). What are they doing right that London is not?

In financial year 2013/4, London’s bikeshare had operating costs of £24.3m. Over this time period, the maximum number of bikes that were available to hire was 9,471, on 26 March 2014. This data comes from TfL’s own Open Data Portal. This represents a cost of over £2,500 per bike, for that year alone. If you look at it another way, each bike is typically used three times a day, or ~1,000 times a year, so that’s about £2.50 a journey, of which the sponsor pays about £1 and the user about £0.50. As operating costs, these don’t include the costs of buying the bikes or building the docking stations. Much of the cost is likely absorbed in repairing the bikes – London’s system is wildly successful, so each bike sees a lot of use every day, and the wear and tear is likely to be considerable. This is not helped by the manufacturer of the bikes going bust a couple of years ago – so there are no “new” ones out there to replace the older ones. New York City, which uses the same bikes, is suffering similar problems. The other big cost will be the rebalancing/redistribution activity, operating a fleet of vehicles that move bikes around.
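The back-of-envelope arithmetic above can be checked in a few lines of Python (the figures are the ones quoted in the text; the ~1,000 journeys per bike per year is the rough estimate used there):

```python
# Operating cost per bike and per journey for London's bikeshare,
# using the 2013/4 figures quoted above.
operating_cost = 24_300_000        # £, financial year 2013/4
max_bikes = 9471                   # peak bikes available, 26 March 2014
journeys_per_bike_per_year = 1000  # ~3 journeys a day, as estimated above

cost_per_bike = operating_cost / max_bikes                     # ~£2,566 per bike per year
cost_per_journey = cost_per_bike / journeys_per_bike_per_year  # ~£2.57 per journey

print(f"£{cost_per_bike:,.0f} per bike, £{cost_per_journey:.2f} per journey")
```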

I have no great issues with the costs of the bikes – they are a public service, and the costs are likely a fraction of the costs of maintaining the other public assets of roads, buses and railway lines – but it is frustrating to see that, with the current setup of London’s system, the main beneficiaries are tourists (the Hyde Park docking stations are consistently the most popular), commuters (the docking stations around Waterloo are always popular on weekdays), and those Londoners lucky enough to live in Zone 1 and certain targeted parts of Zone 2 (south-west and east). Wouldn’t it be great if all Londoners benefited from the system?

Here are 15 ways that London’s bikeshare could be made better for Londoners (and indeed for all) – and maybe cheaper to operate too:

  1. Scrap almost all rebalancing activity. It’s very expensive (trucks, drivers, petrol), and I’m not convinced it is actually helping the system – in fact it might be making it worse. Most cycling flows in London are uni-directional – in to the centre in the morning, back out in the evening – or random (tourist activity). Both of these kinds of flows will, across a day, balance out on their own. Rebalancing disrupts these flows, removing the bikes from where they are needed later in the day (or the following morning) to address a short-term perceived imbalance that might not be real on the ground. Plus, when the bikes are sitting in vans, inevitably clogged in traffic, they are of no use to anyone. Some “lightweight” rebalancing, using cycle couriers and trailers, could help with some specific small-scale “pinch points”, or respond to special events such as heavy rainfall or a sporting/music event. New York uses cyclists/trailers to help with its rebalancing.
  2. Have a “guaranteed valet” service instead, like in New York. This operates for a certain number of key docking stations at certain times of the day, and guarantees that someone can start or finish their journey there. London already has this, to a certain extent, at some stations near Waterloo, but it would be good to highlight this more and have it at other key destinations. This “static” supply/demand management would be a much better use of the time of redistribution drivers.
  3. Have “rider rewards“, like in Washington DC. Incentivise users to redistribute the bikes themselves, by allowing a free subsequent day’s credit (or free 60-minute journey extension) for journeys that start at a full docking station and end at an empty one. This would need to be designed with care to ensure “over-rebalancing”, or malicious marking of bikes as broken, was minimised. Everyone values the system in different ways, so some people benefit from a more naturally balanced system and others benefit from lower costs using it.
  4. Have more flexible user rules. Paris’s Velib has an enhanced membership, “Passion”, that allows free single journeys of up to 45 minutes rather than 30 minutes. In London, you have to wait 5 minutes between hires, but most systems (Paris, Boston, New York) don’t have this “timeout” period. To stop people “guarding” bikes for additional use, an alternative could be to make it a 10-minute timeout but tie it to the specific docking station (or indeed a specific bike) rather than making it system-wide.
  5. Adjust performance metrics. TfL (and the sponsors) measure the performance of the system in certain ways, such as the time a docking station remains empty at certain times of the day. I’m not sure that these are helpful – surely the principal metrics of value (along with customer service resolution) are the number of journeys per time period and/or the number of distinct users per time period. If these numbers go down over a long period, something’s wrong. The performance metrics, as they stand, may be encouraging the unnecessary and possibly harmful rebalancing activity, increasing costs with no real gain.
  6. Remove the density rule (one docking station every ~300 metres) except in Zone 1. Having high density in the centre and low density in the suburbs works well for many systems – e.g. Bordeaux and Washington DC, because it allows the system to be accessible to a much larger population, without flooding huge areas with expensive stations/bikes. An extreme example, this docking station is several miles from its nearest neighbour, in a US city.
  7. Build a docking station outside EVERY tube station, train station and bus station inside the North/South Circular (roughly, Zones 1-3). Yes, no matter how hilly* the area is, or how little existing cycling culture it has – stop assuming how people use bikes or who uses them! Bikeshare is a “last mile” transport option and it should be thought of as part of someone’s journey across London, and as a life benefit, not as a tourist attraction. The system should also look to expand into these areas iteratively rather than having a “big bang” expansion by phases. It’s crazy that most of Hackney and Islington doesn’t have the bikeshare, despite having a very high cycling population. Wouldn’t it be great if people without their own bikes could be part of the “cycling cafe culture” that is strong in these places? For other places that have never had a cycling culture, the addition of a docking station in a prominent space might encourage some there to try cycling for the first time. (*This version of the bikes could be useful.)
  8. Annual membership (currently £90) should be split into peak and off-peak (no journey starts from 6am-10am) memberships, the former increased to £120 and the latter decreased back to £45. Unlike the buses and trains, which are always full at peak times and pretty busy off-peak too, there is a big peak/off-peak split in demand for the bikes. Commuters get a really good deal as it stands. Sure, it costs more than buying a very cheap bike, but actually you aren’t buying the use of a bike – you are buying the free servicing of the bike for a year, and the free distribution of “your” bike to another part of central London if you are going out in the evening. Commuters who use the bikes day in, day out should pay more. Utility users, who use the bike to get to the shops, are the sort that should be targeted more, with off-peak membership.
  9. A better online map of availability. The official map still doesn’t have at-a-glance availability. “Rainbow-board” type indications of availability in certain key areas of London would also be very useful. Weekday use, in particular, follows distinct and regular patterns in places.
  10. Better indication of where the nearest bikes/docks are, if you are at a full/empty docking station, i.e. a map with route indication to several docking stations nearby with availability.
  11. Better static signage of your nearest docking station. I see very few street signs pointing to the local docking station, even though they are hard-built into the ground and so generally are pretty permanent features.
  12. Move more services online, have a smaller help centre. A better view of journeys done (a personal map of journeys would be nice) and the ability to question overpayments/charges online.
  13. Encourage innovative use of the bikeshare data, via online competitions – e.g. Boston’s Hubway data visualisation competitions have had lots of great entries. These get further groups interested in the system and ways to improve it, and can produce great visuals to allow the operator/owner to demonstrate the reach and power of the system.
  14. Allow use of the system with contactless payment cards, and so integration with travelcards, daily TfL transport price caps etc. The system can’t use Oyster cards because of the need to have an ability to take a “block payment” charge for non-return of the bikes. But with contactless payment, this could be achieved. The cost of upgrading the docking points to take cards would be high, but such docking points are available and in use in many of the newer US systems that use the same technology.
  15. Require that all new housing developments above a certain size, in, say, Zone 1-3 London, include a docking station with at least one docking point per 20 residents and one new bike per 40 residents, either on their site or within 300m of their development boundary. (Update: Euan Mills mentions this already is the case within the current area. To clarify, I would like to see this beyond the current area, allowing an organic growth outwards and linking with the sparser tube station sites of point 7.)

London has got much right – it “went big” which is expensive but the only way to have a genuinely successful system that sees tens of thousands of journeys on most days. It also used a high-quality, rugged system that can (now) cope with the usage – again, an expensive option but absolutely necessary for it to work in the long term. It has also made much data available on the system, allowing for interesting research and increasing transparency. But it could be so much better still.

Washington DC’s system – like London’s, but profitable.

Visit the new oobrien.com Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Exotic Adornments and Old Maps: The Art of Kristjana S Williams



ksw_map_small

Artist Kristjana S Williams, originally from Iceland but now based in west London, specialises in collages of vividly coloured, exotic creatures. A number of her works have included adorning such animals around the edges of old maps – often artworks in their own right – creating a distinctive “frame” around the map and perhaps harking back to the days when maps were the preserve of ocean-going explorers, discovering weird and wonderful things on round-the-world voyages.

As well as global maps, Kristjana has worked with a number of London-specific old maps, including Lundanar Kort (excerpt on the right) based on Mogg’s 1806 map we featured only very recently, Round London (above), based on a 1791 map by Paterson, Markets Royale, set upon Cary’s 1824 plan of London, and a Transport for London commission which frames a pre-Beck London Underground map (shown below). Lundanar Kort has several editions of its own, with the same basemap but a differing assortment of fantastic adornments surrounding it. The “Gull Sky” edition includes profile sketches of a number of London buildings, old and new.

There’s something very compelling about the mashup of old maps and colourful animals that is hard to pin down!


Kristjana is represented by Outline Artists and her work is available at Outline Editions. I first came across her work at the “Art Cartography” solo exhibition late last year at The Map House in Knightsbridge, a rather wonderful map/art dealership which deserves a post of its own here sometime.


Various websites I’ve built, and mentioned here on oobrien.com from time to time, are down until Monday lunchtime, due to a major power upgrade for the building that the server is in.

This affects the following websites:

  • DataShine
  • CDRC
  • Bike Share Map
  • Tube Tongues
  • OpenOrienteeringMap (extremely degraded)
  • Some other smaller visualisations

However the following are hosted on different servers and so will remain up:

Visit the new oobrien.com Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Gravity Models Circa 1846


Once in a while along comes a wonderful piece of historical research that again illustrates that in most fields, there is little new under the sun. Andrew Odlyzko’s recent paper entitled “The forgotten discovery of gravity models and the inefficiency of early railway networks” is just such a paper. In it, he shows that it was not Carey who was the first to argue that human interactions vary directly with their masses and inversely with the distances between them – Newton’s law of gravitation – but a Belgian railway engineer, Henri-Guillaume Desart, who in 1846 (and perhaps even before) argued that rail traffic on any new line would follow such a law. He based this on the ‘big data’ of his time, namely railway timetables, and he thus joined a debate which raged for over a century as to whether new rail lines built point to point in straight lines, with no stations between, would generate more traffic than would be attracted locally if stations were clustered around big cities. This is a debate that has some resonance even today, with the discussion in Britain about new high-speed lines such as HS2 and which stations they might connect to.

Odlyzko’s paper also notes that in 1838 a British physicist, John Herapath, suggested that this local law of spatial interaction for rail traffic in fact followed a negative exponential law, with traffic proportional to exp(-bd), where d is the distance from some source to a station. Arguably this is an earlier discovery, although it was Desart who fitted his model to data, coming up, remarkably, with an exponent of 2.25 on the inverse power of distance in the gravity model.
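In symbols (my notation, not Odlyzko’s), the two competing distance-decay laws read as follows – Desart’s inverse-power gravity law with his fitted exponent, and Herapath’s negative exponential:

```latex
% Desart's gravity law: traffic T_{ij} between places i and j with masses m_i, m_j
T_{ij} \propto \frac{m_i \, m_j}{d_{ij}^{\beta}}, \qquad \beta \approx 2.25
% Herapath's negative exponential decay with distance d from source to station
T(d) \propto e^{-b d}
```

The two forms behave quite differently: the exponential decays much faster at long range, which is why fitting one or the other to timetable data gives genuinely different predictions for traffic on long point-to-point lines.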

Elsewhere I have recounted the tale of how the Lyons electronic computer team, much in advance of their time, cracked the shortest route problem in the early 1950s. You can see the video of this here, where they took on the problem of pricing freight on the British Railways network by breaking their big data into chunks of network which they needed to move in and out of store while solving the problem. In fact, somewhere in the recesses of my mind there is also a nagging thought that someone even earlier, just after Newton’s time, first applied his gravity model to human interactions. I seem to remember this was at the time of the French Physiocrats, when the input-output model clearly anticipated Leontief by more than 150 years, when Quesnay devised his Tableau Économique. Old theories of social physics seem to go back to the beginnings of natural physics, and although we live in a time when the modern and the contemporary swamp the past, we are gradually discovering that our human wisdom in learning to apply science to human affairs goes back to the deep past.



Citizen Science 2015 – reflections

Citizen Science Association board meeting, by Jennifer Shirk

The week that passed was full of citizen science – on Tuesday and Friday the Citizen Science Association held its first board meeting, the Citizen Science 2015 conference ran on Wednesday and Thursday, and, to finish it all, on Friday afternoon a short meeting of a new project, Enhancing Informal Learning Through Citizen Science, explored the directions that it will take.

After such an intensive week, it takes some time to digest and think through the lessons from the many conversations, presentations and insights that I’ve been exposed to. Here are my main ‘take away’ lessons. The conference itself ended with members of the Board of the Citizen Science Association (CSA) describing their ‘take aways’ in short Twitter messages, which were then followed by other people joining in, such as:

CSA board – by Michalis Vitos

In more detail, my main observations are about the citizen science research and practice community, and the commitment to inclusive and ethical practice that came up in different sessions and conversations.

It might be my own enthusiasm for the subject, but as in previous meetings and conferences about citizen science, you could feel the buzz during the event, with participants sharing their knowledge with others and building new connections. While there are already familiar faces and the joy of meeting colleagues in the field of citizen science whom you already know, there were also many new people who are either exploring the field of citizen science or are active in it but new to the community of practice around it. As far as I can tell, the conference was welcoming to new participants, and the poster session on the first day and the breakfast on the second day provided opportunities to create new connections. It might be because people in this field are used to talking with strangers (e.g. participants in citizen science activities), but this is an aspect that the CSA needs to keep in mind to ensure that it stays an open community and not a closed one.

CSA breakfast

Secondly, citizen science is a young, emerging field. Many of the practitioners and researchers are in the early stages of their careers, and within research institutions the funding for the researchers comes through research grants (known in academia as ‘soft money‘) as opposed to budgeted and centrally funded positions. Many practitioners are working within tight and limited government budgets. This has implications: the CSA should ensure that funding limitations don’t stop people from publishing in the new journal ‘Citizen Science: Theory and Practice‘, and that those who can’t attend the conference can find information about it in blogs, in a repository of posters that were displayed at the conference, or in curated social media outputs about it. More actively, as the CSA did for this meeting, funding should be provided to allow early career researchers to attend.

Third, there is clearly a global community of researchers and practitioners committed to citizen science. Yet the support and networks that they need must be local. The point above about budget limitations reinforces the need for local networks and for meeting opportunities that are not too expensive to attend and participate in. For me, the value of face-to-face meetings and discussions is unquestionable (and I would hope that future conferences will run over three days to provide more time), and balancing travel, accommodation and budget constraints with the creation of a community of practice is something to grapple with over the coming years. Having a global community and a local one at the same time is one of the challenges for the Citizen Science Association.

Katherine M – ethics panel

Finally, the conference hosted plenty of conversations and discussions about the ethical and inclusive aspects of citizen science (hence my take away above) – from discussions about what sort of citizenship is embedded in citizen science, to the need to think carefully about who is impacted by citizen science activities. A tension that came up throughout these discussions is the place of expertise – especially scientific expertise – within an activity where citizen scientists are treated respectfully and their knowledge and contributions are appreciated. The tension is emphasised by the contrast between the hierarchical nature of the academic world and the ‘flatter’ or ‘self-organising’ hierarchies that emerge in citizen science projects. I would guess that it is part of what Heidi Ballard calls ‘Questions that Won’t Go Away’ and will need to be negotiated in different projects. What is clear is that even in contributory projects, where the scientists set the project question and the protocol and ask participants to help in data collection or analysis, simple hierarchical thinking of the scientist as expert and the participants as ‘laity’ is going to be challenged.

If you want to see other reflections on the Citizen Science 2015 conference, see the conference previews from Caren Cooper, and the post-conference reports from Monica Peters, who provides a newcomer’s view from New Zealand, while Kelsey McCutcheon provides an American one; Sarah West offers an experienced citizen science researcher’s view; and finally there is a report from the Schoodic Institute, the sponsors and hosts of the CSA conference.
