Walk this Way

We recently published a paper entitled "Walk this Way: Improving Pedestrian Agent-Based Models through Scene Activity Analysis" in ISPRS International Journal of Geo-Information. In the paper we explore how new data can help inform our agent-based models, specifically pedestrian models, which have traditionally faced the challenge of collecting data to calibrate and validate models of pedestrian movement. Below is the abstract for the paper:
Pedestrian movement is woven into the fabric of urban regions. With more people living in cities than ever before, there is an increased need to understand and model how pedestrians utilize and move through space for a variety of applications, ranging from urban planning and architecture to security. Pedestrian modeling has been traditionally faced with the challenge of collecting data to calibrate and validate such models of pedestrian movement. With the increased availability of mobility datasets from video surveillance and enhanced geolocation capabilities in consumer mobile devices we are now presented with the opportunity to change the way we build pedestrian models. Within this paper we explore the potential that such information offers for the improvement of agent-based pedestrian models. We introduce a Scene- and Activity-Aware Agent-Based Model (SA2-ABM), a method for harvesting scene activity information in the form of spatiotemporal trajectories, and incorporate this information into our models. In order to assess and evaluate the improvement offered by such information, we carry out a range of experiments using real-world datasets. We demonstrate that the use of real scene information allows us to better inform our model and enhance its predictive capabilities.

Keywords: pedestrian modeling; pedestrian tracking; activity monitoring; spatiotemporal trajectories; agent-based modeling
As with many of our models, the source code of the model can be downloaded from here. To give a sense of the model, the movie below shows how agents traverse the scene.




Full Reference: 
Crooks, A.T., Croitoru, A., Lu, X., Wise, S., Irvine, J. and Stefanidis, A. (2015), Walk this Way: Improving Pedestrian Agent-Based Models through Scene Activity Analysis, ISPRS International Journal of Geo-Information, 4(3): 1627-1656. (pdf)

Job opportunity: Research fellow, Demographic Change and World Population

Dear colleagues,

Please follow this link to see a job advert from the Federal Institute for Population Research.

The research group Demographic Change and World Population at the Federal Institute for Population Research is currently recruiting a research fellow (48-month fixed-term contract, full time). The position requires a master's degree (or equivalent); a PhD (or equivalent) is appreciated. German language skills are required.

The Federal Institute for Population Research is a research institute within the portfolio of the Federal Ministry of the Interior and is based in Wiesbaden (Germany). The remuneration is according to the German public service scale (salary group TVöD E14).

Please note that the deadline for applications is September 24th 2015.

Yours sincerely

Frank Swiaczny

Managing Editor
CPoS | Comparative Population Studies
www.comparativepopulationstudies.de

Data and the City workshop (day 2)

The second day of the Data and the City Workshop (here are the notes from day 1) started with the session Data Models and the City.

Pouria Amirian started with Service Oriented Design and Polyglot Binding for Efficient Sharing and Analysing of Data in Cities. The starting point is that city management needs data, and therefore technologies to handle data are necessary. In the traditional pipeline, we start from sources, then use tools to move the data into a data warehouse, and then do the analytics. The problems with the traditional approach are the size of the data – managing the data warehouse is very difficult; the need to deal with real-time data, which demands very fast answers; and, finally, new data types – from sensors, social media and cloud-born data that originate outside the organisation. Therefore, it is imperative to stop moving data around and instead analyse it where it is. Big Data technologies aim to resolve these issues – e.g. from the development of the Google distributed file system, which led to Hadoop, to similar technologies. Big Data is defined by the technologies that are used to manage and analyse it. The stack for managing Big Data now includes over 40 projects to support different aspects of governance, data management, analysis etc. Data Science spans many areas – statistics, machine learning, visualisation and so on – and no one expert can know all of them (such experts exist to about the same extent that unicorns exist). There is interaction between data science researchers and domain experts, and that is necessary for ensuring reasonable analysis. In the city context, these technologies can be used for different purposes – for example, deciding on the allocation of bikes in the city using real-time information that includes social media (Barcelona). We can think of data scientists as active actors, but there are also opportunities for citizen data scientists using tools and technologies to perform the analysis. Citizen data scientists need data and tools – such as visual analysis languages (AzureML) that allow them to create models graphically and set a process in motion.
Access to data is required, to facilitate finding the data and accessing it – interoperability is important. Service-oriented architecture (which uses web services) is an enabling technology for this, and the current Open Geospatial Consortium (OGC) standards require some further development and changes to make them relevant to this environment. Different services can be provided to different users with different needs [comment: but that increases maintenance and complexity]. No single stack provides for all the needs.

Next Mike Batty talked about Data about Cities: Redefining Big, Recasting Small (his paper is available here) – exploring how Big Data was always there: locations can be seen as bundles of interactions – flows in systems. However, visualisation of flows is very difficult, which makes it challenging to understand the results and check them. The core issue is that with N locations there are N^2 interactions, and this quadratic growth with N is a continuing challenge in understanding and managing cities. In 1964, Brian Berry suggested a system based on location, attributes and time – but the temporal dimension was suppressed for a long time. With Big Data, the temporal dimension is becoming very important. An example of how understanding data is difficult is travel flows – the more regions are included, the bigger the interaction matrix, but it then becomes difficult to show and make sense of all these interactions. Even trying to create scatter plots is complex and does not help to reveal much.
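Batty's point about the size of interaction data can be made concrete with a small sketch: a full origin-destination matrix over N locations has N^2 cells, so doubling the number of regions quadruples the number of interactions to visualise (the numbers below are purely illustrative, not from his talk).

```python
# Sketch: quadratic (N^2) growth of a full origin-destination interaction
# matrix with the number of locations N.
def interactions(n_locations: int) -> int:
    """Number of cells in a full N x N origin-destination matrix."""
    return n_locations ** 2

for n in (10, 100, 1000):
    print(f"{n} locations -> {interactions(n)} interactions")
# 10 locations give 100 interactions; 1000 locations give 1,000,000.
```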

The final talk was from Jo Walsh, titled Putting Out Data Fires: life with the OpenStreetMap Data Working Group (DWG). Jo noted that she was talking from her position as a volunteer in OSM, and recalled that 10 years ago she gave a talk about technological determinism that was not a completely utopian picture of cities, in which OpenStreetMap (OSM) was considered part of the picture. Now, in order to review the current state of OSM activities relevant to her talk, she asked on the OSM mailing list for examples. She also highlighted that OSM is big, but it is not Big Data – it can still fit in one PostgreSQL installation. There is no anonymity in the system – you can find out quite a lot about people from their activity, and that is built into the system. There are all sorts of projects that demonstrate how OSM data is relevant to cities – such as OSM Buildings, which creates 3D buildings from the database, or using OSM with 3D modelling data such as DTMs. OSM provides support for editing in the browser or with an offline editor (JOSM). Importantly, it is not only a map: OSM is also a database (like the new OSi database) – as can be shown by running searches on the database from a web interface. There are unexpected projects, such as custom clothing from maps, or Dressmap. More serious surprises are projects like the Humanitarian OSM Team and the Missing Maps project – there are issues with the quality of the data, but also with the fact that mapping is imposed from the outside on an area that is not mapped, with some elements of colonial thinking in it (see Gwilym Eddes' critique). The InaSAFE project is an example of disaster modelling with OSM. In Poland, they extended the model to mark details of road areas and other features. All this demonstrates that OSM is getting close to the next level of using geographic information, and there are current experimentations with it. Projects such as UTC of Mappa Marcia are linking OSM to transport simulations.
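The "OSM is also a database" point can be illustrated with a minimal sketch of building an Overpass QL query in Python – the standard way to run searches against OSM as a database rather than a map. The tag, bounding box and endpoint below are illustrative examples, not from Jo's talk:

```python
# Sketch: constructing an Overpass QL query that treats OSM as a queryable
# database. Sending the query requires an HTTP POST to a public Overpass
# endpoint (e.g. https://overpass-api.de/api/interpreter).
def overpass_query(key: str, value: str, bbox: tuple) -> str:
    """Return an Overpass QL query for nodes tagged key=value in a bounding box
    given as (south, west, north, east)."""
    south, west, north, east = bbox
    return (
        "[out:json];"
        f'node["{key}"="{value}"]({south},{west},{north},{east});'
        "out;"
    )

# Example: pubs in a small box around central Dublin (coordinates illustrative).
print(overpass_query("amenity", "pub", (53.33, -6.28, 53.36, -6.24)))
```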
Another activity is the use of historical maps – townland.ie .
One of the roles that Jo plays in OSM is as part of the Data Working Group, which she joined following a discussion about diversity in OSM within the community. The DWG needs some help, and its role is geodata thought police / janitorial judicial service / social work arm of the volunteer fire force. The DWG cleans up messy imports and deals with vandalism, but also with dispute resolution. They are similar to a volunteer fire service: when something happens, you can see the sysadmins spring into action to deal with the emerging issue. For example, someone from Uzbekistan reported that they found corruption in some new information, so you need to find the changeset and ask people to annotate more – to say what they are changing and why. OSM is self-policing and self-regulating – but different people have different ideas about what they are doing, and different groups have different views of what they want to do. There are also clashes between armchair mappers and surveying mappers – a discussion between someone who is doing things remotely and a local person who says they know the road and asks to change the classification. The DWG doesn't have a legal basis, and some issues come up because of global cases – for example, translated names that do not reflect local practices. There are tensions between commercial actors that do work on OSM and normal volunteer mappers. OSM doesn't give anyone privileges over other users – the DWG is recognised by the community and gathers authority through consensus.

The discussion that followed this session explored examples of OSM; there are conflicted areas such as Crimea and other contested territories. Pouria explained that in current distributed computing models there are data nodes, and the data is kept static while the code is transferred to the data instead of the data to the code. There is a growing bottleneck in network latency due to the amount of data. There is a hierarchy of packaging systems that you need to use in order to work with a distributed web system, so tightening up code is an issue.
Rob – there are limits to Big Data, such as hardware and software, as well as the analytics of the information, and limits to how far you can foster community when the size is very large and the organisation is managed by volunteers. Mike – the quality of Big Data poses rather different problems from traditional data, so while things are automated, making sense of the data is difficult – e.g. a tap-in without a tap-out in the Oyster data. The bigger the dataset, the bigger the issues with it may be. The level of knowledge that we get is heterogeneous in time, and it transfers the focus to the routine. But evidence is important to policy making and making cases. Martijn – how do we move the technical systems to allow the move to focal community practice? Mike – transport modelling is based on the funders promoting digital technology use, and it can be done for a specific place; the question is who the users are. There is no clear view of who they are and there is wide variety, with different users playing different roles – 'policy analysts' are the first users of models; they are domain experts who advise policy people, with less thought given to informed citizens. In how people react to big infrastructure projects, the articulation of the policy is different from what is coming out of the models. There are projects with open and closed mandates. Jo – OSM has a tradition of mapping parties that bring people together, but they need a critical mass already there – so how do you bootstrap this process, for example supporting a single mapper in Houston, Texas? There are cases of companies using the data while local people used historical information, creating conflict in the way that people use them. There are cases where the tension runs very high, and it does need negotiation. Rob – issues about data citizens and digital citizenship concepts.
Jo – in terms of community governance, the OSM Foundation is very hands-off, and there isn't a detailed process for dealing with corporate employees who are mapping as part of their job. Evelyn – the conventions are matters of dispute and negotiation between participants, and they are being challenged all the time. One of the challenges of dealing with citizenship is to challenge the boundaries and protocols that go beyond the state, and to retain the term to separate it from the subject.

The last session in the workshop focused on Data Issues: surveillance and crime 

David Wood talked about Smart City, Surveillance City: human flourishing in a data-driven urban world. The consideration is of smart cities as an archetype of the surveillance society. Since the smart city is part of the surveillance society, one way to deal with it is to consider resisting and abolishing it to allow human flourishing. His interest is in rights – beyond privacy. What is it that we really want for human beings in this data-driven environment? We want all to flourish, and that means starting from the most marginalised, at the bottom of the social order. The idea of flourishing comes from Spinoza and also Luciano Floridi – his anti-entropic information principle. Starting with smart cities – business and government are dependent on large quantities of data, and increase surveillance. Social science ignores the fact that these technologies provide the ground for social life. The smart city concept includes multiple visions: for example, a European vision that is about government first – how to make good government in cities, with technology as part of a wider whole. The US approach asks how we can use information management for complex urban systems; this relies on other technologies – pervasive computing, IoT and things that are woven into the fabric of life. The third vision is the smart security vision – technology used to control urban terrain, with military techniques applied in cities (as well as in war zones), for example biometric systems for refugees in Afghanistan, which are also for control and the provision of services. The history goes back to cybernetics and policing initiatives from the colonial era. The visions overlap – security is not overt about it (apart from military actors). Smart cities are inevitably surveillance cities – a collection of data for purposeful control of the population.
Specific concerns of researchers are the targeting of people that fit a certain kind of profile, and the aggregation of private data for profit at the expense of those that are involved. The critique of surveillance concerns social sorting, unfair treatment of people, etc. Beyond that – as discussed in the special issue on surveillance and empowerment – there are positive potentials. Many of these systems have a role for the common good. We need to think about the city within neoliberal capitalism, which separates people in space along specific lines and areas, from borders to buildings – trying to make the city into a tamed zone. But the dangerous parts of city life are also a source of opportunities and creativity. The smart city fits well with this aspect – stopping the city from being disorderly. There is a paper from 1995 critiquing pervasive computing as surveillance: the more it reduces the distance between us and things, the more the world becomes a surveillance device and stops us from acting on it politically. In many of the visions of pervasive computing, the human is actually marginalised. This is still the case. There are opportunities for social empowerment, say to allow the elderly to return to areas that they had stopped exploring, or to overcome disability. Participation, however, is flawed – who can participate, in what, where and how? Additional issues are that participation by highly technical people is limited to a very small group, and participation can also become instrumental – 'sensors on legs'. The smart city could enable us to discover the beach under the pavement (a concept from the Situationists) – though some beaches are being hardened. The problem is corporate 'walled garden' systems, and we need to remember that we might need to bring them down.

Next Francisco Klauser talked about Michel Foucault and the smart city: power dynamics inherent in contemporary governing through code. He is interested in the power dynamics of governing through data, taking from Foucault the concept of understanding how power is put into action, and thinking about different modes of power: referentiality – how does security relate to governing? Normativity – what is the norm and where did it come from? Spatiality – how are discipline and security spread across space? Discipline is about imposing a model of behaviour on others (the panopticon). Security works in another way – it frees things up within limits. So the two modes work together. Power starts from the study of a given reality. Data is about the management of flows. The specific relevance to data in cities is explored by looking at refrigerated warehouses that are used within the framework of the smart grid to balance energy consumption – storing and releasing the energy that is preserved in them. The whole warehouse has been objectified and quantified – down to specific products and the opening and closing of doors. He sees the core of the control in connections, processes and flows. Think of liquid surveillance – beyond the human.

Finally, Teresa Scassa explored Crime Data and Analytics: Accounting for Crime in the City. Crime data is used in planning, allocation of resources and public policy making – a broad range of uses. It is part of oppositional social justice narratives, and it is an artefact of the interaction of citizen and state, as understood and recorded by the agents of the state operating within particular institutional cultures. She looked at crime statistics that are provided to the public as open data – derived from police files under some guidelines – and also at emergency call data, made from calls to the police, which is used to produce crime maps. The data used in visualisations about the city is not the same data that is used for official crime statistics. There are limits to the data – institutional factors: it measures the performance of the police, not crime. It is about how the police are doing their job – and there are lots of acts of 'massaging' the data by those that are observed. The stats are manipulated to produce the results that are requested. The police are the sensors, and there is under-reporting of crime at the discretion of the police officer – e.g. sexual assault – and also the privatisation of policing by actors who don't report. Crime maps are offered by private-sector companies that sell analytics and then provide a public-facing option – the narrative is controlled: what will be shared and how. Crime maps are declared to be for 'public awareness or civic engagement' but not for transparency or accountability. They focus on property offences, not white-collar ones. There are 'alternalytics' – using other sources, such as victimisation surveys, legislation, data from hospitals, sexual assault crisis centres, and crowdsourcing. An example of bottom-up reporting is HarassMap, which started in Egypt to report cases of harassment. The legal questions are how the relationship between private- and public-sector data affects ownership, access and control, and how the structure of the state affects data comparability and interoperability.
There is also a question about how law prescribes and limits what data points can be collected or reported.

The session closed with a discussion that explored examples of solutionism, like crowdsourcing that asks the most vulnerable people in society to contribute data about assaults against them, which is highly problematic. Crime data is popular in portals such as the London one, but it is mixed into multiple concerns, such as property prices. David – the utopian concept of platform independence, assuming that platforms are without values, is inherently wrong.

The workshop closed with a discussion of the main ideas and lessons that emerged from it, and of how all these things are playing out. Some questions that started emerging are how crowdsourcing can be bottom-up (OSM) and sometimes top-down, with issues about data cultures in Citizen Science, for example. There are questions about to what degree the political aspects of citizenship and subjectivity play out in citizen science. Re-engineering information in new ways, and the rural/urban divide, are issues that bodies such as Ordnance Survey need to face; the conflicts within data are an interesting piece, as is ensuring that the data is useful. 'Sensors on legs' is a concept that can be relevant to bodies such as Ordnance Survey. The concept of the stack is also relevant to where we position our research and what different researchers do: starting from the technical aspects through to how people engage, and the workshop gave a slicing through these layers. An issue that was left out is the business aspect – who will use it, how it is paid for. We need public libraries with the information, but also the skills to do things with these data. The data economy is important, and some data will only be produced by the state, but there are issues with the data practices within the data agencies of the state – and it is not ready to get out. If data is garbage, you can't do much with it – there is no economy that can be based on it. Open questions are: when does data produce software? When does it fail? Can we produce data with and without a connection to software? There is also the physical presence and the environmental impacts. Citizen engagement with infrastructure is lacking, and we need to tease out how things are open for people to get involved. There was also a need to be as nuanced about the city as we were about data.
Try to think about the ways the city is framed: as a site of activities, subjectivity, practices; the city as a source of data to be mined; the city as political jurisdiction; the city as aspiration – the city of tomorrow; the city as a concentration of flows; the city as a socio-cultural system; the city as a scale for analysis / a laboratory. The title 'Data and the City' – is it data for a city? Back to environmental issues – data is not ephemeral and does have tangible impacts (e.g. energy use in blockchain, inefficient algorithms, electronic waste (WEEE) that is left in the city). There are also issues of access and control over huge volumes of data. Such issues are covered in papers such as 'device democracy'. There are wider issues in making the link between technology and wider systems of thought and considerations.


Bomb Damage Maps 1939-1945


Several years ago, we featured some striking maps from a small exhibition at the London Metropolitan Archive. Each map was a detailed plan of a small part of London, the basemap being from 1916, with individual houses clearly shown. Many houses were just shown in white, but a number were coloured in various colours – showing which houses had been hit by bombs during the London Blitz in the Second World War, and the level of damage. Additionally, circles show the impact locations of V1 and V2 rockets. The maps were annotated with the damage/impact information by the London County Council, the city's public authority at the time, shortly after the war finished, as a visual record of the extent and severity of the damage. The concept of colour coding individual houses based on an observed attribute is reminiscent of the famous Booth poverty maps, completed 70 years before, although of course recording a very different attribute.

Previously, you needed to visit the archive yourself and make an appointment to see the maps, but now Laurence Ward, Principal Archivist at the London Metropolitan Archive, has taken these maps, professionally scanned them and reproduced all 110 in this beautifully presented, large-format hardback book, "The London County Council: Bomb Damage Maps 1939-1945", which is published by Thames & Hudson on Monday 31 August 2015, to mark the 75th anniversary of the first German air raids on London in September 1940.

The first thing that struck us on opening the book is its size. It's a weighty tome, containing 110 maps scanned at high resolution and reproduced in full colour, most across two pages, so each is presented at approximately A3 size. It also has a large and detailed introduction, including thirty pages of photographs, tables of statistics and background text – indeed the maps do not start until Section 8. The book finishes with another 50 pages of well-reproduced black-and-white photographs of the bomb damage and recovery efforts. The author and publisher have taken time to make this a high-quality piece, with an attractive font used both for the title and the accompanying text. The inside cover jacket includes a key to the damage colours used in the maps, detachable as a bookmark.

It is striking to see the level of damage that occurred in the City of London – a huge swathe of land is coloured purple “damaged beyond repair”. The map, and photographs, of what is now the Barbican area, show the near complete destruction which resulted in this whole district being redesigned – not even the old road network survived.

The book is immaculately produced and is an essential part of any London-phile's coffee-table book collection. It is available to order from all good booksellers, including Amazon UK.

Thanks to Thames & Hudson for the review copy.

Data and the City workshop (day 1)

The workshop, which is part of the Programmable City project (funded by the European Research Council), is being held in Maynooth today and tomorrow. The papers and discussions touched on multiple current aspects of technology and the city: Big Data, Open Data, crowdsourcing, and critical studies of data and software. The notes below focus on aspects that are relevant to Volunteered Geographic Information (VGI), Citizen Science and participatory sensing – aspects of Big Data/Open Data are noted more briefly.

Rob Kitchin opened with a talk to frame the workshop, highlighting the history of city data (see his paper on which the talk is based). We are witnessing a transformation from data-informed cities to data-driven cities. Within these data streams we can include Big Data, official data, sensors, drones and other sources. The sources also include volunteered information such as social media, mapping, and citizen science. Cities are becoming instrumented and networked, and the data is assembled through urban informatics (focusing on interaction and visualisation) and urban science (which focuses on modelling and analysis). There is a lot of critique – in relation to data, there are questions about the politics of urban data, the corporatisation of governance, the use of buggy, brittle and hackable urban systems, and social and ethical aspects. Examples of these issues include politics: accepting that data is not value-free or objective, and is influenced by organisations with specific interests and goals. Another issue is the corporatisation of data, with questions about data ownership and data control. Further issues are data security and data integrity when systems are buggy and brittle – there have already been cases of hacking into city systems. Social, political and ethical aspects include data protection and privacy, dataveillance/surveillance, social sorting through algorithms, control creep, dynamic pricing and anticipatory governance (expecting someone to be a criminal). There are also technical questions: coverage, integration between systems, data quality and governance (and the communication of information about quality), and the skills and organisational capabilities to deal with the data.
The aim of the workshop is to think critically about data, asking questions about how this data is constructed and used.

The talk by Jim Thatcher & Craig Dalton explored provenance models of data. A core question is how to demonstrate that data is what it says it is, and where it came from. In particular, they consider how provenance applies to urban data. There is an epistemological leap from an individual (person) to data points – there can be up to 1500 data attributes per person in a corporate database. City governance requires more provenance in information than commercial imperatives do. They suggest that data users and producers need to be aware of the data and how it is used.
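To make the idea of a provenance model concrete, here is a minimal, hypothetical sketch of a provenance record for an urban data point – the field names and processing steps are invented for illustration and do not come from Thatcher & Dalton's talk:

```python
# Sketch: a minimal, hypothetical provenance record tracking where a data
# point came from and what was done to it, so its history stays auditable.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    source: str                 # who or what produced the data (illustrative)
    collected_at: str           # timestamp of collection (illustrative)
    transformations: list = field(default_factory=list)  # processing history

    def transform(self, step: str) -> None:
        """Record a processing step applied to the data."""
        self.transformations.append(step)

# Example usage with invented values:
rec = ProvenanceRecord(source="mobile-app-checkin",
                       collected_at="2015-08-31T10:00:00")
rec.transform("anonymised")
rec.transform("aggregated to census tract")
print(rec.transformations)
```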

Evelyn Ruppert asked: where are the data citizens? She discussed the politics in data, thinking about people as subjects in data – seeing people as actors who are intentional and political in their acts of creating data. Being digital mediates between people and technology and what they do. There are myriad forms of subjectivation – there are issues of rights and how people exercise those rights. Being a digital citizen means being not just a recipient of rights but also having the ability to take and assert rights. She used the concept of cyberspace, as it is useful for understanding the rights of the people who use it, while being careful about what it means. There is a conflation of cyberspace and the Internet, and a failure to see it as a completely separate space. She sees cyberspace as the set of relations and engagements that happen over the Internet. She referred to her recent book 'Being Digital Citizens'. Cyberspace has relationships to real space – in relation to Lefebvre's concepts of space. She uses speech-act theory, which explores the ability to act through saying things; there is a theoretical possibility of performativity in speech. We are not in command of what will happen with speech and what the act will be. We can assert acts through the things we do, not only the things we say, and that is what is happening with how people use the Internet and construct cyberspace.

Jo Bates talked about data cultures and power in the city, starting from the hierarchy of data and information. Data can be thought of as 'alleged evidence' (Buckland) – data can be thought of as material: they are specific things – data have dimensionality, weight and texture; they exist as something. Cox, in 1981, viewed the relationship between ideas, institutions and material capabilities – and the tensions between them – with institutions seen as a stabilising force compared to ideas and material capabilities, although institutions may be outdated. She noted that sites of data cultures are historically constituted but also dynamic and porous – but we need to look at who participates and how data moves.

The session was followed by a discussion. Some of the issues: I raised the point of the impact of methodological individualism on Evelyn's and Jim's analyses – for Evelyn, digital citizenship is for collectives, and for Jim, the provenance and use of devices happen as part of collectives and data cultures. Jo explored the idea of a 'progressive data culture' and suggested that we don't yet understand what the conditions for it are – the inclusive, participatory culture is not there. For Evelyn, data is only possible through the actions of the people who are involved in its making, and the private ownership of this data does not necessarily make sense in the long run. Regarding the hybrid-space view of cyberspace/urban spaces – they are overlapping, and it is not helpful to try to separate them. Progressive data cultures require organisational change in government and other organisations. Tracey asked about work on indigenous data and the way it is owned by the collective – and noted that there are examples in the Arctic with a whole setup for changing practices towards traditional and local knowledge. The provenance goes all the way to the community; in the Arctic Spatial Data Infrastructure there are lots of issues with integrating indigenous knowledge into the general data culture of the system. The discussion ended with an exploration of the special case of urban/rural – noting the code/space nature of agricultural spaces, such as the remote control of John Deere tractors, the use of precision agriculture, control over space (so people can't get into it), tagged livestock, as well as variable access to the Internet, broadband speeds etc.

The second session looked at Data Infrastructure and platforms, starting with Till Straube, who looked at Situating Data Infrastructure. He highlighted that Git (GitHub) blurs the lines between code and data – as does functional programming, where code is data and data is code. He also looked at software and conceptual technology stacks, with hardware at the bottom. He therefore uses the concept of topology from Science and Technology Studies and Actor-Network Theory to understand the interactions.

Tracey Lauriault talked about ontologizing the city. Her research looked at the transition of Ordnance Survey Ireland (OSi) with their core GIS – the move towards an object-oriented and rule-based database. How is the city translated into data, and how does the code influence the city? She looked at OSi and the way it produces the data for the island, providing infrastructure for other bodies. OSi started as a colonial project, and has moved from cartographical maps and a digital data model to a full object-oriented structure. The change is about understanding and conceptualising the mapping process. The ontology is the set of things that are important for OSi to record and encode – and the new model allows space to be reconceptualised. She had access to a lot of information about the engineering, tendering and implementation process, and also followed some specific places in Dublin. She explored her analysis methods and the problems of trying to understand how the process works even when you have access to information.

The discussion that followed explored the concept of the ‘stack’, including ideas of considering the stack at planetary scale. The stack is pervading other ways of thinking – the stack is more than a metaphor: it’s a way of thinking about IT development, though it can be flattened. It gets people to think about the inter-relations between different parts. Tracey: it is difficult to separate the different parts of the system because there is so much interconnection. Evelyn suggested that we can think about the way maps were assembled and for what purpose, and understand how the new system aims to produce certain outcomes; Tracey responded that the system moved from a map to a database, and that Ian Hacking’s approach to classification systems needs to be tweaked to make it relevant and effective for understanding systems like the one she is exploring. The discussion expanded to questions about how large systems are developed and what methodologies can be used to create systems that can deal with urban data, including discussion of software engineering approaches, organisational and people change over time, ‘war stories’ of building and implementing different systems, etc.

The third and last session was about data analytics and the city – although the content wasn’t exactly that!

Gavin McArdle covered his and Rob Kitchin’s paper on the veracity of open and real-time urban data. He highlighted the value of open data – from claims of transparency and enlightened citizens to very large estimates of its business value. Yet, while data portals are opening in many cities, there are issues with the veracity of the data – metadata is not provided along with the data. He covered spatial data quality indicators from ISO, the ICA and transport systems, but questioned whether the typical standards for data are relevant in the context of urban data; maybe we need to reconsider how quality is recorded. By looking at two case studies, he demonstrated that the data is problematic (e.g. indicating a journey across the city of 6 km in 30 seconds). Communicating changes in the data to other users is an issue, as is getting information from the data providers – it may be possible to have a metadata catalogue that adds information about a dataset and explains how to report veracity issues. There are such facilities in Paris and Washington DC, but they are not used extensively.

Next, Chris Speed talked about blockchain city – spatial, social and cognitive ledgers, exploring the potential of distributed recording of information as a way to create all forms of markets in information that can be controlled by different actors.

I closed the session with a talk based on my paper for the workshop; the slides are available below.

The discussion that followed explored aspects of representation and noise (produced by people who are monitored, by instruments, or by ‘dirty’ open data), and some clarification of the link between the citizen science part and the philosophy of technology part of my talk – highlighting that Borgmann’s use of ‘natural’, ‘cultural’ and ‘technological’ information should not be confused with the everyday use of these words.



Jeremy Corbyn’s women-only carriages: the arguments for and against – CityMetric

... gentrification, deprivation and property market processes inherent in this urban change – and what future city centres and suburbs will be like. Dr Duncan Smith is a teaching fellow at the Centre for Advanced Spatial Analysis at University College ...

Pixelsticking in Hampstead and the Olympic Park


The last weeks have seen me, Steve Gray, Carina Schneider and Bianca Winter at large in Crystal Palace, East London, and Camden, painting with light and creating “temporary graffiti” – or virtual installations, if you will.

The pixelstick is best thought of as a light painting wand. You may be familiar with traditional light painting – where we use long-exposure photography to pick out bright trails within the image:

(Normally, we hold the camera still and move the light sources, but here we did the opposite)
The Pixelstick works in a similar way, but rather than using a point to create a line, it uses a 1D line (of LEDs) to create a 2D image. It’s also clever enough to vary what those individual LEDs show, so if you walk along with it, it will draw a picture, row by row:
We began our pixelstick adventures late last month – thanks to the UCL Public Engagement Unit Pathways grants, we were able to buy one, and a GoPro for capturing the images – but July is not the best month to start nighttime photography. However, this summer Camden council celebrates their 50th anniversary, so we thought we would mark it by trying out the pixelstick at iconic blue plaque locations. We picked a cluster that included some visual artists – otherwise there could be a danger that we’d just be painting lots of men with beards. First up, sculptor Henry Moore:
Henry Moore’s sculpture, near to his old Hampstead home.

And next, artist Piet Mondrian:
We’d intended this to be a bit of a test run, but our activities got a lot of attention. Residents did come and talk to us about what we were up to, and one of the people living in Mondrian’s old house even helped us to make the image.
What’s better than Richard Burton? An army of Richard Burtons, of course:
From a technical perspective, I think this image provides a lot of ideas. You can see the light from the pixelstick shaded as it passes through a car window, and reflected on the surface of the car bonnets. It would be very easy to recreate the effect of a flat image perpendicular to the viewer in Photoshop (other software is available), so if we could do that, why would we mess around with a pixelstick? The pixelstick seems to shine* when we create images which reflect its embeddedness in 3D space – using oblique views, curving the image, having objects in front and behind – or showing lighting effects that are easy to produce in situ but hard in post – complex or curved reflections, tinted glass, and so on.
So these are our first few images – our next stop is the Olympic Park for some re-creation of past glories. If you’re interested in seeing some of these, why not visit our exhibits at Park Life**, part of the UCL Spark Festival on Sunday 30th or Monday 31st August?

*sorry
**not my title

Online Dating Is Not Dead: How To Increase Your Chances Of Finding ‘The One … – The Inquisitr

In a recent article published in Vanity Fair, author Nancy Jo Sales declared online dating to be dead. According to the author, young people are using Tinder just to have sex, while others are settling for less even when they may want more. So, she ...

This mathematical principle reveals the type of online dating profile photo … – Business Insider

Hannah Fry, a mathematician at the UCL Centre for Advanced Spatial Analysis in London, explains the theory in her 2014 TED Talk and recently released book, "The Mathematics of Love." When most people choose their online dating profile pictures, she ...

‘Nature’ Editorial on Citizen Science

The journal Nature published today an editorial on citizen science, titled ‘Rise of the citizen scientist’. It is a very good editorial that addresses, head-on, some of the concerns that are raised about citizen science, but it also has a problematic ending.

On the positive side, the editorial recognises that citizen scientists can do more than just data collection. The writer also demonstrated an inclusive understanding of citizen science that encompasses both online and offline forms of participation. It also includes volunteered computing in the list (with a reference to SETI@home) and does not dismiss it as outside the scope of citizen science.

It then shows that concerns about the ability of citizen scientists to produce high-quality data are not supported by research findings and, as Caren Cooper noted, there are many other examples across multiple fields. My own minor contribution to this literature is to demonstrate that this is true for OpenStreetMap mappers. It also recognises the importance of one of the common data assurance methods – the reliance on instrument readings as a reason to trust the data.

Finally, it recognises the need to credit citizen scientists properly, and the need to deal with their personal details (and location) carefully. So far, so good.

Then, the article ends with a rather poor paragraph about ‘conflicts of interest’ and citizen science:

More troubling, perhaps, is the potential for conflicts of interest. One reason that some citizen scientists volunteer is to advance their political objectives. Opponents of fracking, for example, might help to track possible pollution because they want to gather evidence of harmful effects. When Australian scientists asked people who had volunteered to monitor koala populations how the animals should be managed, they found that the citizen scientists had strong views on protection that did not reflect broader public opinion.

I have already written here about this attitude of questioning activism and citizen science in specific local issues, but it seems that motivations especially irk scientists and science writers when they look at citizen science. So here are some of the reasons why I think the claim above is contradictory.

There are two reasons for this: first, scientists themselves have a complex set of motivations and are under the same ‘conflicts of interest’; and secondly, if motivations have such an impact on science in general, then this is true for every science, not just citizen science.

Let’s start with the most obvious one – the whole point of the scientific method is that it investigates facts and conditions regardless of the motivation of the specific person who is carrying out the research. I have a reminder of that every day when I go to my office in UCL’s Pearson Building. The building is named after Karl Pearson (known to any scientist because of the Pearson correlation), who was one of the leaders of eugenics, which was the motivation for parts of his work. While I don’t like the motivation (to say the least), it doesn’t change the factual observations and the analysis of the results, though it surely changed the interpretation of them, which we today reject. We therefore continue to use Pearson’s methods and science, since they are useful despite the motivation. We have detached the motivations from the science.

More generally, scientists like to believe that they are following Mertonian norms and that they are ‘disinterested’ in their research – but listen to some episodes of the BBC’s Life Scientific and you discover that what keeps them motivated to apply for research grants against the odds, and to carry out long stretches of boring work, are very deep personal motivations. They wouldn’t do it otherwise! Therefore, according to the paragraph above, we should consider them conflicted.

Citizen scientists are, of course, motivated by specific interests – they wouldn’t volunteer their free time otherwise. Look at the OED definition of citizen science and the sources of the term, and you discover that the first modern use of the term ‘citizen scientists’ was in a report about the Audubon effort to campaign about acid rain. The fact that it was activism did not influence the very careful data collection and analysis operation. Or take the Royal Society for the Protection of Birds (RSPB), in which ‘Campaign with us’ is the top option under ‘what we do’, and yet they run the valuable Big Garden Bird Watch, with results used in scientific papers and for policy. The source of the activism, again, does not influence the outcomes, or the quality of the science.

Is it, then, some forms of activism that Nature has a problem with?

The value of using citizen science in cases such as fracking, air quality or noise is that the scientific method supports systematic, disinterested and objective data collection and analysis. It therefore allows us to evaluate concerns about a specific issue and check whether they are justified and supported by the evidence. In the same way that the environmental impact assessments and reports from the fracking operators are created from a position of conflict of interest, so is the data that comes from the people who oppose it. As long as the data is collected in a rigorous way, with evidence to back that it was done this way (e.g. a timestamp from the smartphone, as the article noted), the scientific approach can provide evidence as to whether the level of pollution from the fracking site (or planned site) is acceptable or not. Arguably, the risk of falsifying the data, or of pressure to drop inconvenient observations, is actually greater, in my view, on the more powerful side of the equation.

My conclusion is that you can’t have it both ways: either science works regardless of motivations, or motivations and conflicts of interest are central to every other piece of science that Nature reports on.


Searching Twitter with ArcGIS Pro Using R

I committed to testing this a long time ago; however, a number of other projects intervened, so I have only just got around to writing up this short tutorial. One of the exciting things from the ESRI Developers Conference this year was the launch of the R-ArcGIS bridge. In simple terms, this enables you to run R scripts from within ArcGIS and share data between the two pieces of software. In fact, this is all explained in a nice interview here.

I won't go into detail about the R script itself, and the code can be found on GitHub. If I am honest, it is pretty rough, and was written to demonstrate what could be done - that said, it should be usable (I hope... but don't complain if it isn't!). ESRI have also provided a nice example which can be found here, and which was the basis of my code.

Preparing R

Before you can link ArcGIS Pro to R, you need to install and load the ‘arcgisbinding’ package, which is unfortunately not on CRAN. There are instructions about how to do this here using a Python toolbox; however, I preferred a more manual approach.

Open up R and run the following commands, which install the various packages used by the toolbox. You might also need to install the Rtools utilities, as you will be compiling on Windows (available here). Although the twitteR and httr packages are available on CRAN, for some reason I have been having issues with the latest versions failing to authenticate with Twitter; as such, links to some older versions are provided.

#Install the arcgisbinding package
install.packages("https://4326.us/R/bin/windows/contrib/3.2/arcgisbinding_1.0.0.111.zip", repos=NULL, method="libcurl")

#Install older versions of the TwitteR and httr packages
install.packages("https://cran.r-project.org/src/contrib/Archive/twitteR/twitteR_1.1.8.tar.gz", repos=NULL, method="libcurl")
install.packages("https://cran.r-project.org/src/contrib/Archive/httr/httr_0.6.0.tar.gz", repos=NULL, method="libcurl")

#Load the arcgisbinding package and check license
library(arcgisbinding)
arc.check_product()

Creating a Twitter Search App

Before you can use the Twitter Search Tool in ArcGIS Pro, you first need to register an app with Twitter, which gives you a series of codes that are required to access their API.

  1. Visit https://apps.twitter.com/ and log in with your Twitter username and password.
  2. Click the "Create New App" button, where you will need to specify a number of details about the application. I used the following:
    - Name: ArcGIS Pro Example
    - Description: An application testing R integration with ArcGIS Pro and Twitter
    - Website: http://www.alex-singleton.com
  3. I left the callback URL blank, then checked the "Yes, I agree" box for the developer agreement, and clicked the "Create your Twitter application" button.
  4. On the page that opens, you then need to click on the "Keys and Access Tokens" tab. You need four pieces of information that enable the Toolbox to link up with Twitter. The first two are displayed - "Consumer Key (API Key)" and the "Consumer Secret (API Secret)". You then need to authorize this application for your account. You do this by clicking the "Create my access token" button at the base of the page. This creates two new codes which are now displayed - "Access Token" and "Access Token Secret". You now have the four codes required to run a Twitter search in ArcGIS Pro.
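With these four codes to hand, it is worth checking the authentication from a plain R session before wiring anything into ArcGIS Pro. A minimal smoke test, using the twitteR package installed earlier (the credential values are placeholders, and the call needs a live Internet connection):

```r
library(twitteR)

# Placeholders - substitute the four codes from the "Keys and Access Tokens" tab
consumer_key    <- "YOUR_CONSUMER_KEY"
consumer_secret <- "YOUR_CONSUMER_SECRET"
access_token    <- "YOUR_ACCESS_TOKEN"
access_secret   <- "YOUR_ACCESS_TOKEN_SECRET"

# Authenticate the session with the Twitter API
setup_twitter_oauth(consumer_key, consumer_secret, access_token, access_secret)

# Quick test: five recent tweets within 10 miles of central Liverpool
# (the geocode format is "latitude,longitude,radius")
searchTwitter("Beatles", n = 5, geocode = "53.4,-2.95,10mi")
```

If this returns a list of status objects rather than an authentication error, the codes are working and can be pasted into the Toolbox dialog later on.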

R Script

I created an R script that:

  1. Authenticates a session with Twitter
  2. Performs a search query for a user-specified term within a proximity (10 miles) of a given lat/lon location
  3. Outputs the results as a Shapefile in a specified folder

The inputs to the script include the various access codes, a location, a search term and an output file location. These variables are all fed into the script based on the Toolbox inputs. Getting the inputs is relatively simple - they appear in the order that they are added to the Toolbox, and are acquired via in_params[[x]], where x is the order number; thus search_term = in_params[[1]] pulls a search term into a new R object called "search_term". The basic structure of a script is as follows (code snippet provided by ESRI):

tool_exec <- function(in_params, out_params) {
        # the first input parameter, as a character vector
        input.dataset <- in_params[[1]]
        # alternatively, can access by the parameter name:
        input.dataset <- in_params$input_dataset

        print(input.dataset)
        # ... do analysis steps

        out_params[[1]] <- results.dataset
        return(out_params)
      }

For more details about the functions available in arcgisbinding, see the documentation located here.
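Putting the pieces together, the skeleton below sketches roughly what the Twitter Search script does: authenticate, search, then write a point Shapefile via arcgisbinding. The parameter order and the arc.write arguments here are assumptions for illustration only; the script on GitHub is the canonical version.

```r
library(twitteR)
library(arcgisbinding)

tool_exec <- function(in_params, out_params) {
  arc.check_product()

  # Assumed parameter order - match this to the order used in your Toolbox
  search_term <- in_params[[1]]
  setup_twitter_oauth(in_params[[2]], in_params[[3]],  # consumer key / secret
                      in_params[[4]], in_params[[5]])  # access token / secret
  lat      <- in_params[[6]]
  lon      <- in_params[[7]]
  out_path <- in_params[[8]]  # e.g. a path ending in .shp

  # Search within 10 miles of the supplied location
  tweets <- searchTwitter(search_term, n = 100,
                          geocode = paste(lat, lon, "10mi", sep = ","))
  df <- twListToDF(tweets)

  # Keep only geotagged tweets, with numeric coordinates
  df$latitude  <- as.numeric(df$latitude)
  df$longitude <- as.numeric(df$longitude)
  df <- df[!is.na(df$latitude) & !is.na(df$longitude), ]

  # Write the results out as a WGS84 point dataset
  arc.write(out_path, df,
            coords = c("longitude", "latitude"),
            shape_info = list(type = "Point", WKID = 4326))
  return(out_params)
}
```

Note that only a minority of tweets carry coordinates, so the output will usually be a good deal smaller than the number of tweets returned by the search.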

How to use the Twitter Search Tool

The Twitter Search Tool runs within ArcGIS Pro and requires you to add a new toolbox. The toolbox should be downloaded along with the R script and placed in a folder somewhere on your hard drive. The files can be found on GitHub here.

  1. Open ArcGIS Pro and create a new blank project called Twitter Map.
  2. Create a new map from the insert menu
  3. From the map tab, click the "basemap" button and select the OpenStreetMap tile layer
  4. Zoom into Liverpool on the map using the navigation wheel
  5. Find the latitude and longitude of map centre. These are recorded just under the map on the window border. The centre of Liverpool is approximately -2.95 (longitude), 53.4 (latitude) (although displayed as 002.95W, 53.40N)
  6. Click on the "Insert" menu, then the "Toolbox" and "Add Toolbox" buttons. Navigate to the folder where you have the Toolbox and R script. Click on the Twitter.tbx file and press the "Select" button.
  7. If you don't see a side bar called "Geoprocessing", then click on the "Analysis" tab and press the "Tools" button. After this is visible, under the search box there is a "Toolboxes" link. Click this and you will see the Twitter toolbox listed. If you look inside the toolbox you will see the Twitter Search script - click on this to open.
  8. Enter a search term (I used "Beatles" - hey we are in Liverpool), the Twitter authentication details, the location and where you want the output Shapefile stored. This defaults to the geodatabase associated with the project; however, you can browse to a folder and specify a Shapefile name - e.g. Twitter_Beatles.shp.
  9. Press the "Run" button and with luck you should now have a Shapefile created in the folder specified.
  10. Add the results to your map by clicking on the "Map" tab, then the "Add Data" button. Browse to where you saved the Shapefile and click the "Select" button.

The following screenshot shows the Shapefile on an OpenStreetMap basemap, with the attribute table also displayed - you will see that the full Tweet details are stored as attributes associated with each point.


Anyway, I hope this is of use and helps people get started with linking R to ArcGIS.

Living Somewhere Nice, Cheap and Close In – Pick Two!


When people decide to move to London, one very simple model of desired location might be to work out how important it is to live somewhere nice, cheap, and well located for the centre of the city – and the relative importance of these three factors. Unfortunately, like most places, you can’t get all three of these in London. Somewhere nice and central will typically cost more, for those reasons; while a cheaper area will either be not so nice, or poorly connected (or, if you are really unlucky, both). Similarly, there are some nice and cheap places, but you’ll spend half your life getting to somewhere interesting, so might miss out on the London “experience”. Ultimately, you have to pick your favoured two out of the three!

Is it really true that there is no magic place in London where all three factors score well? To see the possible correlations between these three factors, I’ve calculated the ward averages for these, and plotted them. Wards are a good way to split up London – there are around 600 of them, which is a nice amount of granularity, and importantly they have real-world names, unlike the “purer” equivalent Middle Super Output Areas (MSOAs). Using postcode “outcodes” would be even better, as these are the most familiar “coded” way of distinguishing areas by non-statisticians, but statistical data isn’t often aggregated in this way.

To show all three variables, I’ve created a 3D plot, using Highcharts. Have a look at the plot here. The “sweet spot” is point 0,0,0 (£0/house, 0 score for deprivation, 0 minutes to central) on the graph – this is at the bottom left as you first load it in.

Use your mouse to spin the graph around – this allows you to spot outliers more easily, and also to collapse down one of the variables, so that you can compare the other two directly on a 2D graph. Unfortunately, you can’t spin the graph using touch (i.e. on a phone/tablet); however, you can still see the tooltip popups when clicking/hovering on a ward. Click/touch on the borough names to hide/show the boroughs concerned. Details on the data sources and method used are on the graph’s page.

The curve away from the sweet spot shows that there is a reasonably good inverse correlation between house prices and deprivation, and house prices and nearness to the city centre. However, it also shows there is no correlation between deprivation and nearness. Newington is cheap and close in, but deprived. Havering Park is cheap and a nice area, but it takes ages to get in from there. The City of London is nice and close by – but very expensive. Other outliers include Merton Village which is very nice – but expensive and a long way out, while Norwood Green (Ealing) is deprived and far out (but cheap). Finally, Bishop’s in Lambeth is expensive and deprived – but at least it’s a short walk into the centre of London.
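For anyone wanting to compute the correlations rather than eyeball them on the 3D plot, they are a one-liner in R once the ward averages are assembled – here with a toy data frame standing in for the real ward table (the column names and values are purely illustrative):

```r
# Toy stand-in for the ward-level table: average house price,
# deprivation score and travel time to the centre (illustrative values)
wards <- data.frame(
  price       = c(280000, 350000, 420000, 610000, 900000),
  deprivation = c(38, 30, 22, 12, 8),
  travel_min  = c(50, 45, 35, 25, 10)
)

# Pairwise rank correlations between the three factors
round(cor(wards, method = "spearman"), 2)
```

With the real ward data, price should correlate negatively with both deprivation and travel time, while deprivation and travel time show little relationship with each other.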

Try out the interactive graph and find the area you are destined to live in.


p.s. If you are not sure where your ward is, try clicking on the blobs within your borough here.


Beyond quantification: a role for citizen science and community science in a smart city

The Data and the City workshop will run on the 31st August and 1st September 2015 at Maynooth University, Ireland. It is part of the Programmable City project, led by Prof Rob Kitchin. My contribution to the workshop is titled Beyond quantification: a role for citizen science and community science in a smart city, and extends a short article from 2013 that was published by UCL’s Urban Lab, as well as integrating concepts from the philosophy of technology that I have used in a talk at the University of Leicester. The abstract of the paper is:

“When approaching the issue of data in Smart Cities, there is a need to question the underlying assumptions at the basis of Smart Cities discourse and, especially, to challenge the prevailing thought that efficiency, costs and productivity are the most important values. We need to ensure that human and environmental values are taken into account in the design and implementation of systems that will influence the way cities operate and are governed. While we can accept science as the least worst method of accumulating human knowledge about the natural world, and appreciate its power to explain and act in the world, we need to consider how it is applied within the city in a way that does leave space for cultural, environmental and religious values. This paper argues that a specific form of collaborative science – citizen science and community science – is especially suitable for making Smart Cities meaningful and democratic. The paper uses concepts from Albert Borgmann’s philosophy of technology – especially those of the Device Paradigm and Focal Practices – to identify the areas where sensing the city can gain meaning for the participants.”

The paper itself can be accessed here.

Other papers from the same workshop that are already available include:

Rob Kitchin: Data-Driven, Networked Urbanism

Gavin McArdle & Rob Kitchin: Improving the Veracity of Open and Real-Time Urban Data

Michael Batty: Data About Cities: Redefining Big, Recasting Small

More details on the workshop will appear on the project website.


Extra Detail in DataShine Commute

scotlandcommute_detailed

We’ve made three changes to the DataShine Commute websites:

  1. For DataShine Scotland Commute we have made use of a new table, WU03BSC_IZ2011_Scotland, published recently on the Scotland’s Census website, which breaks out small-area journeys by mode of transport, in the same way that the England/Wales data does.
    The small-area geography used, Intermediate Geography “IG”, is broadly equivalent to the MSOAs used in England/Wales although the average population is half the size, so we show the lines twice as thickly. There is some additional grouping in the Scotland data – metro services (i.e. Glasgow’s Clockwork Orange) are combined with rail, and commutes by taxi and motorbike are moved into “Other”.

    Looking at the data reveals some characteristic patterns which might be expected: for example, on the edge of Edinburgh, the commute to that point is from outside the city, and from that point it is to closer in towards the city centre. This effect is also strongly seen around London.

  2. For DataShine Commute (England/Wales) we now include numbers, in the summary table for each area, for commuters living in that area who work in Scotland, in Northern Ireland, outside the UK, at home, in no fixed location or on offshore installations.
    These numbers, along with those for people who work elsewhere within the area, are shown in grey in the table. None of these seven special categories are shown as lines on the map.

  3. Finally, we have expanded and renamed the previous DataShine Scotland Commute map which we introduced last month along with DataShine Scotland. The previous map was at a coarse level (showing only flows between local authorities) and was always intended to be a stop-gap until the above more granular data was released. Rather than removing this website, we have decided to expand it to include the data from England, Wales and Northern Ireland too, and show flows between these places as well. This was generally straightforward to do as the Office for National Statistics published a UK-wide table at local-authority level. Constructing the Northern Ireland part of the map was less trivial as the local authority boundary files there are not straightforward to obtain, and needed to be derived.

    The new website is DataShine Region Commute. For visual clarity, we have colour coded the different nations within the UK.

Data and the City

data-city

The Programmable City Project is holding a meeting on Data and the City in Maynooth from August 31 to September 1st. You can get Rob Kitchin’s and Mike Batty’s papers from this blog, but the programme is diverse and the project web site contains details of the various contributions. I argue in my paper that we need to redefine big data in terms of the tools we use to interpret it, and that size is not the main criterion, as quite modest data sets in cities, particularly those dealing with transport and flows, are too large for most current tools, which involve statistical manipulation and visualisation. Here are two of the papers to be presented:

Michael Batty: Data About Cities: Redefining Big, Recasting Small

Rob Kitchin: Data-Driven, Networked Urbanism

The image above was produced by Stephan Hugel and Flora Roumpani from their animation of tweets in London using City Engine. Click here for the movie.


Science-Society Dialogue – from Citizen Science to Co-Design (ICCB/ECCB 2015 – Day 4)

The final day of the ICCB/ECCB 2015 (see my notes on citizen science sessions from Day 1, Day 2 and Day 3) included a symposium that was organised by Aletta Bonn and members of the European Citizen Science Association (ECSA) to explore the wider context of citizen science. The symposium title was Science-Society Dialogue – From Citizen Science To Co-Design. The 6 talks of the session (including mine) were:

Lucy Robinson - 10 principles

Ten principles of citizen science: Sharing best practice amongst the citizen science community – Lucy Robinson (NHM) – the London NHM has been active in citizen science for the past 10 years, and indirectly for much longer. They see the importance of developing citizen science as a field, especially through networks such as ECSA – a network of different people who are involved in citizen science, advancing the field and sharing knowledge. There are different definitions of citizen science, but it is important to think about best practices, and as part of the work in ECSA, Lucy leads the effort to share best practice. This includes the development of the 10 principles of citizen science, which can be summarised as:
1. Involve citizens in the process in a meaningful way.
2. Activities should have genuine science outcomes.
3. All involved should benefit.
4. Citizen scientists may participate in multiple stages of the scientific process.
5. Feedback should be provided to participants.
6. Citizen science should be considered a research approach, understanding its limitations and biases and not overestimating what is possible.
7. Data and metadata should be made available and results should be open access.
8. Participants should be acknowledged in results.
9. Evaluation is needed for scientific output, data quality, participant experience and wider social and policy impacts.
10. Attention must be paid to legal and ethical issues of copyright, IP, data sharing, confidentiality, attribution, and environmental impacts.
The ten principles are open to development over time, and the aim of having them is to help with the challenges in the field – such as duplication of effort and mixed messages – and with the opportunities for collaborations and partnerships. They can help newcomers to start with best practices. There are other tools to improve the work of practitioners, including the 2012 guide on understanding citizen science & environmental monitoring, which covered 150 projects. The report identified that one size doesn’t fit all, and that projects need to learn from others. There are guides for BioBlitzes and how to conduct them, guides for choosing citizen science, and evaluation tools from the CLO (see Tina Phillips’ talk from yesterday).

Helen Roy - 51 years of BRC

In Celebrating 50 years of the biological records centre, Helen Roy covered the history of the UK Biological Records Centre (BRC). The BRC coordinates 85 recording schemes and societies in the UK, covering a wide range of taxa, with atlases published on the different topics covered by these programmes. The people involved in these schemes provide a lot of data, and to celebrate this there are several papers on the 50 years of the BRC in the Biological Journal. Biological recording has developed in different ways – it doesn’t have specific scientific aims, just passion about collecting and identifying the different taxa. The national schemes are diverse – from the 500 members of a bees, wasps & ants recording charity, or a leafhopper society that is more ad hoc, to the completely ad hoc ladybird recording survey with 17,000 recorders. Each scheme is led by an individual, but involves a wide variety of people, and there are now programmes involving many young people, which is important for the future of recording. There are mutual benefits – the recorders provide information, but they get tools that help them – even stuffing envelopes and sending newsletters, as well as data management, website design, editing atlases etc. The BRC benefits from working with a wide range of volunteer experts, and uses the data for many purposes. The core activity is to create the National Biodiversity Network (NBN) – collect, review, share, publish and integrate the information. Different technologies support it, from iRecord to the NBN Gateway. Examples of how the data was used include the analysis of invasions of alien species, predictions of invasive species, informing UK biodiversity indicators, demonstrating impacts of climate change, and modelling future distributions. The environmental challenges require a lot of data, and through this, an extensive community.
(summary of her previous talk on the history of the BRC at BES 2014)

Marisa Ponti - OER value

Potential of digital technologies to enhance openness in learning and science – Marisa Ponti – many citizen science projects still happen offline, and there are many digital technologies that can be used to share and use the data. However, it is worth thinking about the potential of educational resources that can be used in such programmes. Open education and resources – learning, teaching and research that is in the public domain under an open licence for reuse and modification – have a role to play. Openness and access are important to citizen scientists, and can be increased and improved in the outputs of citizen science projects. Outputs are not only the final publications, but also the data, protocols, logs and systems. Open Educational Resources (OER) can help make ideas and scientific knowledge accessible, inspire people to be involved so they are not just passive participants, and can also help to meet funders’ requirements to make the research open. OER can help in reimagining what science is – they can build a community outside institutional settings, such as the Cornell Lab of Ornithology. They can also support self-driven and peer-based learning approaches, allowing people to run their own investigations, and OER can support experimentation with open practices. There is a specific website in the OER area for citizen science learning and research. Resources help in creating suitable teaching sessions, and there are other training materials that can be reused and changed. There is, however, a warning about the conditions for broad participation – OER with digital technology are not in themselves a solution unless we create the conditions for the engagement of many people, and allow participants to own the project. OER need to be dialogical in terms of how people use them.

Learning in citizen science

Citizen science, social learning and transforming expertise – Taru Peltola – she discussed learning in citizen science with a paper that is currently under review (part of ALTER-Net). In citizen science there is plenty of rhetoric – transparency, local knowledge, democracy – but social learning, although usually seen as one of the broader benefits of citizen science, hasn’t received enough attention. There is a need to critically analyse learning within citizen science: learning is an important mechanism that requires mutual learning (by participants, organisers and scientists), and it can occur in all types of citizen science initiatives. Looking at the literature on learning, there are questions on the outcomes (facts, instruments), the process (individual/social/institutional), and who is involved (scientists/volunteers). It is wrong to assume that only the volunteers learn in citizen science – there is also important learning that the scientists gain from the process. To gain more understanding, they looked at 14 cases across Europe – mostly monitoring species, but also cultural ecosystem services through participatory GIS, and reindeer herding. The results from the cases are that learning processes and outcomes are both intended and unintended, learning is situated, and learning is unevenly distributed – there is a need to pay attention to who gets attention and how people are included – and learning outcomes are continuous. They also found that factual and instrumental learning outcomes are easier to assess, but it is important to pay special attention to the social and institutional processes. These need to be included in the design and implementation of citizen science projects.

Extreme citizen science: the socio-political potential of citizen science – Muki Haklay – in my talk, I situated citizen science within the wider changes in access to and use of environmental information. I used the framework of 3 eras of environmental information (covered in detail in the talk at the Wilson Center). The first two eras (1969-1992 and 1992-2005) are characterised by experts who produce environmental information and use it to advise decision makers. In the second era, information is shared with the public, but in a unidirectional way – experts produce and release information to the public in a form that is suitable for sharing with other experts, so it is challenging to comprehend. While the role of civic society and NGOs was recognised in the second era (e.g. Rio’s Principle 10), in terms of citizen science the main model that was acceptable was the contributory model, in which volunteers focus on data collection and the information is verified by experts. In the third era (since 2005), the public is also accepted as a producer of environmental information. This transition is opening up many opportunities for citizen science activities within environmental decision making. However, looking at the state of the art of citizen science, there is plenty of scope for involving people much more in the process of setting up citizen science projects, as well as for engaging people with lower levels of education. I used 3 classifications of participation in citizen science (slides 14-16) to demonstrate that there is a range of ways to participate, and that different issues and different people can participate at a level that suits them and their lives.
After introducing the vision of ‘Extreme Citizen Science’, I demonstrated that it is a combination of a participatory process and the use of technology. I introduced the participatory process of Mapping for Change, which deliberately starts with less use of technology so people can discover the issues that they would like to explore, and then decide how a system such as Community Maps can be used to address them. I introduced GeoKey, which provides the infrastructure for participatory mapping systems (such as Community Maps), and then demonstrated how Sapelli (a data collection tool for low-literacy participants) can be used in a careful participatory process with indigenous groups to design suitable citizen science projects. I used examples from the Congo basin and the work of Gill Conquest, the work of Carolina Comandulli on the Brazil-Peru border in the Amazon, and the current crowdfunding effort in Namibia for the Ju|’hoansi people by Megan Laws. I ended with a note that intermediaries (such as conservation organisations) have an important role to play in facilitating citizen science and helping to maintain and share the data. The slides from the talk are provided below.

Annet Mihatsch - German Citizen Science Strategy

The final talk was Citizen science strategy 2020 for Germany by Anett Richter – the ‘citizens create knowledge – knowledge creates citizens’ project is a German citizen science capacity-building project: it includes building a citizen science platform, scientific evaluation of citizen science, developing resources for teaching and for developing projects, and a citizen science strategy 2020 for Germany. A strategy is needed because it helps focus on a problem and on thinking about how to solve it. There are many projects already happening in Germany, with museums and NGOs as well as conservation organisations, and lots of technologies enabling them. However, there is no common understanding of where to go, a framework for data use is needed, and there are risks of inconsistent communication to stakeholders. The way to open up the strategy is to involve a wide range of stakeholders in its development – public, politicians, funders, community. Wider engagement in developing a strategy requires time and resources, and there might be a lack of public interest. They ran 5 dialogue forums on different issues, with 400 people involved. They explore capacities in science – thinking of a science culture for citizen science, with rewards for scientists who take part – and a strong data infrastructure: data quality, validation, database management and other issues. Their vision: in 2020 citizen science is an integral part of German society, open in all areas of science and for all people, with a reliable web-based infrastructure. They will carry out an online consultation in the autumn and publish the strategy next year.

 


Notes from ICCB/ECCB 2015 (Day 3) – Citizen Science, engaging local knowledge and urban areas

The third day of the ICCB/ECCB 2015 (here are notes from first and second days) was packed with sessions about citizen science and local knowledge throughout the day (so this post is very very long!). It started with two sessions on citizen science / public participation in science that included the following talks:

Citizen science online: Producing high-quality data from camera trap images (Alexandra Swanson, University of Oxford) looked at a crowdsourcing process – the growing use of technology in conservation produces huge amounts of data, and images are especially an issue, as they are produced from camera traps, drones etc. and are difficult to analyse with computers. She described Snapshot Serengeti, in which many camera traps are used: half a million images a year. They teamed up with Zooniverse to set up a system for volunteer classification. The website allows people to classify without being experts, with no barrier to participation other than a web connection. They had 1m classifications in the first 3 days of operation. To get the best data possible, each image is sent to multiple people, and there is no ‘I don’t know’ option, to ensure that everything is used. Looking at multiple users, they can see the level of agreement between them: when people are unsure of what they’ve seen, there will be a high level of disagreement in their classifications. They aggregate results over multiple volunteers, giving certainty metrics (how confident the final answer is), and compare a subset with expert answers. People are 97% correct – agreement with experts’ analysis is very common. Accuracy varies by species – some species are more commonly missed (false negatives) or reported when they are not there (false positives). Rhinos, for example, suffer from high false positives (people want to see them). To improve classification, they analysed errors against total pictures and found that rare species are harder than others – and even moderate errors matter for these rare species, since finding them is the purpose of camera trapping projects. False negatives are harder to identify. Classification by multiple users allows the development of a disagreement metric, and wrong images show a high level of disagreement: below a 0.75 disagreement score, answers are 98.2% accurate. The metric can therefore be used to target volunteer effort.
They calculated dynamic improvement in quality as the number of classifiers increases – for more difficult images, you use more volunteers. The conclusion: volunteer and expert effort can be targeted dynamically to make the most of it. Zooniverse is increasing the potential for starting new citizen science projects.
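The aggregation scheme described above – multiple volunteers per image, a plurality answer, and a disagreement score used to route hard images to experts – can be sketched roughly like this. This is not the actual Snapshot Serengeti pipeline: the image IDs and species are made up, and only the 0.75 threshold comes from the talk.

```python
from collections import Counter

def aggregate(classifications):
    """Plurality vote over volunteer answers for one image, plus a
    disagreement score: 1 - (fraction agreeing with the winning answer)."""
    votes = Counter(classifications)
    answer, top = votes.most_common(1)[0]
    disagreement = 1 - top / len(classifications)
    return answer, disagreement

def triage(images, threshold=0.75):
    """Accept low-disagreement images; queue the rest for expert review."""
    accepted, review = {}, []
    for image_id, classifications in images.items():
        answer, score = aggregate(classifications)
        if score < threshold:
            accepted[image_id] = answer
        else:
            review.append(image_id)
    return accepted, review

# hypothetical volunteer answers for three camera-trap images
images = {
    "img1": ["zebra"] * 9 + ["wildebeest"],           # near-unanimous
    "img2": ["lion", "lion", "cheetah", "lion"],      # mostly agree
    "img3": ["gazelle", "impala", "dikdik", "oribi"], # heavy disagreement
}
accepted, review = triage(images)
print(accepted, review)
```

In the real system the number of classifiers per image is also adjusted dynamically, so that difficult images receive more volunteer votes before a decision is made.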

Corporate citizen science: A novel tool for tackling environmental sustainability? – (Jenny Cousins, Earthwatch Institute). Citizen science has multiple goals, and the interaction with the corporate sector is important for EarthWatch, as corporates have a critical role to play in different activities. Freshwater Watch is part of a wider HSBC programme, and brings together a global community of scientists and participants – 5,000 citizen scientists (mostly HSBC employees) in 32 locations across the world. The participants feel that they are part of a global project. Employees join for a day, and then collect data every 3 months at a local freshwater location (e.g. a pond) – they want people to be engaged and collect data, but also to think about their water footprint in daily life. The data become part of local and global datasets – there are aims to make the data widely available and to use it for academic publications and local management plans. EarthWatch evaluates how well people learn from the day, and their commitment to the project. They look at how people participate over time – including longer ‘stories’ from participants to see their journey. There are also signs of behaviour change outside work. The benefits of the partnership: funding for a global research programme, a unique dataset, and personal and corporate outcomes. There are challenges in quality control, multicultural aspects and continuity (only 21% continue to contribute data). The sustainability leadership citizen science programme is a 5-day immersive programme for senior staff – 1,000 senior managers, with 12,000 hours of data collection, who learn about climate change and how it is relevant to their work. The hope is that participants will integrate an understanding of climate change into their work. For participants it helps to connect to nature, and changes their perspective on life. Working with scientists is key to increasing knowledge and awareness, and to changes in behaviour at personal and corporate levels.
For the majority of participants, the programme led to the development and implementation of a sustainability strategy – reducing energy and waste, using renewable energy etc. The challenge they identified is how to support actions in the workplace, and they created an online community of practice to support such change. Corporate projects can be immersive and aimed at senior staff, or higher volume but with less engagement. Face-to-face training is key to commitment. Training does not always translate into action, so they are looking at the barriers and identifying the factors that will help make a better change. They also want to understand and measure the wider and longer-term outcomes.
Local people count: Using citizen scientists to monitor fruit bat populations (Tammy Mildenstein, Cornell College) covered citizen science in the Philippines, and how the data can be used. For her, citizen science has multiple goals – building capacity, increasing awareness, and drawing on local knowledge to improve the programme. In the Philippines people are involved in conservation research, from limited engagement to higher levels of monitoring – but only rarely in analysis. Depending on the question, can we trust citizen science data? Her case is about flying foxes, which are the largest bats; they are threatened, and Old World fruit bats are not covered well in the literature. Monitoring gives baseline trend information, identifies conservation priorities, and feeds back into conservation management. It also supports community-based harvest regulations (people hunt the bats), and local monitoring provides the ability to manage the population. We tend to monitor to identify population trends – the power to detect a trend depends on the population size, but we can influence survey effort and survey error, so the aim is to increase survey effort and reduce the error. From 20 years of data, they compared the survey data that was gathered; the error was calculated as the difference in mean count among groups of observers, and they also compared different levels of skill – from biologists and bat hunters to inexperienced helpers. Anyone with more than 4 surveys is classified as an expert, then forest workers (hunters or workers), and everyone else. The error among experts was 2.9%, among forest workers 7.0%, and among untrained counters almost 30%. To identify the impact of effort, they simulated a trend, then simulated counts assuming the error rates of the different participants. Looking at the untrained participants, they realised that over a longer period of monitoring, trend detection is no worse than for more experienced monitors.
Conclusions: error rates do not much affect trend detection – citizen scientists help by increasing survey effort (more frequent monitoring, and more spatial coverage too).
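To make the simulation step concrete, here is a toy version of that experiment – not the study’s actual analysis. The ~3% and ~30% error rates come from the talk; the decline rate, time span and number of runs are assumptions for illustration. The idea: simulate a declining population, distort each yearly count by proportional observer error, and check how often a fitted least-squares slope still comes out negative.

```python
import random

def simulate_detection(error_sd, true_decline=0.03, years=20, runs=500, seed=42):
    """Fraction of simulated multi-year surveys in which a declining trend
    (negative least-squares slope of counts on year) is still recovered
    despite proportional observer error of the given magnitude."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(runs):
        pop, counts = 1000.0, []
        for _ in range(years):
            # observed count = true population distorted by proportional error
            counts.append(pop * (1 + rng.gauss(0, error_sd)))
            pop *= 1 - true_decline
        # ordinary least-squares slope of counts against year
        mean_t = (years - 1) / 2
        mean_c = sum(counts) / years
        slope = sum((t - mean_t) * (c - mean_c) for t, c in enumerate(counts)) \
            / sum((t - mean_t) ** 2 for t in range(years))
        if slope < 0:
            detected += 1
    return detected / runs

# expert-level (~2.9%) versus untrained (~30%) count error
print(simulate_detection(0.029), simulate_detection(0.30))
```

With a sustained decline and enough survey years, even the noisy counts almost always yield a negative slope, which is the intuition behind the conclusion that error rates matter less than survey effort for trend detection.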

Essential Biodiversity Variables – and the emerging role of citizen science – Mark Chandler (EarthWatch) – the challenge is how we aggregate data to understand regional trends, but a lot of biodiversity data is limited, patchy or hidden, with limited capacity within nations to maintain programmes, limited integration, and weak links between data collectors and policy-makers. The GEO-BON aim is to meet user needs (e.g. REDD or CBD), and they suggest the creation of essential biodiversity variables, similar to the essential climate variables used by the IPCC. It’s a top-down effort, but it can work with bottom-up national and regional capacity building. Six classes have been suggested (Pereira et al., Science, 2013), and the challenge is to mix remote sensing data and citizen science data to get detailed information. They identified that many of the gaps in the variables can be filled with the help of citizen science. Participatory research can take several forms: community-based monitoring, crowdsourced citizen science (iNaturalist), and intensive, research-question-driven projects (EarthWatch projects). The recommendations include the need to build capacity to carry out citizen science projects, and a large-scale platform that will support data movement from local projects to global platforms such as GBIF. Citizen science can contribute to monitoring protected area management – from park staff to outside visitors. The key is how to make data discoverable and shared. He gave an example from the montane meadows of the Sierra Nevada, where there is significant funding to restore wetlands but only 1% of meadows have been studied, and remote sensing doesn’t give enough information – so the opportunity is to encourage people who like to visit the place to collect standardised data. EarthWatch is helping to develop a programme with 6 other organisations on this.

Several short talks followed:

What motivates citizens to take part in the management of an invasive non-native species? The case of tree mallow control on the islands of the Firth of Forth, Scotland – Marie Pagès, University of Aberdeen – she looks at volunteers who are involved in dealing with invasive species. It is important to understand motivation and keep people motivated. She studied islands where a plant threatens the nests of puffins, and a project worked well to control the plant. The survey showed that initial involvement was a combination of interest in the environment and having a nice day outdoors in an interesting place. Ongoing motives include seeing progress and learning about nature, but the social dimension was critical – being with like-minded people and interacting with project leaders. The implications for volunteering: meaning and attachment to place are important to engage volunteers and maintain engagement. The social aspect and being in nature (e.g. in places that are otherwise inaccessible) also matter.

Another short talk, Understanding the motivations and satisfactions of volunteers to improve the effectiveness of citizen science programs – Dale Wright, Birdlife South Africa – was about understanding the people who make monitoring possible. Ornithology has a long history of engaging the public, and they run a project to create a bird atlas. They created a psychometric instrument with 75 questions, looking at motivation and satisfaction but also at ‘ambassador potential’, with different tests. They used the environmental volunteer functions inventory (EVFI) and modified it – the volunteers want to link to nature, to contribute to nature conservation, and to see personal development. They put participants at the centre of the logic model and work around them. They developed an evaluation programme, and results of the research are shared back with participants.

The next short talk covered Citizen science in rural Africa: The conservation and monitoring of a threatened carnivore by Maasai hunters – Stephanie Dolrenry, Lion Guardians – she talked about working in rural Africa. They realised that lions are hard to study – they found more dead lions than live lions. They engaged with the warriors of the Maasai, who often kill lions, and asked them to collect data. The participants are illiterate, and were taught to collect the data with telemetry and GPS; the warriors helped in many ways. With the data that came out of the monitoring, they have the same number of researchers but now cover 4,000 sq km with the help of the citizen scientists. They discovered much more as a result of the work, and the lion population tripled; the warriors took ownership over the lions, and there are societal, social and conservation outcomes – many people can name a lion and relate to them. The number of lions killed decreased, and the reporting is 90% accurate (see the paper by Dolrenry, Hazzah et al. on citizen science in Africa). Engaging the warriors in the process of tracking lions gives them skills, job opportunities and prestige from using telemetry and GPS. They are paid to be guardians – once they take up the opportunity, they are compensated. They also get the participants together to report back, discussing what was seen and how to understand the outcomes.

Nature in your backyard – Citizen science in gardens – Silvia Winter, University of Natural Resources and Life Sciences Vienna – she looks at citizen science in urban gardens, an under-studied habitat that is difficult to access. A lot of people live in cities, and gardens cover a large surface (8% of Vienna). The aim was to record the biodiversity of target species – bees, butterflies, garden birds and hedgehogs. They carried out work with 16 schools and 428 pupils, with 309 garden interviews about management and structures, and got 132 gardens that are being monitored. They have tracking tunnels for hedgehogs that can be checked after 5 days for footprints. The information is then shared online on a dedicated site: igelimgarten.boku.ac.at. They found hedgehog tracks in 54% of the gardens.

The first long talk of the second part was Promotion of biodiversity in agricultural landscape via umbrella bird species, agri-envi scheme and citizen science project: Lessons from a central European country (Vojtěch Kubelka, Charles University, Prague) – combining biodiversity and agriculture. In agricultural spaces there is intensification and a reduction in birds, butterflies and other species. Agri-envi schemes promote biodiversity, and they wanted to promote an umbrella bird species (the Northern Lapwing). They designed a model of data collection that involves citizen scientists who are interested in birds, together with farmers’ involvement. After 2 years of monitoring they have an online system for collecting observations from the field. The project has been running since 2012 and has grown to over 3,500 observations from 1,248 localities with 222 observers, and because the birds nest on arable land, they designed suitable agri-envi schemes in suitable areas, covering 11,420 ha. The potential of the project is great – it was successful, the selection of the umbrella/flagship species worked, and the agri-envi project design is promising.

Predicting impacts of forest management and climate change on dead-wood dependent fungi distributions using citizen science data and a range of modeling approaches – (Louise Mair, Swedish University of Agricultural Sciences) – the overall aim is to compare the future viability of species among different forest management scenarios, considering both the use of the forest and its biodiversity. They evaluate specific fungi that relate to the age of the forest. The recording is presence-only, so there is a need to understand both the distribution of the fungi and the behaviour of the volunteers. They evaluated how good the observers are, and some of them are very good. She modelled the data with different approaches – GLM, MaxEnt, PA/PO and occupancy models – using environmental covariates, species observation data (presence-absence from recorders who are especially good, presence-only in other cases), background data, and bias layers, including information about where populations are. The different models agree that there is a difference between the two scenarios, but disagree strongly on the scale or rate of change – the models produce different projections.

Can citizen science yield conservation outcomes? A framework describing pathways to conservation – (Tina Phillips, Cornell Lab of Ornithology) looking at conservation within large-scale citizen science projects. The field of citizen science has grown, with lots of special issues, conferences etc. There is a need to measure and document outcomes and impacts – scientific outcomes are clear, education outcomes are also clear, but what about conservation outcomes? The problem is that the usual assumption is site management -> conservation outcomes, but this is not how large-scale projects work. Citizen science works indirectly – supporting research, education, policy or local community activity. They use the theory of change from Shirk et al. 2012, a logic model that expects outputs, outcomes and impacts. A results chain is a way to describe conservation goals – understanding strategies and outcomes. The chains are causal, and you work backwards from the results using if-then statements; this also helps identify intermediate outputs and outcomes, and links outcomes to activities to explain how and why things happen. The COASST project is a large-scale survey of dead birds on coasts, with the goal of improved bird populations; they can articulate the intermediate stages: e.g. if training material is provided, if sufficient volunteers are trained, and if data is collected by volunteers, then you get the data – but if it is not validated and accepted, the chain can fall apart, and it is useful to articulate that. A similar analysis was carried out for the Monarch Larva Monitoring Project, where they developed pathways that led to success with the people managing the project. For eBird, which is used successfully in different projects, the Cornell Lab evaluated what people do with the data – from 150 answers from users of the data, they identified site and habitat management, habitat protection, and even law & policy and species management.
There are many pathways to conservation, and a need to align goals to outcomes and activities. Data quality is paramount. Implementation needs data feedback, data transparency, and local stakeholder involvement – and, finally, useful evaluation as the project goes along. Many of the processes are not linear: pathways are messier in reality.

Another set of short presentations started with Online conservation – Updated spatial information on threatened plant species in Israel – (Ofer Steinitz, Israel Nature and Parks Authority) – information about threatened species. They have a Red Book for Israeli plants from 2007 and 2011, but they need to monitor dynamically. They developed an interactive website that encourages people to collect information about the plants and do different things with it, such as querying it and visualising maps. People share information from smartphones, or images that they collected, and experts vet the information. Absences can also be reported in the system. Since the launch in May 2015, they have received 74 new observations of 56 threatened species, including a rare observation. There are 3,000 observations being explored by experts, and they see themselves progressing to an online Red Book update, with a process of expert reviews (redlist.parks.org.il).

Citizen science projects in environmental NGOs – Bridging the gap between scientific standards and civil engagement – Eick von Ruschkowski, Naturschutzbund Deutschland (NABU – Nature and Biodiversity Conservation Union) looks at processes beyond citizen science – arguing that it is a controversial buzzword, with citizen science sometimes seen as second-rate science or as providing cheap field research assistants. Neither view is correct. There is a debate in Germany about the quality of the data, and there are varying levels of citizen science in terms of how people should be engaged. There are open questions on motivations; many projects are local and regional, where there is a lack of research funding, and we need to identify the link between age, taxonomic knowledge, and digital affinity. We need to look at citizen science from the NGO perspective – with mutual respect between professional and non-professional scientists, and attention to managing volunteers. Users also want control over the data, so there is a need to agree about data ownership and how it is used.

These two sessions were followed by a lunchtime workshop on the role of citizen science and collaborative research in conserving biocultural diversity (Sylvie Blangy), on how to improve citizen–researcher dialogues. It explored five projects: Orchidées sauvages de France, Observatoire des Saisons, Sauvages de ma rue, Aborinet and the Ewé relict forest. The session was organised by GDR PARCS, which works on Participatory Action Research and citizen science – trying to involve citizens in all stages of the research project. Camila Leandro spoke about Tela Botanica, an NGO in Montpellier that reunites French-speaking botanists. It is a network where everyone can create projects and develop them – data is shared openly. They have a team of 10 employees – coordinators and IT experts – and work with a wider network. They have FloraData and eVeg for recording, an online notebook (Carnet En Ligne), identification by peers (IdentiPlante), and they coordinate citizen science projects. Sauvages de ma rue is a way to study urban plants through an easy process of data collection. Observatoire des Saisons started in 2006 and is about phenology, with more than 10,000 records, and they link it to news about climate change, biodiversity laws etc. They also provide visualisation tools to explore the data, and a system of newsletters to update people on what is happening with the data (Lettre de Printemps). Philippe Feldmann presented Orchisauvage – monitoring wild orchids in France, with 3,000 people involved. The website was developed to support an NGO – it is a new approach to interactions between science and society, open to people from all walks of life, and it also includes a mobile application; the data is validated by experts. There are many tools, including the ability for participants with limited knowledge to export maps. Since February 2014 they have received 150,000 records from 1,500 registered users, with 14K images. There is a high level of commitment among observers.
Alfred Houngnon from Benin showed how to involve local communities in collaborative research. Benin is part of the Dahomey Gap, a biodiversity hotspot; the Ewé relict forest is emblematic in Benin and is recognised as important. The forest area shrank from 571 ha in 1987 to 364 ha in 2007. They created farmer field schools and shared results with communities in the area. The facilitators are native to Ewé, and the aim was to develop a common view of the goals of the project – the project is deliberately bottom-up. They identified over 250 plants previously unknown in Benin and contributed to conservation. The final case looked at community-based indigenous tourism, starting in 2006, with 2,000 copies sold – and then moving to a collaborative website that connects the communities, exchanges lessons learned, shares information and improves the locations as destinations. The Aboriginal-ecotourism website created the network.

The workshop included an exercise to identify problems seen in the presentations ('yes, but') as well as positive comments ('yes, and') on the different activities, trying to improve the systems that are used in citizen science.

The symposium Power to the People? Valuing and Integrating Local Perspectives in Conservation was organised by Emily Woodhouse and included early-career researchers. A core question was how to address inequalities in conservation projects and how to engage and listen to local voices.

Conservationist vs local voices: Telling a story of conservation and conflict from different perspectives – Jevgeniy Bluwstein, Copenhagen University – is doing work in Tanzania, in Kakoi – village land between two protected areas, in a wildlife management area. There is a conflict between a private investor in a wildlife sanctuary and the community, who want to access the land for cattle grazing. Community concerns about conservation include land ownership, local participation in decision making, and others. The conflict started when the investor signed a contract with the community-based organisation (CBO) and assumed the contract settled the relationship, but negotiation by representatives is not enough – the villagers would not have agreed to the contract had they known they would not be allowed to graze. There is also the issue of whose land it is: the investor thinks it is not part of Kakoi, while the local people see it as their own land, so the investor needs to come and ask their permission. The investor is treated as external and without rights. There is also the issue of participation – the investor considers negotiation with the CBO sufficient, and claims that 'the whole community supports the land-use concept' because he talked to a representative body, while villagers do not feel they were part of the decision. In terms of environmental protection, the investor sees a need to rehabilitate the land and regards cattle grazing as misuse, while local people want to use it sustainably and need it in the dry season – a more fundamental disagreement about whether grazing is a sustainable practice or not. In terms of rural development, the investor argues that he brings tourism that will create income locally and supports the local school, while the local people consider that they do not benefit and think the support goes to other villages, not to them.
So we have value systems that lead to concepts, and these are based on facts – but there is disagreement about how to understand those facts.

Understanding locally defined human well-being to measure impacts of conservation projects on the northern plains of Cambodia – Emilie Beauchamp (Imperial College London) – there are linkages between wellbeing and conservation. Conservation needs to understand perceptions of wellbeing and contextualise its indicators, but measuring wellbeing is difficult. To define wellbeing, she looked at existing studies – 'Voices of the Poor' and work from the University of Bath. Five elements emerge: physical environment, human capital resources, social resources and relationships, security, and autonomy and freedom of choice. Material and natural resources and human resources are being addressed in conservation, but the others are not being dealt with. Her work in north Cambodia looks at 3 villages that are impacted by land concessions. She carried out in-depth qualitative interviews with 56 people, exploring 'what does it mean to have a good life', and evaluated cultural salience, assuming that things mentioned earlier are higher priority. Natural and human assets top the list (agricultural land highest), with relationships also being important. Activities that affect fairness, or the risk of losing land, influence views of wellbeing. Conservation projects need to be sensitive to land and natural resources, as changes can have a high impact on wellbeing.

Communities count: The role of local people in ecological monitoring – Samantha Earle (Imperial College London) talked about one specific approach. Monitoring is a way to measure the state of a system so it can be used in decision making, and it involves many stakeholders, including local people. Monitoring involving local people is seen as a way to allow them to engage with management decisions. Involving local people improves their understanding, integrates their traditional knowledge, and can bring economic benefits. She reviewed 42 papers on participatory monitoring; among them there are many feasibility studies and a whole range of goals and objectives. The things measured include anthropogenic activities (e.g. logging) and information about species, but also food security etc. Common data-collection methods are line transects and catch data, among many others. The approach is seen as cost-effective, allowing frequent data collection, capturing and enhancing local traditional ecological knowledge, building capacity and empowerment, and creating awareness. Limitations include quality-control issues, internal conflicts from unequal distribution of benefits, concern about misuse of information, and reliance on external support. There are also social impacts – justice and ethics: the right to have a say, to receive benefits, and to understand the projects. Only a few papers discuss the impact of the projects on local people, and even then only anecdotally. There is evidence of increased awareness within communities, promotion of community discussion, and individual and community empowerment. Open questions remain: what are durable long-term methods? How should we measure social impacts? How do we maintain motivation and enthusiasm?

A point from the panel discussion: if the local community cannot see a reason to carry out conservation, they will not participate, and there is no value in pursuing it.

An afternoon symposium was dedicated to Creating Natural Connections In Unnatural Habitats Through Citizen Science and included:

Citizen science as a potential tool to prevent the extinction of experience – (Assaf Shwartz, Technion) – the biodiversity crisis is a result of human action, and the solution largely depends on the actions of individuals, and on public support to encourage governments to act. Part of the argument is that the biodiversity message is too complex, so it is difficult to link it to action (Shwartz et al. 2012). The literature also suggests that conservation biology does not use the right language (hence the ecosystem-services framework). Then there is the concept of the 'extinction of experience' – the lack of experience of nature in urban areas, and missing these experiences when living in cities. Measuring this issue is complex; he links the natural environment to affective and cognitive outcomes, currently using a survey in an urban and a rural area, applying the framework of Nisbet et al. 2009 and checking how many common species people can identify. People in the rural area are more connected to nature and give more correct answers, but urban people claim to know more species. On enhancing biodiversity experience in different places: it is easy to increase biodiversity in urban gardens through simple interventions (Shwartz et al. 2014), yet on the social side there was no difference in people's perceptions of the biodiversity. Urban dwellers are less connected to nature, and the health and wellbeing benefits of such connection are missing. The challenge is to prevent the extinction of experience – increasing the biodiversity of urban areas is not enough, since people do not notice it, so we need to increase positive interactions between people and biodiversity. Citizen science is a method for increasing knowledge, and he is exploring how participating in citizen science changes attitudes. They worked with 316 fourth-grade children, comparing classroom teaching to citizen science activities.
In the post-intervention results, there was an increased ability to recognise birds, and a change in attitude.

What’s in your backyard? Citizen science camera trapping as a lens to study mammal diversity in classrooms – Stephanie Schuttler – children have all sorts of natural connections that lead to an interest in science, but this is difficult in urban environments. In the Students Discover project there are lesson plans that use camera traps (the eMammal programme); these are used as a way to record the different animals that describe a place – as evidence. Users upload the images to a specially designed website, and the photos are reviewed by experts to ensure that classification is correct. Projects are now increasing in urban areas; people's backyards are important habitats, and these places were traditionally ignored by scientists. The teachers are trained to be experts in the field, which helps increase confidence, and the teachers have a lot of fun – once they know it is for real science, they take it seriously. They noticed that coyotes are becoming more urban, which they see in the Raleigh area. They are now setting up a wider programme – eMammal International – in India, showing that roads used by people in the day are also used by tigers at night. There is increased knowledge of natural history, and they found that they created advocates.

Take back the block: An urban citizen science program – (Amanda Sorenson, Rutgers), with Rebecca Jordan, working on socio-ecological systems. The aim is to increase resilience and build capacity in the community to respond to changes. A resilient community can keep critical functions in times of uncertainty, and can monitor, cope, adapt and thrive. So they created collaborativescience.org, a website that enables citizen scientists to join and do place-based work while having access to resources at other levels. An example is the Virginia Master Naturalists – individuals with capacity for monitoring and advocacy – retired, wealthy, using it for conservation, and focusing on things such as stream protection. They recruited a scientist, set up a monitoring programme to check for sources of pollution, checked different results, and secured $200K for stream bank remediation. Another platform is Mosquito Stoppers in West Baltimore, in a community that is not engaged in local decision making and is also underserved. They worked in areas with low socio-economic status and explored whether unmanaged container habitats support greater mosquito production. They developed a programme to monitor change over time, using adaptive strategies to remove the trash, and developing capacity for action and choice. Citizen science is an opportunity to provide voice and agency – they worked with a whole range of participants: 74% said that they are bothered by mosquitoes every day, and 60% reported changing behaviour to avoid being bitten. After participation in citizen science, the participants believed that personal and community actions would have broader impact. We see in citizen science a change in agency – citizen science has a role in agency, epistemic practice etc.

Short and long term consequences of urban citizen-science projects to individual connection to nature – (Anne-Caroline Prévot, CNRS and the NHM), concerned with the extinction of experience and environmental generational amnesia (Kahn 2002), using the model of pro-environmental behaviour of Stern (2000, J. Social Issues). The theory of planned behaviour accepts habit and routine as major factors. Citizen science can be used to improve connection to nature – biodiversity representation, environmental values, in-group social identity, practical knowledge, and habits and routines. She used data from the Vigie-Nature programme – a questionnaire to the volunteers with 1,723 responses, 30 in-depth interviews using an anthropological approach (Cosquer et al. 2012, Ecology & Society), and finally work with 400 pupils using a questionnaire and drawings in school. Experts said that the volunteers didn't learn much, but the volunteers said that they had learned. In the voluntary butterfly monitoring, people volunteered because of confidence in science and the museum, but showed high interest in and knowledge of butterflies, and changed gardening practices. Within the school programme, they asked students to draw the urban garden they would dream of, and checked for environmental values and outdoor activities, using the drawings to count natural elements, human presence and built elements. The study showed that participation in citizen science makes nature more present in drawings, as long as the children also had outdoor extra-school activities. In the short term there was no change in environmental values.


Notes from ICCB/ECCB 2015 (Day 2) – Citizen Science data quality

Poster session at ICCB/ECCB 2015

The second day of ICCB/ECCB 2015 started with a session that focused on the use and interpretation of citizen science data. The symposium Citizen Science in Conservation Science: the new paths, from data collection to data interpretation was organised by Nick Isaac and included the following talks:

Bias, information, signal and noise in citizen science data – Nick Isaac – the information content of a dataset is question-dependent: it depends on what was captured and how, as well as on survey effort. Data comes in different ways from a range of people who collect it for different purposes. Biological records are unstructured – they don't address a specific question, so we need to know how they came about; information about the data-collection protocols is important for making sense of the data. If you are collecting data through citizen science, remember that the data will outlive the project, so you need good metadata and data standards to ensure that it can be used by others. There are powerful statistical tools that we should use to model the bias rather than try to avoid it, and a little metadata goes a long way, so it is worth recording.

Conservation management prioritization with citizen science data and species abundance models – Alison Johnston (BTO/Cornell Lab of Ornithology) – species distributions are dynamic and change with the seasons. This is especially important for migratory birds – conservation at specific times (wintering, breeding or migrating). The BirdReturns programme in California floods rice fields to provide waterbird habitat, and is effective and not hugely costly. However, dynamic conservation needs precise information. Citizen science data can help with occurrence models, but they wanted to estimate abundance, as this helps to prioritise activities. They used eBird data: in California there are 230,000 checklists, but there are biases in the data – variable effort and expertise, and biases in sites, seasons and times. There are also different relationships with habitat, and it is difficult to identify extreme abundance. They used Spatio-Temporal Exploratory Models (STEM), which allow modelling with random grids – averaging across cells that have different origins (Fink et al. 2010, Ecological Applications). Using the model, they identified areas of high activity – especially with the abundance model. Of the two models, the abundance model seems more suitable for using citizen science data in dynamic conservation. The results were used with a reverse auction to maximise the use of the available funds to provide large areas of temporary wetland.
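The STEM idea – fitting many simple local models on randomly shifted grids and averaging their predictions so cell boundaries don't leave artefacts – can be sketched as follows. This is a toy illustration only: the data, the 20-unit cell size, and the per-cell mean used as the "local model" are all invented, and the real eBird STEM implementation is far richer.

```python
import random

random.seed(42)

# Toy observations: (x, y, count) checklists scattered over a 100 x 100 region.
obs = [(random.uniform(0, 100), random.uniform(0, 100),
        random.randint(0, 10)) for _ in range(500)]

CELL = 20.0  # grid cell size (assumed)

def cell_mean_predict(origin_x, origin_y, qx, qy):
    """Local model: mean count in the grid cell (with a shifted origin)
    that contains the query point; None if the cell holds no data."""
    cx = int((qx - origin_x) // CELL)
    cy = int((qy - origin_y) // CELL)
    vals = [c for (x, y, c) in obs
            if int((x - origin_x) // CELL) == cx
            and int((y - origin_y) // CELL) == cy]
    return sum(vals) / len(vals) if vals else None

def stem_predict(qx, qy, n_grids=25):
    """Average the local predictions across randomly shifted grids, so the
    estimate does not depend on where any single grid's boundaries fall."""
    preds = []
    for _ in range(n_grids):
        ox, oy = random.uniform(0, CELL), random.uniform(0, CELL)
        p = cell_mean_predict(ox, oy, qx, qy)
        if p is not None:
            preds.append(p)
    return sum(preds) / len(preds)

estimate = stem_predict(50, 50)
```

The averaging step is what makes the ensemble smooth: each random grid partitions space differently, so a point near one grid's cell edge sits comfortably inside another grid's cell.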

Citizen sciences for monitoring biodiversity in habitat structured spaces – Camille Coron (Paris Sud) described a model that estimates abundances for several species, using several datasets collected under different types of protocols in citizen science projects – some with strong protocols and some without. They assume that space is covered by different types of habitat, but the habitat itself is not known. They looked at 34 bird species in Aquitaine: two datasets come from precise protocols and the third is opportunistic. They developed a statistical model to estimate the data using a detection probability, abundance, and the intensity of observation activity; in the opportunistic dataset the effort is not known. The model brings important gains when species are rare, when the species is hardly detected in the data, and when there are many species. By combining with the robust-protocol projects, the estimation of species distributions is improved.

Can opportunistic occurrence records improve the large-scale estimation of abundance trends? – Joern Pagel – there is a lack of comprehensive data on large-scale variation in abundance, and he described a model to deal with it. The model is based on the assumption that population density is a main driver of variation in species detectability. They tested the model on UK butterfly data, combining very detailed local transects (140, with weekly monitoring) with opportunistic presence records (over 500K records) on a 10×10 km grid. The transects were used to estimate abundance (described in a paper in Methods in Ecology and Evolution). They found that opportunistic occurrence records can carry a signal of population density, but one needs to be careful about the assumptions, and there are high uncertainties associated with it.
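The core assumption – that detectability rises with local population density, so the frequency of opportunistic records carries an abundance signal – can be illustrated with a toy simulation (all numbers invented; this is not Pagel's actual model, just the mechanism it relies on):

```python
import math
import random

random.seed(1)
alpha = 0.05       # per-individual contribution to detectability (assumed)
n_surveys = 200    # opportunistic visits per site (assumed)

# True local abundances at ten sites (invented numbers).
abundances = [1, 2, 5, 8, 10, 15, 20, 30, 40, 60]

record_freq = []
for N in abundances:
    # Detection probability saturates with density: each of the N individuals
    # is independently encountered, so p = 1 - exp(-alpha * N).
    p = 1 - math.exp(-alpha * N)
    detected = sum(random.random() < p for _ in range(n_surveys))
    record_freq.append(detected / n_surveys)

# Sites with higher abundance yield occurrence records far more often -
# this is the density signal the model tries to exploit, and also the
# reason the mapping is uncertain once p saturates near 1.
```

Note how the signal flattens at high abundance: once nearly every visit produces a record, record frequency can no longer distinguish 40 individuals from 60, which is one source of the "high uncertainties" mentioned in the talk.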

When do occupancy models produce reliable inferences from opportunistic data? – Arco van Strien (Statistics Netherlands) – Statistics Netherlands is involved in butterfly and dragonfly monitoring, from transects and also from opportunistic data. Opportunistic data is unstandardised, and you can see artificial trends if effort varies over time – so the idea was to correct for changes in recorder effort using occupancy models. They coupled two logistic regression models – one for the ecological process and one for the observation process. They wanted to explore the usefulness of opportunistic data with occupancy models, using a Bayesian model and evaluating the results against standardised data. They looked at four inferences: phenology (trying to find the peak date in detection), national trends in distribution, species richness per site, and local trends in distribution. For the peak date, they found a 0.9 correlation between opportunistic and standardised data. For national trends there is also a strong correlation – 0.8/0.9. Species richness also correlates at over 0.9, but for local trends the correlation drops to 0.4-0.5 for both butterflies and dragonflies. The conclusion: opportunistic data is valuable, but one needs to be careful about the inferences drawn from it.
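The coupling of an ecological process (is the site occupied?) with an observation process (given occupancy, is the species detected on a visit?) is the heart of an occupancy model. A minimal sketch, assuming constant occupancy and detection probabilities and a crude grid-search maximum likelihood rather than the Bayesian machinery used in the talk:

```python
import math
import random
from collections import Counter

random.seed(7)
PSI, P = 0.6, 0.3          # true occupancy and per-visit detection (invented)
n_sites, n_visits = 400, 5

# Simulate: ecological process draws occupancy, observation process
# draws detections only at occupied sites.
detections = []
for _ in range(n_sites):
    occupied = random.random() < PSI
    hits = sum(occupied and (random.random() < P) for _ in range(n_visits))
    detections.append(hits)

counts = Counter(detections)   # number of sites per detection total

def log_lik(psi, p):
    """Occupancy likelihood: a never-detected site is either truly empty,
    or occupied but missed on all visits - the key term that separates
    low occupancy from low detection effort."""
    ll = 0.0
    for d, k in counts.items():
        if d > 0:
            ll += k * (math.log(psi) + math.log(math.comb(n_visits, d))
                       + d * math.log(p) + (n_visits - d) * math.log(1 - p))
        else:
            ll += k * math.log(psi * (1 - p) ** n_visits + (1 - psi))
    return ll

# Crude grid-search maximum likelihood over (psi, p).
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: log_lik(*ab))
```

Repeated visits are what make the two processes separable: with a single visit per site, occupancy and detection are confounded, which is exactly why varying recorder effort corrupts naive trend estimates.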

Making sense of citizen science data: A review of methods – Olivier Gimenez (CNRS) – his interest is in large terrestrial and marine mammals, which are difficult to monitor in the field, so he is exploring whether citizen science data can be used for that. He looked at papers involving citizen science, specifically those that analyse the data, wanting to build a taxonomy of methods used to handle citizen science data. He identified five approaches. First, filtering and correction – you know, or assume you know, the bias and try to correct it, e.g. list-length analysis; these methods are highly sensitive to the specific biases assumed. Second, simulation – simulate the bias and check how your favourite method behaves given that bias. Third, regression – use relevant variables to account for biases: ecological variables to build predictive models, plus observer-bias variables such as distance from cities. Fourth, combination – combine citizen science data with data from standard protocols so the data can be understood and corrected. Fifth, the occupancy approach – correcting for false negatives and for time/spatial variation in detection; it can also be extended to deal with false positives and with multiple species. Conclusion: we should focus more on the citizens and describe them in the models – we need to understand more about them (e.g. record data about the people who collected it), and social science has a major role to play.
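The third (regression) approach – modelling the bias explicitly by including observer-effort covariates alongside ecological ones – can be illustrated with a toy logistic regression. The coefficients and data are invented, and the fit uses plain gradient ascent rather than any particular package:

```python
import math
import random

random.seed(3)

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

# Simulate presence/absence records whose probability depends on habitat
# quality (the ecological signal) AND on distance from a city (observer
# bias): records are rarer far from cities even where the species occurs.
data = []
for _ in range(1000):
    habitat = random.uniform(0, 1)
    dist_city = random.uniform(0, 1)
    p = sigmoid(-1.0 + 2.0 * habitat - 1.5 * dist_city)  # invented coefficients
    data.append((habitat, dist_city, 1 if random.random() < p else 0))

# Fit a logistic regression with both covariates by gradient ascent, so the
# bias is absorbed by its own term instead of contaminating the habitat effect.
b0 = b_hab = b_dist = 0.0
lr = 2.0
for _ in range(4000):
    g0 = g1 = g2 = 0.0
    for h, d, y in data:
        err = y - sigmoid(b0 + b_hab * h + b_dist * d)
        g0 += err
        g1 += err * h
        g2 += err * d
    n = len(data)
    b0 += lr * g0 / n
    b_hab += lr * g1 / n
    b_dist += lr * g2 / n
```

Once fitted, predictions for mapping can be made with the bias covariate held constant (e.g. `dist_city = 0`), which is the point of separating the two kinds of variables.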

 

In the session Paths for the future: building conservation leadership capacity, Krithi Karanth (Wildlife Conservation Society) looked at 'Citizen scientists as agents for conservation'. In the 1980s WCS started monitoring tigers, and some people who were not trained scientists wanted to join in. What drew people in was an interest in tigers, and that was the start of their citizen science journey: 5,000 km walked along 229 transects in the forest. It started with ecological surveys across entire regions, from charismatic to rare species. Current projects have 40-50 volunteers in amphibian and bird surveys outside protected areas, and the volunteers identify rare species. As the projects grew, so did the challenges – e.g. around human-wildlife conflict – and that led to over 5,000 villages and 7,000 households being surveyed in their area. Through the fieldwork, people understand conservation better. Another project recruited 75 volunteers to document tourism impact, and the results were used by the Supreme Court in deciding how to regulate tourism. They have over 5,000 citizen scientists, with an active group of 1,000 at any moment. The impact over 30 years: over 10,000 surveys in 15 states in India, with over 250 academic publications and 300 popular articles. Many of the people who volunteered evolved into educators, film-makers, conservationists and activists, sharing information through blogs, articles and films, or becoming academics. Recognition is also increasing in graduate programmes, with professional masters programmes. Some 10% of the volunteers become fully committed to conservation, but the other 90% are critical to wider engagement in society.

 


The privatisation of cities’ public spaces is escalating. It is time to take a … – The Guardian

Oliver Dawkins, doing degree work at UCL's Centre for Advanced Spatial Analysis (CASA), created a “Pops Profiler” that enables us to see how this plays out in the Square Mile. Publicly available data on OpenStreetMap (OSM) do not indicate a single ...

Notes from ICCB/ECCB 2015 – Traditional ecological knowledge, Conservation 3.0 & citizen science

These are my notes from the first day of the International Congress on Conservation Biology (ICCB) & the European Congress on Conservation Biology (ECCB) in Montpellier.

I took notes from some of the talks in three sessions: traditional knowledge, 'Conservation 3.0' and the citizen science posters.

In the session on Traditional Knowledge and Conservation, noteworthy talks included:

The role of tribal colleges in preserving traditional ecological knowledge and biocultural diversity – Teresa Newberry (Tohono O’Odham Tribal College in the US) – the tribal colleges and universities (TCUs) in the US represent diverse communities and cultures. The tribal colleges have a mission to preserve the culture of the local nation and engage with their community, so TEK is part of their education. Language is critical to understanding biodiversity: indigenous groups speak about 85% of the world's languages and take care of 80% of the world's biodiversity (Nelson 2015), so this is important. There is a link between biodiversity and language diversity: local languages encode local knowledge, specifically adapted to their local area. 40% of languages are at risk of disappearing, and this loss therefore represents a monumental amount of TEK. Looking more closely, language encodes worldviews and traditional knowledge systems – the evolution of one group of people in a specific place – and also encodes practices and rules. It includes many layers of meaning and relationships between living things. For example, in the Tohono O'Odham language there is a term that makes you notice that you don't collect a flower until a hummingbird has collected the nectar – and that is embedded in the way you talk about local ecology. Teresa developed a local calendar that helps link phenology to specific language and events. Another tool she developed is the TOCC Plant Atlas – linking plants with audio that states the traditional name in addition to the written form. There are multiple values in traditional knowledge: unique multi-contextual perspectives; time-tested adaptation and mitigation strategies for environmental change; and deep, local knowledge of place.

‘Manngem Thapnee’: The crocodile worship ritual of an agrarian community of the Indian state of Goa, and its conservation context – Manoj Borkar (Goa University) – Goa is coastal, and its crocodiles are protected under the IUCN Red List. The current trend is for the crocodile population to increase; there are groups in swamps and some in freshwater areas, and there are tourism activities to see the crocodiles. In the contemporary scenario, pressures of overexploitation of sand from the riverbed, use of canals for shipping, unregulated backwater tourism, and fishing make it difficult to protect the population. During the Portuguese control of Goa (450 years ago) there were reports of an abundance of crocodiles. Crocodiles are viewed as demonic but also have divine status. Within the indigenous tribal culture there is a crocodile worship ritual in which they create a crocodile from clay; they want to appease the crocodile to avoid inundation of the fields by water (the crocodile is seen as the link to water sources) – the practice takes place in December. The veneration translates into protecting the crocodiles and can be seen as an example of integrating local practices in conservation.

Augmenting survey data with community knowledge to inform a recovery strategy for an endangered species in Canada: Identifying important areas of habitat for Peary caribou – Cheryl Johnson (Environment Canada). The aim is to develop a recovery strategy for the caribou – to maintain a healthy species distribution and keep their range – they are a very wide-ranging species, migrating over hundreds of kilometres. The process started with identifying locations, then the amount of habitat that needs to be protected, and then the very specific type of habitat. This means working at different scales. They collected survey information from scientists and integrated it with information from local communities about where they had seen the caribou. Once they had identified 3 main seasons in the migration, they integrated these into their spatial model. When comparing the survey information with the community information, the community had a much more holistic and complete view of where the animals had been seen. The modelling process included consulting with both scientific experts and community members with knowledge of the caribou, and that helped in identifying the most relevant model. The TEK was crucial in eliminating spatial and temporal biases in the scientists’ survey data.

The session Conservation 3.0 was opened by Alex Dehgan explaining what it is about: technology, behavioural interventions and financial innovations are changing conservation. After 30 years of conservation biology there has been an increase in protected areas, but extinction rates are very high and we still have major challenges. Population growth will require 70% more food and more intensive agriculture, especially with the increase in meat consumption. Wildlife trade is increasing and we don’t have enough financial resources. Conservation biology is sometimes technophobic, but how can we use new opportunities to deal with these issues? Maybe we should learn from other areas – e.g. the change from ‘tropical medicine’ to ‘global health’ – by widening the tent to more people involved from more areas of research. We can have conservation technology & engineering: from 3D printing to cellphones, we can consider connected conservation and the use of multiple sensors, or use synthetic biology. There is also a need to consider how to use ideas from behaviour change, marketing & conservation – altruism doesn’t work, except as a last resort. Financial innovations – maybe environmental impact bonds, conservation finance and other tools. Think of design under constraint, just like the iPhone. We can also consider crowdfunding – $16.2 billion, compared to the NSF’s total budget of $5.8 billion. There are other ways to harness the crowd – from ideas, to creativity, to funding.

Paul Bunje – XPRIZE Foundation, considering incentivizing innovation for conservation. Problems are increasing exponentially while solutions are only increasing linearly, so we must try to find solutions at huge scale. Open innovation takes lots of ideas, internal and external, and tries to find tools from all sorts of areas. There are also new opportunities for identifying new sources of funding. The benefits of prizes/challenges: they solve important problems, set aspirational goals – a moonshot – create novel partnerships, and inspire with new ideas. There are all sorts of methods in open innovation, from incentive prizes to innovation networks. Prizes continue to increase – flexibility, openness, but also new ways in which stories are being told.

Asher Jay – creative conservationist. She explores the linkage between science and stories. Humanizing science is not about introducing a bias, but about linking those in the know with other people. Content needs to be contagious and to enable the individual – making individual impact matter in conservation. Start from the facts and figures, then think about how the story evolves – what is the point, how to create a protagonist/focus, which elements will be included, what the emotional triggers are – the audience needs to consume the science and then act on it. That can be done by using existing signs, symbols and icons. There is also the issue of foreground and background to help structure the understanding. A lot of the campaigns she created are about ‘stating the obvious’ to people, as they are not always aware of it. Design for the digital age means work needs to be shared – open source images mean that they are used in many ways (including tattoos).

Ted Schmidt – covered Paul Allen’s philanthropy through ‘Vulcan’, which tries to bridge technology and conservation science. Some of the focus areas include illegal fishing, wildlife monitoring and management, but also wildlife surveys and databases. They carried out a great elephant survey – flying over 20 countries to count elephants – and are working with the IUCN to ensure that the data live on. Shah Selbe suggested the idea of the ‘Internet of Earth Things’ – the ability to understand how things change in real time. Technology is a tool that can help, but there are no silver bullets: we need to be aware not of the drone itself but of what the data is used for. SMART – the Spatial Monitoring and Reporting Tool – was created to understand conservation areas, and is a good model for solving problems. Technology needs to be designed for the context – it needs to be shown that it can be deployed over time and operate reliably.

Lucas Joppa – the impact that people have on the planet defines the Anthropocene, and in the information age we have a combination of some 50 billion linked objects. Leveraging information technology for conservation biology seems obvious to those who are interested in technology. Examples include empowering the crowd to collect information and identify species (iNaturalist), Instant Wild for working with camera traps, and GPS tags in the environment – animals are also involved in sensing the environment for people. Mongabay has a section on its Wildtech area. On engaging with industry – there are various partnerships between the technology industry and conservation, but requests for help are backwards: people don’t ask for the resource of working with the talented engineers that are part of these organisations. If we ask in the right way, we can get donations of time and money from the engineers.

In the poster session, there was a set of posters about citizen science; some of the ones that I explored are:

Understanding the environmental drivers of recording bias in citizen science data across Sweden – Alejandro Ruete looked at biases in the collected data, developing an ignorance index that lets you evaluate how much you would know about a location.
Earning your stripes: Does expertise aid the ability to match bumblebee images in identification guides? – Gail Austen-Price compared the identification abilities of experts and non-experts, showing that the ability to match images is good regardless of expertise, but that experts are more careful and are willing to say when it is not clear how to differentiate.
Utilizing citizen science and new technology to improve the Palau national bird monitoring program – Heather Ketebengang showed how in Palau they have combined information from experienced and trusted birdwatchers (through systems such as eBird) with experts’ surveys to create a more comprehensive picture of their bird population.
Maximizing mangrove forest conservation through multi-scale stakeholder engagement in citizen science – Jenny Cousins showed a long-running project that has yielded many benefits to all sides involved – including better local skills, academic publications and more.
The Microverse citizen science project: Collaborative microbiology research with UK secondary schools – Lucy Robinson described the work of the UK Natural History Museum, which I covered in the ECSITE post.
Online participatory mapping of ecosystem services and land use preferences in the Polish Tatras – experiences and challenges – Barbara Peek described an online PPGIS that asks people to identify values and positive and negative activities in an area of Poland. The project had its own participation inequality (2% of participants contributing 25% of the information) and fairly few qualitative comments, but those it had were useful.
Population census of house martins in Switzerland: A web-based citizen science project – Stephanie Michler presented an interesting project on a species that people are already interested in, providing many artificial nests, so the level of engagement and activity in the project seems to be good. Within 3 years the project showed good growth.
Dealing with observer bias when mapping species distributions using citizen science data: An example on brown bears in Greece – Anne-Sophie Bonnet-Lebrun showed that a model that takes only roads as a proxy for where people will collect information is not good enough, so there is a need to understand where the tourist areas are.
Using citizen science to map geospatial and temporal trends in human-elephant conflict – Cheli Cresswell showed the progress in her app development to engage people in reporting on human-wildlife conflict.


Optimal Cities, Ideal Cities

wright

Ideal cities, such as Frank Lloyd Wright’s mile-high tower The Illinois (pictured here) and Le Corbusier’s City of Tomorrow, have fallen out of fashion in recent years. But the rise of the smart city and the notion of the instrumented district, together with our current concern for future cities, is beginning to resurrect such theories. The current editorial (click here) in Environment and Planning B deals with these ideas and inquires into the optimal size for such ideal cities. In fact the ideal city and its close economic comparator, the optimal city, are long-standing issues that I discuss in this short note with respect to questions of how we measure such optimality. There are two main approaches. The oldest is the visual approach – ideal cities tend to be geometric purities, as developed at their pinnacle during the Italian Renaissance – while the second is concerned with optimal city size – optimal populations – ranging from Plato’s 5,040 to Ebenezer Howard’s ideal garden city of 30,000 to Le Corbusier’s City of Tomorrow at 3 million. I deal with some of these issues in this editorial, but the question of how all this relates to the smart city is something that I do not discuss there; I will blog about it soon. I also need to note that the journal in which this editorial is published is now owned by Sage, and the journal’s new web site, with its contents including the editorial, is here.

Urban Scaling Laws

clementine

Clementine Cottineau initiated our work on paradoxical interpretations of urban scaling laws using the example of the French city system. The paper is now on the arXiv and you can get it by clicking here or going direct to the arXiv. Scaling laws for cities are controversial, and the furrow that we have ploughed here in CASA argues that the definition of the city system is all-important in measuring the effect of scaling: if the definition changes, so does the scaling. Scaling is an intriguing concept, and the notion that as cities grow they get more than proportionately richer has many policy implications which fly in the face of the small-is-beautiful movement that dominated city planning for most of the last century. Here Clementine Cottineau and colleagues have unpicked the city system in France, showing much the same as we derived for the UK city system – or rather England and Wales – which we reported in our Interface paper: namely that evidence of superlinear scaling for income and other creative industries is volatile and ambiguous. There is much more we might and will say about this topic, but a key issue that we are thinking hard about is what happens to inequality as cities get bigger.

The abstract from Clementine’s paper is as follows “Scaling laws are powerful summaries of the variations of urban attributes with city size. However, the validity of their universal meaning for cities is hampered by the observation that different scaling regimes can be encountered for the same territory, time and attribute, depending on the criteria used to delineate cities. The aim of this paper is to present new insights concerning this variation, coupled with a sensitivity analysis of urban scaling in France, for several socio-economic and infrastructural attributes from data collected exhaustively at the local level. The sensitivity analysis considers different aggregations of local units for which data are given by the Population Census. We produce a large variety of definitions of cities (approximatively 5000) by aggregating local Census units corresponding to the systematic combination of three definitional criteria: density, commuting flows and population cutoffs. We then measure the magnitude of scaling estimations and their sensitivity to city definitions for several urban indicators, showing for example that simple population cutoffs impact dramatically on the results obtained for a given system and attribute. Variations are interpreted with respect to the meaning of the attributes (socio-economic descriptors as well as infrastructure) and the urban definitions used (understood as the combination of the three criteria). Because of the Modifiable Areal Unit Problem (MAUP) and of the heterogeneous morphologies and social landscapes in the cities’ internal space, scaling estimations are subject to large variations, distorting many of the conclusions on which generative models are based. We conclude that examining scaling variations might be an opportunity to understand better the inner composition of cities with regard to their size, i.e. to link the scales of the city-system with the system of cities.”
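To make the estimation step concrete, here is a minimal sketch of how a scaling exponent β in Y = Y₀·N^β is typically fitted by ordinary least squares in log-log space, and how a simple population cutoff shifts the estimate. This is not the paper’s code: the synthetic city system and the 1.15 exponent are invented purely for illustration.

```python
import numpy as np

def scaling_exponent(population, attribute):
    """Fit Y = Y0 * N^beta by OLS in log-log space; return beta."""
    beta, log_y0 = np.polyfit(np.log(population), np.log(attribute), 1)
    return beta

# Synthetic city system: an attribute built to scale superlinearly (beta = 1.15)
rng = np.random.default_rng(42)
pop = rng.lognormal(mean=10, sigma=1.5, size=500)
attr = pop ** 1.15 * rng.lognormal(0, 0.1, size=500)

# Full system vs. a population cutoff: the estimate moves with the definition
beta_all = scaling_exponent(pop, attr)
mask = pop > np.median(pop)
beta_cut = scaling_exponent(pop[mask], attr[mask])
```

With clean synthetic data the shift is small; the point of the paper is that with real, heterogeneous French data, changing the density, commuting and cutoff criteria moves β enough to flip conclusions between sublinear and superlinear.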

Here again is the link to the arXiv

 

UKDS Census Applications Conference

census1

I was in Manchester a couple of weeks ago for a UKDS conference on applications of the Census 2011 datasets that have been made available through the ONS, NOMIS, UKDS and other organisations/projects. The conference was a celebration of the outputs and projects that have happened thus far, now that the Census itself is four years old and most of the main data releases have been made.

It was a good opportunity to present a talk on DataShine, which I made a little more technical than previously, focusing on the cartographical and technological decisions behind the design of the suite of websites.

I enjoyed an interesting talk by Dr Chris Gale, outlining graphically the processes behind creating the 2011 OAC geodemographic classification. Chris’s code, which was open sourced, was recently used by the ONS to create a local-authority-level classification. There was also some discussion towards the end of the two-day meeting on the 2021 Census, in particular whether it will happen (it almost certainly will) and what it will be like (similar to 2011 but focused on online responses to cut costs).


After the conference closed I had time to look around MOSI (the Museum of Science and Industry), which is mainly built around an old railyard, the terminus of the world’s oldest passenger railway and containing the world’s oldest station (opened in 1830, closed to passengers in 1844). But I was most impressed by the collection of airplanes in the adjoining hangar (once a lovely old market building), which included a Kamikaze aircraft. I also had a quick look around the Whitworth Gallery extension, which has been nominated for this year’s Stirling Prize.

census3

Visit the new oobrien.com Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Calibrating Cellular Automata (CA)

Liu-Paper

Urban CA models use sets of rules that are applied to each cell in the geographical array to change the state of the cell, usually according to attributes that exist in the neighbourhood of the cell in question. As there are usually many thousands of cells, and therefore large quantities of data mapping one time slice of the system into the next, in CA models there is the opportunity to search for patterns in these changes of state, thus deriving transition rules from the data. Ten years or more ago, when Claudia Maria de Almeida from INPE (National Institute for Space Research, Brazil) visited us in CASA, I worked with her on her CA models of development change in Brazilian cities, and she developed a number of multivariate methods for extracting the rules from the dynamics of cellular change. The 2003 paper can be downloaded here. Recently I have worked with Yan Liu from Brisbane (U Queensland) and Yongjiu Feng from Shanghai (College of Marine Sciences) on developing a machine learning approach to extracting nonlinear transition rules, based on least squares support vector machines, which essentially define the patterns needed to get appropriate rules. It is all quite tricky stuff in detail but rather generic in terms of what these methods are designed to do. We published a paper recently on this in the journal Stochastic Environmental Research and Risk Assessment (Volume 29, 2015, online) and if you click here you can see a copy of the paper and its source. Enjoy.
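To illustrate what a cell-level transition rule looks like, here is a toy, hand-coded urban CA step. The function name, grid size and neighbour-count threshold are all inventions for illustration – the point of the paper is precisely that such rules can be learned from observed change data with LS-SVMs rather than written by hand like this.

```python
import numpy as np

def urban_ca_step(grid, threshold=3):
    """One step of a toy urban CA: a non-urban cell (0) becomes urban (1)
    if at least `threshold` of its 8 Moore neighbours are urban."""
    padded = np.pad(grid, 1)
    # Sum of each cell's 3x3 neighbourhood, minus the cell itself
    neigh = sum(padded[i:i + grid.shape[0], j:j + grid.shape[1]]
                for i in range(3) for j in range(3)) - grid
    return np.where((grid == 0) & (neigh >= threshold), 1, grid)

# Grow a small urban core for a few time slices
state = np.zeros((20, 20), dtype=int)
state[8:11, 8:11] = 1
for _ in range(5):
    state = urban_ca_step(state)
```

In the learned-rule setting, the pairs (neighbourhood attributes at time t, observed state at time t+1) extracted from real land-use maps become the training data, and the SVM plays the role of the hard-coded threshold test above.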

China: Fuzhou

fuzhou7

I spent a week in Fuzhou earlier in July, in China’s Fujian province, presenting at a summer school and attending a conference at Fuzhou University. I’ve already blogged about the conference itself (read it here) but during the week I got plenty of time outside the conference to get a feel for Fuzhou and this small part of China. Here are some notes:

fuzhou5

Bikesharing
There is a bikeshare system in Fuzhou, but it is small (by Chinese standards). I saw a few bikeshare docking stations during my trip, in particular one outside the university, which was complete with a (closed) booth for an attendant (I think this is where you get a smartcard to operate it). Each station has 10-20 docks, generally nearly full of the bright orange and green bikes, docked under a bus-stop-style shelter that also contains an alarm light, CCTV and loudspeaker, and red scrolling LED information screen. Adjacent there were typically 10-20 further bikes chained together, presumably for manual restocking by the attendant when they are there. The one thing I did not see, at any point during the trip, was anyone actually using the bikeshare bikes. The modal share of cycling is low anyway in Fuzhou (the roads are intimidating, but this doesn’t stop the swarms of electric bike users) but I wasn’t expecting to see a completely unused bikeshare system in a country so famous for the transport mode.

fuzhou4

Transport in General
Fuzhou is a city of nearly five million people – half the size of London. And yet it has no metro, tram or commuter rail (apart from a couple of stations right on the outskirts). So everyone travels by car, taxi (very cheap – £1 for most journeys), bus (10p per journey, air-conditioned and frequent), or electric bike – probably 50% car, 15% bus, 30% electric bike, 5% taxi. Walking is not so popular, as the roads are generally very wide and difficult to cross (you don’t generally get much space given to you at zebra crossings!) and likely because of the hot climate at this time of the year. The one mode that I saw extremely little of is pedal cycling. I had heard that cycling has quickly become an “uncool” thing to do in China; it is interesting to contrast this with the rapidly rising cycling use in London – albeit from a low base. London’s cycling mode share was also once much higher and also had a sharp fall – maybe London is just ahead of the curve.

fuzhou3

Climate and Pollution
Fuzhou is a southern Chinese city. It’s around an hour’s drive in from the coast, where its airport is. It’s north of the many cities near Hong Kong – about 90 minutes on a plane from the latter – but south of Shanghai, and a long way south of Beijing. The climate is therefore quite hot and muggy at this time of year. As you might expect from a city of five million people where most people drive, a haze of pollution was often visible while I was there. However, the haze is not too bad. Fuzhou is helped in this by being surrounded on most sides by thickly forested mountains, which often rise up steeply immediately beyond the city limits. One of these ranges forms the Fuzhou National Forest Park, which contains a wide variety of trees, including a 1000-year-old tree with its elderly branches supported by concrete pillars! The masses of trees on all sides no doubt help soak up some of the pollutants. Many of the large roads have lines of thickly foliaged trees running along them, and the pedestrian crossing bridges and highway flyovers also have lines of shrubs and bushes all the way along them, which doubtless also help absorb pollutants and keep the haze under control. The street foliage also has the side effect of making many views of the city look quite pretty, with lines of green and purple plants softening the concrete structures and making the city seem to blend into the landscape.

fuzhou1

Urban Structure
Fuzhou is a city largely of apartment blocks. Strikingly, the centre of the city has virtually no construction going on – it is as dense as it needs to be, Fuzhou’s population does not need to increase, and the congestion need not get any worse. A view from the central hotel reveals almost no cranes anywhere on the horizon, apart from some small ones for a metro construction project. This is starkly different from the edges of the city, at the few gaps between the mountains, particularly along the road leading to the airport and the coast. There is a brand-new high-speed railway station at this edge of the city, and it is also the direction towards the shipbuilding and electronics industry factories that are a few miles distant. The area around the station is relatively free of apartment buildings, but huge numbers are currently being built, many 30-40 storeys high and often built very close to each other, in clusters with distinct designs. The new station and the good road leading outwards are presumably the spur. This is infrastructure building, and developers responding to it, on a grand scale.

fuzhou6

Consumer Culture
One thing I noticed was that most of the Chinese attendees of the conference I was at had iPhone 6 phones. I’m not sure if this is representative of the Fuzhou population at large, but I was surprised to see no Huawei or Xiaomi phones (both home-grown Chinese brands). I have a Huawei myself – it is excellently built and I am very happy with it. Apple has done hugely well out of convincing people to pay thousands of extra yuan for a phone with the Apple branding. Talking about luxury brands in general, Fuzhou has a cluster of these (Christian Dior etc.) in a small mall in the centre, and I also spotted a Starbucks and a McDonald’s lurking nearby. But, Apple aside, in general western brands have little impact. And despite the popularity of the iPhone, the (official) Apple Stores have not made it to Fuzhou yet.

More generally, the food in China takes some getting used to, both the variety of produce and also the local variants. Lychee trees are everywhere (the region is where they originally came from) and there were plenty of other unusual fruits. The look of lychees takes some getting used to, but the taste is very pleasant. Fish features in a lot of dishes, as do various meats – the buffet and “lazy Susan” formats though thankfully mean the more mysterious items can be ignored! Our host also took us to an upscale restaurant where we had a lot of very spicy food (rare for the region) and also some weak but pleasant Chinese beers.

fuzhou2


Urban Transfer Entropy across Scales

Murcio

Roberto Murcio led our work on applying ideas of information theory across scales, so that mutual information can be transmitted one way, rather than symmetrically. The paper has just appeared in PLOS ONE, and you can download the PDF from here. The abstract follows:

“The morphology of urban agglomeration is studied here in the context of information exchange between different spatio-temporal scales. Urban migration to and from cities is characterised as non-random and following non-random pathways. Cities are multidimensional non-linear phenomena, so understanding the relationships and connectivity between scales is important in determining how the interplay of local/regional urban policies may affect the distribution of urban settlements. In order to quantify these relationships, we follow an information theoretic approach using the concept of Transfer Entropy. Our analysis is based on a stochastic urban fractal model, which mimics urban growing settlements and migration waves. The results indicate how different policies could affect urban morphology in terms of the information generated across geographical scales.”
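As a rough illustration of the core quantity, here is a plug-in estimate of transfer entropy for a pair of binary series, with history length 1. This is a toy estimator of my own construction – the paper applies the concept across the spatial scales of a stochastic urban fractal model, not to series like these.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate (in bits) of T(X -> Y) = I(y_{t+1}; x_t | y_t)
    for two binary sequences, using history length 1."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])                     # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        # p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ], all via counts
        te += (c / n) * np.log2(
            (c * singles_y[y0]) / (pairs_yy[(y1, y0)] * pairs_yx[(y0, x0)])
        )
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1)          # y copies x with a one-step lag: strong X -> Y flow
te_xy = transfer_entropy(x, y)   # close to 1 bit
te_yx = transfer_entropy(y, x)   # close to 0 bits
```

The asymmetry (te_xy ≫ te_yx) is exactly what distinguishes transfer entropy from symmetric mutual information, and is what lets the paper talk about information flowing one way between scales.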

OpenStreetMap: London Building Coverage

osm_londondetail

OpenStreetMap is still surprisingly incomplete when it comes to showing buildings for the London area. This is a real contrast to other places (e.g. Birmingham, New York City, Paris) when it comes to completeness of buildings, and it is despite some good datasets (e.g. Ordnance Survey OpenMap Local) including building outlines. It’s one reason why I used Ordnance Survey rather than OpenStreetMap data for my North/South print.

The map below (click to view a larger version with readable labels and crisper detail – you may need to click it twice if your browser resizes it), and the extract above, show OpenStreetMap buildings in white, overlaid on OS OpenMap Local buildings, from the recent (March 2015) release, in red. The Greater London boundary is in blue. I’ve included the multipolygon buildings (stored as relations in the OSM database), extracting them direct from OpenStreetMap using Overpass Turbo. The rest of the OSM buildings come via the QGIS OpenStreetMap plugin. The labels also come from OS OpenMap Local, which, slightly concerningly for our national mapping agency, misspells Hampstead.
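For anyone wanting to repeat the multipolygon extraction step, something like the following Overpass query does the job. This is a sketch only: I used Overpass Turbo interactively rather than a script, and the exact area filter and endpoint here are assumptions.

```python
# A hedged sketch of an Overpass query for building relations in Greater
# London; would be POSTed to an endpoint such as
# https://overpass-api.de/api/interpreter (not called here).
OVERPASS_QUERY = """
[out:json][timeout:180];
area["name"="Greater London"]["boundary"="administrative"]->.london;
(
  relation["building"](area.london);
);
out geom;
"""

def overpass_request_body(query: str) -> dict:
    """Payload dict for a POST request to an Overpass endpoint."""
    return {"data": query}
```

The result can then be converted to GeoJSON (Overpass Turbo has an export button for this) and loaded into QGIS alongside the simple-way buildings from the plugin.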

The spotty nature of the OSM coverage reveals individual contributions. For example, Swanley, in the far south east of the map, is comprehensively mapped, thanks presumably to an enthusiastic local. West Clapham is also well mapped (it looks like a small-area bulk import from OpenMap here) but east Clapham is looking sparse. Sometimes OpenStreetMap is better – often, the detail of the buildings that are mapped exceeds OpenMap’s. There are also a few cases where OSM correctly doesn’t map buildings which have recently been knocked down but whose destruction hasn’t made it through to OpenMap yet, which typically can have a lag of a year. For example, the Heygate Estate in Elephant & Castle is now gone.

The relative lack of completeness of building data in OpenStreetMap for London is, in fact, likely due to London being where the project began in 2004. London has always had an active community, which drew many of the capital’s roads and quite a few key buildings long before most other cities were nearly as complete. As a result, when the Bing aerial imagery and official open datasets of building outlines became available more recently, mainly around 2010, there was a reluctance to use these newer tools to go over areas that had already been mapped. Bulk importing such data is a no-no if it means disturbing someone’s prior manual work, and updating and correcting an already mapped area (where the roads, at least, are drawn) is a lot less glamorous than adding features to a blank canvas. As a result, London is only slowly gaining its buildings on OSM while other cities have jumped ahead. Its size doesn’t help either – it is a low-density city with huge expanses of low, not particularly glamorous buildings.

A couple of OpenStreetMap indoor tracing parties might be all that’s needed to fix this and get London into shape.

osm_london_2mb

Click for a larger version. Data Copyright OpenStreetMap contributors (ODbL) and Crown Copyright and Database Right Ordnance Survey (OGL).


Twitter’s swearing mapped: Which UK country is the most foul-mouthed on social … – The Independent


Researchers from the Centre for Advanced Spatial Analysis at University College London last year revealed a particular peak around transfer-deadline day following Arsenal’s signing of Danny Welbeck, a Manchester United forward, in September.

London Landmarks Jigsaw Puzzle

londonlandmarks_image

London Landmarks is a 1000-piece jigsaw puzzle of a stylised map/view of central London, drawn by Maria Rabinky and produced by Gibsons, and is possibly the most fun map to have arrived at the desk of Mapping London Towers for a very long time. Not content with reviewing the box and the individual pieces, we of course had to actually complete the jigsaw puzzle itself, which was achieved by 2-3 people working fast over three two-hour sessions earlier this week (so that’s around 12 hours of effort!). We were hoping that our geographical knowledge of central London streets and landmarks would be enough to allow a swift completion of the puzzle – many of the streets and features are named on the map, and there are very few similarly coloured pieces – even the Thames helpfully shifts from dark blue to light blue as it heads eastwards – so we hoped this would be an easy puzzle. How wrong we were: the “bird’s eye” view of London, looking roughly northeastwards from somewhere above Battersea Park, proved much trickier than expected.

The topology of the puzzle is pretty good – obviously many streets have been omitted for clarity, but buildings appear in the right location when you look at the finished puzzle, even if they don’t appear to when you are putting it together. The toughest building was the Houses of Parliament, as the projection used makes it appear huge, and our final piece was the labyrinthine Royal Courts of Justice. Favourite building representations include Waterloo Station (which is represented by its famous four-sided meeting-place clock) and London Zoo (which includes a veritable menagerie of animals in a single spot – note there is sadly no panda there in real life).

When building the puzzle, we did the traditional filling out of the edges, then worked up the River Thames, building the bridges across it by remembering their sequence. Different street lighting designs on the bridges (a lovely bit of detail) helped with these. We then worked on the road network and filled in buildings as we spotted them. Some tube stations are included, identifiable by special blue/red text. Watch out when putting together the Millennium Wheel though – there are two, the second one forming the compass points indicator:

londonlandmarks_puzzle

This is a really lovely jigsaw puzzle and we would love to see more maps like this – it’s a lovely looking map, and doesn’t just focus on Zone 1, but includes the Royal Observatory Greenwich, Hampstead Heath and even Windsor Castle (in a cloud). You can buy it from Amazon for a bargain £13; the artwork itself is also available as an art print.

Thanks to Gibsons Games for sending a review copy.

londonlandmarks_box

Lectureship In Quantitative Health Geography, Uni of Canterbury, Christchurch, New Zealand

 
There is a three-year fixed-term Lectureship in Quantitative Health Geography available at the University of Canterbury in Christchurch, New Zealand.
 
Ideal for a newly completed PhD student or postdoc.
 
More details can be found here.
 
Thanks
 
Simon
 
Professor Simon Kingham
Professor of Geography and Director of the GeoHealth Laboratory
University of Canterbury – Te Whare Wananga O Waitaha

Kings Cross dashboard – can we bring to life the Knowledge Quarter’s … – Kings Cross Environment


A couple of years ago the Centre for Advanced Spatial Analysis (CASA) at the UCL Bartlett School, based on Tottenham Court Road, produced this great City Dashboard that puts flesh on the bones of all the 'smart city' and 'big data' guff. Can we do one ...

#15MinuteMap: Tropical Cyclone Tracks 1842-2014

More and more I stumble across really cool datasets online that are crying out to be mapped, but I never seem to have the time to do anything fun with them. I had a spare 15 minutes yesterday and challenged myself to map something.

I wanted to map tropical cyclones – you can see the result below. I think my map is a good start, but it was never going to be perfect in 15 minutes. For example, it would benefit from a few more tweaks and a little more time spent adding labels, picking out interesting storm tracks and so on.

That said, I surprised myself that I was able to do anything in 15 minutes – it felt a bit like the cartographic equivalent of arriving home hungry and throwing a quick meal together from what’s left in the fridge.

It would be great to see what others can do – use the hashtag #15MinuteMap to share on Twitter!

hurricanes

How I did my map:

Data formatting is time-consuming, so the trick is to find some nice clean data – in my case a shapefile of historic hurricane tracks from here. This could be loaded easily into a GIS (ArcGIS in this case). I then downloaded NASA’s “Black Marble” imagery (as a GeoTIFF) and loaded that into the GIS too. I then reprojected both to the Winkel Tripel projection and made the storm track lines pink and a little transparent. Hey presto!
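The reprojection step is a single menu option in ArcGIS, but for the curious the Winkel Tripel projection itself is simple enough to sketch in a few lines. This is a hand-rolled illustration of the formula (the arithmetic mean of the equirectangular and Aitoff projections); a real workflow would let the GIS or a library like pyproj do this.

```python
import numpy as np

def winkel_tripel(lon_deg, lat_deg):
    """Project lon/lat (degrees) to Winkel Tripel x/y on the unit sphere:
    the mean of the equirectangular projection (standard parallel
    arccos(2/pi)) and the Aitoff projection."""
    lam = np.radians(lon_deg)
    phi = np.radians(lat_deg)
    phi1 = np.arccos(2 / np.pi)                       # standard parallel
    alpha = np.arccos(np.cos(phi) * np.cos(lam / 2))
    safe = np.where(alpha == 0, 1.0, alpha)           # avoid 0/0 at the origin
    sinc = np.where(alpha == 0, 1.0, np.sin(safe) / safe)
    x = 0.5 * (lam * np.cos(phi1) + 2 * np.cos(phi) * np.sin(lam / 2) / sinc)
    y = 0.5 * (phi + np.sin(phi) / sinc)
    return x, y
```

Applying this to every vertex of every storm track (and to the raster corners) is all the “reproject” button is really doing, give or take datum handling.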

The latest outputs from researchers, alumni and friends at UCL CASA