Seeing Red: 15 Ways the Boris Bikes of London Could be Better


I’ll be gradually tweaking this article to add and amend sources, add clarifications, and develop some of my arguments.

A big announcement for the “Boris Bikes” today, aka Barclays Cycle Hire. London’s bikeshare system, the second largest in the western world after Paris’s Velib and nearly five years old, will be rebranded as Santander Cycles, and the bikes will have a new, bright red branding – Santander’s corporate colour, and conveniently also London’s most famous colour. As well as the Santander logo, it looks like the “Santa Bikes” will have outlines of London’s icons – the above publicity photo shows the Tower of London and the Orbit, while another includes the Shard and Tower Bridge. A nice touch to remind people that these are London’s bikes.

It’s great that London’s system can attract “big” sponsors – £7m a year with the new deal – but another document that I spotted today reveals that, despite the sponsorship, London’s system runs at a large operating loss – this is all the more puzzling because other systems can (almost) cover their operating costs – including Washington DC’s which is both similar to London’s in some ways (a good core density, same bike/dock equipment) and different (coverage into the suburbs, rider incentives); and Paris’s, which has a very different funding model, and its own set of advantages (coverage throughout the city) and disadvantages (little incentive to expand/intensify). What are they doing right that London is not?

In financial year 2013/4, London’s bikeshare had operating costs of £24.3m. Over this time period, the maximum number of bikes that were available to hire was 9471, on 26 March 2014. This data comes from TfL’s own Open Data Portal. This represents a cost of over £2500 per bike, for that year alone. Looked at another way, each bike is typically used three times a day, or ~1000 times a year, so that’s about £2.50 a journey, of which the sponsor pays about £1 and the user about £0.50. As operating costs, these don’t include the costs of buying the bikes or building the docking stations. Much of the cost is likely absorbed in repairing the bikes – London’s system is wildly successful, so each bike sees a lot of use every day, and the wear and tear is likely to be considerable. This is not helped by the manufacturer of the bikes going bust a couple of years ago – so there are no “new” ones out there to replace the older ones; New York City, which uses the same bikes, is suffering similar problems. The other big cost will be the rebalancing/redistribution activity, operating a fleet of vehicles that move bikes around.
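
To make the arithmetic explicit, here is a quick back-of-envelope check – a minimal sketch in Python, using only the figures quoted above (the journey rate is my rounding of “three times a day”):

    # Back-of-envelope check of the per-bike and per-journey costs,
    # using only the figures quoted in the text above.
    operating_costs = 24_300_000   # £, financial year 2013/4
    peak_bikes = 9471              # maximum bikes available, 26 March 2014
    journeys_per_bike_year = 1000  # ~3 journeys per bike per day

    cost_per_bike = operating_costs / peak_bikes               # ~ £2,566
    cost_per_journey = cost_per_bike / journeys_per_bike_year  # ~ £2.57

    print(f"per bike: £{cost_per_bike:,.0f}, per journey: £{cost_per_journey:.2f}")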

I have no great issues with the costs of the bikes – they are a public service, and the costs are likely a fraction of those of maintaining the other public assets of roads, buses and railway lines – but it is frustrating to see that, with the current setup of London’s system, the main beneficiaries are tourists (the Hyde Park docking stations are consistently the most popular), commuters (the docking stations around Waterloo are always popular on weekdays), and those Londoners lucky enough to live in Zone 1 and certain targeted parts of Zone 2 (south-west and east). Wouldn’t it be great if all Londoners benefited from the system?

Here are 15 ways that London’s bikeshare could be made better for Londoners (and indeed for all) – and maybe cheaper to operate too:

  1. Scrap almost all rebalancing activity. It’s very expensive (trucks, drivers, petrol), and I’m not convinced it is actually helping the system – in fact it might be making it worse. Most cycling flows in London are uni-directional – into the centre in the morning, back out in the evening – or random (tourist activity). Both of these kinds of flows will, across a day, balance out on their own. Rebalancing disrupts these flows, removing the bikes from where they are needed later in the day (or the following morning) to address a short-term perceived imbalance that might not be real on the ground. Plus, when the bikes are sitting in vans, inevitably clogged in traffic, they are of no use to anyone. Some “lightweight” rebalancing, using cycle couriers and trailers, could help with specific small-scale “pinch points”, or respond to special events such as heavy rainfall or a sporting/music event. New York uses cyclists/trailers to help with its rebalancing.
  2. Have a “guaranteed valet” service instead, like in New York. This operates for a certain number of key docking stations at certain times of the day, and guarantees that someone can start or finish their journey there. London already has this, to a certain extent, at some stations near Waterloo, but it would be good to highlight this more and have it at other key destinations. This “static” supply/demand management would be a much better use of the time of redistribution drivers.
  3. Have “rider rewards”, like in Washington DC. Incentivise users to redistribute the bikes themselves, by allowing a free subsequent day’s credit (or a free 60-minute journey extension) for journeys that start at a full docking station and end at an empty one (see the sketch after this list). This would need to be designed with care, to ensure that “over-rebalancing”, or malicious marking of bikes as broken, was minimised. Everyone values the system in different ways, so some people would benefit from a more naturally balanced system and others from lower costs of using it.
  4. Have more flexible user rules. Paris’s Velib has an enhanced membership, “Passion”, that allows free single journeys of up to 45 minutes rather than 30 minutes. In London, you have to wait 5 minutes between hires, but most systems (Paris, Boston, New York) don’t have this “timeout” period. To stop people “guarding” bikes for additional use, an alternative could be to make it a 10-minute timeout but tie it to the specific docking station (or indeed a specific bike) rather than applying it system-wide.
  5. Adjust performance metrics. TfL (and the sponsors) measure performance of the system in certain ways, such as the time a docking station remains empty at certain times of the day. I’m not sure that these are helpful – surely the principal metric of value (along with customer service resolution) is the number of journeys per time period and/or the number of distinct users per time period. If these numbers go down over a long period, something’s wrong. The performance metrics, as they stand, may be encouraging the unnecessary and possibly harmful rebalancing activity, increasing costs for no real gain.
  6. Remove the density rule (one docking station every ~300 metres) except in Zone 1. Having high density in the centre and low density in the suburbs works well for many systems – e.g. Bordeaux and Washington DC – because it allows the system to be accessible to a much larger population, without flooding huge areas with expensive stations/bikes. As an extreme example, one docking station in a US city is several miles from its nearest neighbour.
  7. Build a docking station outside EVERY tube station, train station and bus station inside the North/South Circular (roughly, Zones 1-3). Yes, no matter how hilly* the area is, or how little existing cycling culture it has – stop assuming how people use bikes or who uses them! Bikeshare is a “last mile” transport option and it should be thought of as part of someone’s journey across London, and as a life benefit, not as a tourist attraction. The system should also look to expand into these areas iteratively, rather than having a “big bang” expansion by phases. It’s crazy that most of Hackney and Islington doesn’t have the bikeshare, despite having a very high cycling population. Wouldn’t it be great if people without their own bikes could be part of the “cycling cafe culture” that is strong in these places? For other places that have never had a cycling culture, the addition of a docking station in a prominent space might encourage some there to try cycling for the first time. (*This version of the bikes could be useful.)
  8. Annual membership (currently £90) should be split into peak and off-peak (no journey starts from 6am-10am) memberships, the former increased to £120 and the latter decreased back to £45. Unlike the buses and trains, which are always full at peak times and pretty busy off-peak too, there is a big peak/off-peak split in demand for the bikes. Commuters get a really good deal, as it stands. Sure, it costs more than buying a very cheap bike, but actually you aren’t buying the use of a bike – you are buying the free servicing of the bike for a year, and the free redistribution of “your” bike to another part of central London if you are going out in the evening. Commuters who use the bikes day in, day out should pay more. Utility users, who use the bike to get to the shops, are the sorts that should be targeted more, with off-peak membership.
  9. A better online map of availability. The official map still doesn’t have at-a-glance availability. “Rainbow-board” type indications of availability in certain key areas of London would also be very useful. Weekday use, in particular, follows distinct and regular patterns in places.
  10. Better indication of where the nearest bikes/docks are, if you are at a full/empty docking station, i.e. a map with route indication to several docking stations nearby with availability.
  11. Better static signage of your nearest docking station. I see very few street signs pointing to the local docking station, even though they are hard-built into the ground and so generally are pretty permanent features.
  12. Move more services online, and have a smaller help centre. A better view of journeys made (a personal map of journeys would be nice) and the ability to query overpayments/charges online.
  13. Encourage innovative use of the bikeshare data, via online competitions – e.g. Boston’s Hubway data visualisation competitions have had lots of great entries. These get further groups interested in the system and ways to improve it, and can produce great visuals to allow the operator/owner to demonstrate the reach and power of the system.
  14. Allow use of the system with contactless payment cards, and so integration with travelcards, daily TfL transport price caps etc. The system can’t use Oyster cards because of the need to be able to take a “block payment” charge for non-return of the bikes, but with contactless payment this could be achieved. The cost of upgrading the docking points to take cards would be high, but such docking points are available and in use in many of the newer US systems that use the same technology.
  15. Require that all new housing developments above a certain size, in say Zones 1-3 of London, include a docking station with at least one docking point per 20 residents and one new bike per 40 residents, either on their site or within 300m of their development boundary. (Update: Euan Mills mentions this is already the case within the current area. To clarify, I would like to see this beyond the current area, allowing an organic growth outwards and linking with the sparser tube station sites of point 7.)
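
To illustrate the rider rewards idea from point 3, here is a minimal sketch of what the eligibility rule might look like – the fullness thresholds and the reward itself are invented for illustration, not taken from Capital Bikeshare’s actual scheme:

    # Hypothetical rider-rewards rule (point 3): reward journeys that move a
    # bike from a (nearly) full docking station to a (nearly) empty one.
    # The 90%/10% thresholds and the reward are illustrative only.
    def rebalancing_reward(start_free_docks, start_capacity,
                           end_free_docks, end_capacity):
        start_fullness = 1 - start_free_docks / start_capacity
        end_fullness = 1 - end_free_docks / end_capacity
        if start_fullness >= 0.9 and end_fullness <= 0.1:
            return "free 60-minute journey extension"
        return None

    # A ride from a station with 1 free dock (of 24) to one with 22 free docks:
    print(rebalancing_reward(1, 24, 22, 24))  # -> free 60-minute journey extension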

London has got much right – it “went big” which is expensive but the only way to have a genuinely successful system that sees tens of thousands of journeys on most days. It also used a high-quality, rugged system that can (now) cope with the usage – again, an expensive option but absolutely necessary for it to work in the long term. It has also made much data available on the system, allowing for interesting research and increasing transparency. But it could be so much better still.

Washington DC’s system – like London’s, but profitable.


Exotic Adornments and Old Maps: The Art of Kristjana S Williams


Artist Kristjana S Williams, originally from Iceland but now based in west London, specialises in collages of vividly coloured, exotic creatures. A number of her works have included adorning such animals around the edges of old maps – often artworks in their own right – creating a distinctive “frame” around the map, and perhaps harking back to the days when maps were the preserve of ocean-going explorers, discovering weird and wonderful things on round-the-world voyages.

As well as global maps, Kristjana has worked with a number of London-specific old maps, including Lundanar Kort (excerpt on the right) based on Mogg’s 1806 map we featured only very recently, Round London (above), based on a 1791 map by Paterson, Markets Royale, set upon Cary’s 1824 plan of London, and a Transport for London commission which frames a pre-Beck London Underground map (shown below). Lundanar Kort has several editions of its own, with the same basemap but a differing assortment of fantastic adornments surrounding it. The “Gull Sky” edition includes profile sketches of a number of London buildings, old and new.

There’s something very compelling about the mashup of old maps and colourful animals that is hard to pin down!


Kristjana is represented by Outline Artists and her work is available at Outline Editions. I first came across her work at the “Art Cartography” solo exhibition late last year at The Map House in Knightsbridge, a rather wonderful map/art dealership which deserves a post of its own here sometime.

Downtime

Various websites I’ve built, and mentioned here on oobrien.com from time to time, are down until Monday lunchtime, due to a major power upgrade for the building that the server is in.

This affects the following websites:

  • DataShine
  • CDRC
  • Bike Share Map
  • Tube Tongues
  • OpenOrienteeringMap (extremely degraded)
  • Some other smaller visualisations

However the following are hosted on different servers and so will remain up:


Gravity Models Circa 1846


Once in a while along comes a wonderful piece of historical research that again illustrates that, in most fields, there is little new under the sun. Andrew Odlyzko’s recent paper entitled “The forgotten discovery of gravity models and the inefficiency of early railway networks” is just such a paper. In it, he shows that it was not Carey who was the first to argue that human interactions vary directly with mass and inversely with the distance between them – by analogy with Newton’s law of gravitation – but a Belgian railway engineer, Henri-Guillaume Desart, who in 1846 (and perhaps even before) argued that rail traffic on any new line would follow such a law. He based this on the ‘big data’ of his time, namely railway timetables, and he thus joined the debate, which raged for over a century, as to whether new rail lines built point to point in straight lines, with no stations between, would generate more traffic than would be attracted locally if stations were clustered around big cities. This is a debate that has some resonance even today, with the discussion in Britain about new high speed lines such as HS2 and which stations they might connect to.

Odlyzko’s paper also notes that in 1838 a British physicist, John Herapath, suggested that this local law of spatial interaction for rail traffic in fact followed a negative exponential law, with traffic proportional to exp(-bd), where d is the distance from some source to a station. Arguably this is an earlier discovery, although it was Desart who fitted his model to data, coming up with a remarkable exponent of 2.25 on the inverse power of distance in the gravity model.
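
In modern notation (my reconstruction from the description above, with P as the population ‘mass’ term), the two competing laws would read:

    % Desart (1846): gravity model, traffic falling off as a power of distance
    T_{ij} \propto \frac{P_i \, P_j}{d_{ij}^{\,\beta}}, \qquad \beta \approx 2.25

    % Herapath (1838): traffic decaying exponentially with distance d
    T(d) \propto e^{-bd}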

Elsewhere I have recounted the tale of how the Lyons electronic computer team, much in advance of their time, cracked the shortest route problem in the early 1950s. You can see the video of this here, where they took on the problem of pricing freight on the British Railways network by breaking their big data into chunks of network which they needed to move in and out of store in solving the problem. In fact, somewhere in the recesses of my mind, there is also a nagging thought that someone even earlier, just after Newton’s time, first applied his gravity model to human interactions. I seem to remember this was at the time of the French Physiocrats, when quite clearly the input-output model anticipated Leontief by more than 150 years, when Quesnay devised his Tableau Économique. Old theories of social physics seem to go back to the beginnings of natural physics, and although we live in a time when the modern and the contemporary swamp the past, we are gradually discovering that our human wisdom in learning to apply science in human affairs goes back to the deep past.


Citizen Science 2015 – reflections

Citizen Science Association board meeting (photo by Jennifer Shirk)

The week that passed was full of citizen science: on Tuesday and Friday the Citizen Science Association held its first board meeting; the Citizen Science 2015 conference ran on Wednesday and Thursday; and to finish it all, on Friday afternoon a short meeting of a new project, Enhancing Informal Learning Through Citizen Science, explored the directions that it will take.

After such an intensive week, it takes some time to digest and think through the lessons from the many conversations, presentations and insights that I’ve been exposed to. Here are my main ‘take away’ lessons. The conference itself ended with members of the board of the Citizen Science Association (CSA) describing their ‘take aways’ in short Twitter messages, which were then followed by other people joining in.

In more detail, my main observations are about the citizen science research and practice community, and the commitment to inclusive and ethical practice that came up in different sessions and conversations.

It might be my own enthusiasm for the subject but, as in previous meetings and conferences about citizen science, you could feel the buzz during the event, with participants sharing their knowledge with others and building new connections. While there are already familiar faces and the joy of meeting colleagues in the field of citizen science that you already know, there are also many new people who are either exploring the field of citizen science or are active in it but new to the community of practice around it. As far as I can tell, the conference was welcoming to new participants, and the poster session on the first day and the breakfast on the second day provided opportunities to create new connections. It might be because people in this field are used to talking with strangers (e.g. participants in citizen science activities), but this is an aspect that the CSA needs to keep in mind to ensure that it stays an open community and not a closed one.

Secondly, citizen science is a young, emerging field. Many of the practitioners and researchers are in the early stages of their careers, and within research institutions the funding for the researchers is through research grants (known in academia as ‘soft money’) as opposed to budgeted and centrally funded positions. Many practitioners are working within tight and limited government budgets. This has implications: ensuring that funding limitations don’t stop people from publishing in the new journal ‘Citizen Science: Theory and Practice’, and that if they can’t attend the conference they can find information about it in blogs, see a repository of posters that were displayed at the conference, or read curated social media outputs about it. More actively, as the CSA did for this meeting, funding should be provided to allow early career researchers to attend.

Third, there is clearly a global community of researchers and practitioners committed to citizen science. Yet the support and networks that they need must be local. The point above about budget limitations reinforces the need for local networks, and for meeting opportunities that are not too expensive to attend and participate in. For me, the value of face-to-face meetings and discussions is unquestionable (and I would hope that future conferences will run over 3 days to provide more time), and balancing travel, accommodation and budget constraints with the creation of a community of practice is something to grapple with over the coming years. Having a global community and a local one at the same time is one of the challenges for the Citizen Science Association.

Katherine M – Ethics Panel

Finally, the conference hosted plenty of conversations and discussions about the ethical and inclusive aspects of citizen science (hence my take away above) – from discussions about what sort of citizenship is embedded in citizen science, to the need to think carefully about who is impacted by citizen science activities. A tension that came up throughout these discussions is the value of expertise – especially scientific – within an activity where citizen scientists are treated respectfully and their knowledge and contributions appreciated. The tension is emphasised by the contrast between the hierarchical nature of the academic world and the ‘flatter’ or ‘self-organising’ hierarchies that emerge in citizen science projects. I would guess that it is part of what Heidi Ballard calls ‘Questions that Won’t Go Away’, and will need to be negotiated in different projects. What is clear is that even in contributory projects, where the scientists set the project question and the protocol, and ask participants to help in data collection or analysis, simple hierarchical thinking of the scientist as expert and the participants as ‘laity’ is going to be challenged.

If you want to see other reflections on the Citizen Science 2015 conference, see the conference previews from Caren Cooper and others, and post-conference reports from Monica Peters, who provides a newcomer’s view from New Zealand, Kelsey McCutcheon, who provides an American one, Sarah West, with an experienced citizen science researcher’s view, and finally the Schoodic Institute, the sponsors and hosts of the CSA.


Tour Bus Maps!


You’re in London for only 24 hours. You’ve never been here before. You want to see as many things as possible. What do you do? Hop on a tour bus!

London has several “hop-on-hop-off” tour bus companies, plying set routes along London’s attraction-packed streets. A day ticket typically allows as many trips as you need along the route. Some of these companies have produced maps showing the routes they take. Here are just three of them – we’ve included the price for a turn-up-and-buy all-day ticket.

1. Big Bus Tours – £32 (includes a free Thames cruise and walking tours). Link to map.


Big Bus has a 3D-effect map with the main tourist sites and some other buildings shown in miniature, parks (with trees) and a basic street network. It shows the two main tour routes, plus two link routes and the route of their Thames cruise too, along with numbered stops. To their credit, many attractions well off the tour routes are also included, such as the BT Tower and Lord’s Cricket Ground, as well as little characters shown crossing at “that” zebra crossing at Abbey Road, hospitals and, oddly, some residential construction sites. It’s a useful and attractive enough map for its purpose, spoilt only by the addition of blue circles which are meant to highlight particular attractions but end up duplicating them and somewhat cluttering the map. There are a couple of minor mistakes, e.g. Embankment is in the wrong place. But, all things considered, it’s really not too bad. Armed with this map and a free TfL tube map (available at any tube station), your average rushed tourist could probably get around London quite easily.

2. The Original Tour – £29 (includes a free Thames cruise and walking tours). Link to map.


The Original London Tour map is similar to Big Bus Tours’, but is presented in 2D and is much less cluttered, while also including 3D drawings of the most famous buildings, and only the largest parks (again with trees). It shows the main tour routes, three link routes and also the Thames cruise route. The map only includes main roads, rather than side streets, and doesn’t show any attractions that are not close to the route. It also cheats slightly with the scale, putting King’s Cross much closer to Euston than it actually is, and similarly moving Liverpool Street station (and distorting the road to it) for convenience. As a general tourist map of London it is therefore much less useful, being most helpful while on the bus itself, to see where it’s going next and what to look out for. Overall it is a more attractive – but less useful – map than Big Bus Tours’.

Neither this map (nor the last) is available as a vector PDF, so they are likely to not look great if you print them out. Better to get the maps on the buses themselves! Incidentally, the respective cartographers have been looking at each other’s creations – both maps have mis-capitalised the Swiss Re Tower.

3. Transport for London buses – £4.40 (no Thames cruise or walking tours though). Link to map.


The thrifty tourist who is prepared to interpret and understand London’s complex network of regular bus routes can actually travel on much the same route as the special tour buses, for a fraction of the price – as long as you have a contactless card or Oyster card, as TfL buses have a daily price cap of £4.40. TfL usefully produce a special map of bus routes in central London, with key tourist attractions included. Unlike the other two featured above, it is not a geographical map, so it is less useful for walking sections – indeed, it doesn’t show any road names, or even the roads themselves. Once again, the tourist attractions are shown in 3D. The map is available as a vector PDF, so will look great when printed out.

Take your pick and enjoy your trip! But don’t forget there is a London beyond the tourist tour routes, and you won’t have experienced London until you’ve gone to at least one place further afield, e.g. the Olympic Park and Hackney Wick to the east, or Greenwich in south-east London.

DataShine and GeoJSON

A recent addition to DataShine’s functionality is the ability to drag and drop GeoJSON (and KML) files onto the web map. This video tutorial shows how you can use the excellent geojson.io website to create a file for use on DataShine. I created the tutorial for a first-year undergraduate field class (so I refer to fieldwork in it), but I thought it would be of wider use to the DataShine user community.
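
For those who haven’t used it, the output is an ordinary GeoJSON file; here is a minimal sketch of one, generated in Python (the area and property names are arbitrary examples – geojson.io builds the same structure interactively):

    # Build a one-polygon GeoJSON FeatureCollection of the kind that
    # geojson.io exports and DataShine accepts via drag-and-drop.
    # Coordinates and properties are arbitrary examples.
    import json

    feature_collection = {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "properties": {"name": "Example study area"},
            "geometry": {
                "type": "Polygon",
                "coordinates": [[   # one lon/lat ring, closed (first == last)
                    [-0.14, 51.50], [-0.10, 51.50], [-0.10, 51.52],
                    [-0.14, 51.52], [-0.14, 51.50],
                ]],
            },
        }],
    }

    with open("study_area.geojson", "w") as f:
        json.dump(feature_collection, f, indent=2)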

Citizen Science 2015 (second day)

After a very full first day, the second day opened with a breakfast that provided an opportunity to meet the board of the Citizen Science Association (CSA), and a nice way to talk with people who got up early (starting at 7am) for another full day of citizen science. Around the breakfast tables, new connections were emerging.

5A Symposium: Linking Citizen Science and Indigenous Knowledge: an avenue to sustainable development 

The session explored the use of different data collection tools to capture and share traditional knowledge. Dawn Wright, Esri chief scientist, started with Emerging Citizen Science Initiatives at Esri. Dawn opened with Esri’s view of science: beyond fundamental scientific understanding, it is important to see science as protecting life, enabling stewardship, and sharing information about how the Earth works, how it should look (geodesign) and how we should look at the Earth. As we capture data with various mobile devices – from mobile phones to watches and sensors – we are becoming more geoaware and geoenabled, and the geotechnologies that enable this – apps and capabilities such as storytelling – are very valuable. Esri views geoliteracy as a combination of understanding geography and scientific data – issues are more compelling when they are mapped and visualised. Collector for ArcGIS provides the ability to collect data in the field; it has been used by scouts, as well as in Malawi, where it is used by indigenous farmers to help in managing local agriculture. ‘GeoForm’ supports collecting information in the browser; maps were used to collect information about street light coverage, buffering the range that is covered. A third method is StoryMaps.arcgis.com, which allows information to be told with a narrative. Snap2Map is an app that allows data collection to be linked and put directly into story maps, and crowdsource.storymaps.arcgis.com allows collection of information directly from the browser.

Michalis Vitos, UCL – Sapelli, a data collection platform for non-literate citizen scientists in the rainforest. Michalis described the Extreme Citizen Science group, whose aim is to provide tools for communities all over the world. In the Congo basin, communities face challenges from illegal logging and poaching – forest people are in direct competition for resources such as the trees that they use – and with the FLEGT obligations in the Republic of Congo, some protection is emerging. The team collaborates with local NGOs, but still faces challenges including literacy, energy and communication. Sapelli Collector is an application that works through different levels of choices to allow data collection, and the Sapelli Launcher locks the interface of the phone, allowing only specific functions to be exposed to the user. The issue of connectivity was addressed with communication procedures that use SMS, and electricity can be provided in different ways – including while cooking. There is a procedure for engaging with the community, starting with Free and Prior Informed Consent. The process starts with icons, using them in printed form to make sure they are understood; after agreement on the icons, there is an introduction to the smartphones – how to touch, how to tap and the rest of the basics. The next stage is to try it in the field. Sapelli is now available on Google Play. The next goal is to ensure that participants can be shown what they collected; the team didn’t manage to find suitable satellite images, so they are experimenting with drone imagery and mapping to provide the information back to the community. In terms of results for the community, the project is moving from development to deployment with a logging company. The development of the icons is based on working with anthropologists, who discuss the issues with the community and lead the icon design – not all the icons work, and sometimes they need to change. The process involves compensating the community for the time and effort that they put in.

Sam Sudar, University of Washington – Collecting data with Open Data Kit (ODK). Sam gave background on the tool – the current version and the coming ODK 2.0. ODK is a set of information management tools for collecting and storing data and making it usable, targeted at resource-constrained environments – anywhere with limited connectivity – without assuming smartphone literacy. It is used all over the world: in Kenya, by JGI in Tanzania, by the Surui tribe in Brazil to gain carbon credits, by the Carter Center for election monitoring in Egypt, and by WWF in Rwanda – very diverse uses. We need to consider how technology empowers data collection. The ODK workflow is: first build the form, then collect the data, and finally aggregate the results. ODK Build / ODK XLSForm is the way to build the form (the latter in Excel), ODK Collect renders the forms, and ODK Aggregate can run locally or on Google App Engine. There is a strong community around ODK with much support for it. In ODK 1.0 there is no data update on the mobile device, as it replicated the paper process, and there are limitations on customising the interface or linking to sensors. ODK 2.0 provides better capabilities, allowing syncing of information even when it is done in the cloud. ODK Survey replaces ODK Collect, and ODK Tables is a way to interact with data on the device. The intention is to make it possible to interact with the data in an easier way.
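
For a flavour of the XLSForm format mentioned above, here is a minimal sketch that writes one with pandas – the survey/choices sheet layout follows the XLSForm convention, but the form fields themselves are invented for illustration:

    # Write a minimal XLSForm (survey + choices sheets) that ODK tools can
    # compile into a form. The fields are invented examples.
    # Requires: pandas and openpyxl.
    import pandas as pd

    survey = pd.DataFrame([
        {"type": "text",               "name": "observer", "label": "Observer name"},
        {"type": "geopoint",           "name": "location", "label": "Record your location"},
        {"type": "select_one species", "name": "species",  "label": "Species observed"},
        {"type": "integer",            "name": "count",    "label": "How many individuals?"},
    ])

    choices = pd.DataFrame([
        {"list_name": "species", "name": "chimp", "label": "Chimpanzee"},
        {"list_name": "species", "name": "other", "label": "Other"},
    ])

    with pd.ExcelWriter("field_survey.xlsx") as writer:
        survey.to_excel(writer, sheet_name="survey", index=False)
        choices.to_excel(writer, sheet_name="choices", index=False)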

Do local communities worry about the data collected about them? ODK deals with a lot of medical information, but the team doesn’t go on the ground. Michalis noted that there are not only problems with external bodies, but also cultural sensitivities about what data should be seen by whom, and there is an effort to develop tools that are responsive to this.

Tanya Birch, Google Earth Outreach – Community-based field data collection and Google mapping tools. The video shown covered Jane Goodall’s work with chimpanzees in Tanzania; due to habitat loss, there are fewer than 300,000 chimpanzees left in the wild. Lillian Pintea noted the importance of satellite images that demonstrate all the bare hills in this area of Tanzania. That led to improving the lives of the local villagers, so that they become partners in conservation. The local communities are essential – they share the status of the work with the people in the village. The forest monitors’ role is to work across the area, collecting data with ODK and monitoring it. Location information is easier to collect on a tablet; it is then uploaded to Google and shared with the global effort to monitor forests. Gombe is the laboratory for scaling up across the chimpanzee habitat, using Google’s reach to share the results widely.

A question asked how the tools have been used with youth, and about the challenges of working with young people. On engagement with youth, the term ‘digital native’ rings true, and they end up teaching the teachers how to improve the apps. The presentations emphasised simplicity in the technology, so you don’t need to know what is going on in the background. Another question was about people wanting to change the scale of analysis – standing at a point and taking a picture of a mountain – and how to address different scales. Including the map in the collection tool allows people to see the data as they collect it and, for example, to indicate the scale of what they viewed. There is also an option in Sapelli to measure scale in football pitches, and in CyberTracker there is an option to indicate that the information was collected in a different place from where the observer is. Data sharing is important, but make sure that it can be exported in something as simple as

6E Symposium: Human-Centred Technologies for Citizen Science 

Kevin Crowston, Syracuse U., & Andrea Wiggins, U. Maryland (symposium convener): Project diversity and design implications. Most attention is on small projects; by asking a wider range of projects, they discovered different practices. The design implication is to understand the goal of the project and the participation activities – from science and conservation to photography, there are different things that people are doing, with observations the most common type of contribution (see their First Monday paper). Data quality comes up in all the projects, and there are different strategies for it. There is diversity in engagement – from conferences to social media – and in rewards for participation: some projects do not offer rewards, one offers volunteer appreciation, training and equipment, and another is competitive. There is also socialisation, and even formal education. Funding is diverse – from grants and private contributions to sponsorship – and sustainability is an issue.

Mobile and Social Technologies
-Anne Bowser, U. Maryland: Gamifying phenology with the Floracaching app – geocaching for plants. They are looking for data on phenology; an earlier version was developed for BudBurst. Traditional volunteers focus their contribution on science, while millennials might be more interested in a mobile app that is based on games. Embedded maps can be used to create a cache, and there is a leaderboard and points. Floracaching was created from paper prototyping and focus groups. They found that the perception of gamification was important to millennials, who also enjoyed competition, wanted to be told what to do, and wanted feedback on how they had done (‘I’m not going to drive an hour to see a plant bloom’). Missions can be added to the design to help people learn the application and the data collection.

-Michalis Vitos, U. College London: Sapelli, a mobile data collection platform for non-literate indigenous communities. Michalis covered Sapelli and the importance of the interface design (see the previous session). The design of the icons is discussed with the community through, effectively, paper prototyping.

-Muki Haklay, U. College London: Geographical human-computer interaction for citizen science apps (I’ll blog it later!)

-Matt Germonprez, Alan Kolok, U. Nebraska Omaha, & Matt Levy, San Francisco State U.: Enacting citizen science through social media. Matt comes from a technology angle – he suggested that social media provides a different form of information, and asked whether it can be integrated into citizen science projects. The science project, which started in 2012, monitors atrazine with a litmus test. The project worked, but they wanted to use social media in the social setting in which they work. Facebook wasn’t used beyond providing information, but Twitter and Instagram were used to report observations publicly. The problem: no social conversations emerged, so maintaining social conversation is the next goal, in the Lil’ Miss Atrazine project.

Developing Infrastructures
-Jen Hammock, Smithsonian Institution: An infrastructure for data distribution and use. The project is looking at snails and the findability problem. A tool they want to develop is for data search – following different sources of information and merging them by taxon and location, and providing alerts about interests. Notifications would be provided to both the researcher and the contributor, and there can be knowledge about the person who contributed the information. There are technical and social barriers – will researchers and experienced naturalists be interested in sharing information?

-Yurong He, U. Maryland: Improving biodiversity data sharing among diverse communities. Looking at biodiversity and the Encyclopedia of Life, which has content partners who provide the data. She looked at 259 content partners and found six types of data providers. First are professional organisations that operate over time, such as the IUCN and the NHM. The second type is repositories – professional databases that emerged in the 1990s. Third are citizen science initiatives and communities of interest, such as Xeno-canto for bird song. Fourth are social media platforms such as Wikipedia, fifth are education communities who add information while focusing on education, and finally subsidiaries. We need to know the practices of the providers in order to support sharing.

-S. Andrew Sheppard, U. Minnesota & Houston Engineering, Inc.: Facilitating scalability and standardization. Andrew talked about the wq framework, focusing on collection, storage and exchange. Standards make it possible for projects to work together; there are devices, field notes, computers and phones, all challenging to coordinate. Web browsers are built on standards, making it possible to work across platforms, and JavaScript is also supported across platforms. wq.app provides the ability to collect information, while exchange requires sharing data between different sources, so the software needs to be built to adapt to standards – wq.io is a platform that allows the creation of multiple links. Use standards and HTML5, and build adaptable tools for data exchange.

-Stuart Lynn, Adler Planetarium & Zooniverse: Developing tools for the next scientific data deluge. Stuart discussed their online community – they have 1.2m users. The challenge for the future is that there are going to be many projects and data sources producing huge amounts of data. The aim is to partner the crowd with machine learning algorithms, but how do you keep the crowd interested if they are given only the most difficult cases, with no opportunity to learn or to progress slowly? Gamification can be stressful, so they try to give more information and learning, and to create a community where the issues are discussed. There is a huge distribution of comments – and deepening engagement. There is no one-size-fits-all, and we need to model and understand users better.

Contributors and Communities
-Jenny Preece, U. Maryland: Motivating and demotivating factors for long-term participation – what motivates people to come back again and again? Describing the different motivational aspects in the work of the late Dana Rotman, who collected information in the US, India and Costa Rica: 142 surveys from the US, 156 from India, and also interviews in the three countries. Using a grounded theory approach, she developed an initial framework in which, for long-term participation, there are internal and external motivations. Demotivators include time, problems with technology, and long commitment to the task.

-Carsten Oesterlund, Gabriel Mugar, & Kevin Crowston, Syracuse U.: Technology features and participant motivations – the snowflakes of participants: how might we approach them, and do people change over time? Looking at Zooniverse, specifically Planet Hunters, there are annotations, Talk and other sources of information. On the Talk pages, newcomers are encouraged to annotate and comment on images, and also to look at what other people – including those who are more experienced – have done. Use of Talk changes over time: people start by putting in comments, then go quiet and stop commenting, and then later on start putting in more information. There is also role discovery, in terms of engagement and what people do in their community.

-Charlene Jennett, U. College London: Identifying and promoting creativity. Creativity is a puzzling question, debated in psychology, with some people looking for breakthrough moments while others look at everyday creativity. There are examples of projects that led to creativity, such as Foldit. In terms of everyday creativity in citizen cyberscience, interviews with volunteers yielded examples including artwork from the Old Weather forum, the Galaxy Zoo ‘peas’, and EyeWire chatbots that were created for members. People who are engaged in a project contribute more to it. Providing feedback on progress is important, as are regular communication and personal feedback in blogs and answers on Twitter. Events help, and there is also a need for role management.
-Carl Lagoze, U. Michigan: Inferring participant expertise and data quality – focusing on eBird; there is a paper on this in Big Data & Society. The standard way is to control the provenance of the data, but a ‘porous zone’ is being created, and today there is less control over the whole area, with barriers between novices and experts breaking down. How can we tell experts from non-experts? This happens across areas – it is a sort of distributed sensor network with weak sensors. Are there signals in the data that help you to identify people and the quality of their information?

7C Panel: Citizen Science and Disasters: The Case of OpenStreetMap

Robert Soden (University of Colorado, Boulder) described the GFDRR Open Cities project to collect data for resilience planning, and explained the reasons for selecting OpenStreetMap for it. Kathmandu is recognised as an at-risk place, and there was an aim to identify schools at risk, but the basic mapping needed to be done first. There was a local partnership with universities in the area. One challenge was figuring out the data model – number of storeys, usage, roof type, wall type, age. Students needed to collect information that would help in modelling the risk, and the project produced a lot of training material. The project was successful in collecting the data and enriching the information, and the process helped in creating an OpenStreetMap community, which then launched a local NGO (Kathmandu Living Labs). Trust in the data was important, and there was a risk of the data being discredited – to deal with that, they involved targeted users early, as well as spot-checking the data and doing a fuller assessment of it. They are launching similar projects in Jamaica, Vietnam and Madagascar. They want to engage people in more than just data collection, and to work out how they can support growing the community.

Mikel Maron (Humanitarian OpenStreetMap Team) covered what OpenStreetMap (OSM) is – the OSM Foundation is a separate entity from Wikimedia, which is confusing. OSM is a very wide community of many thousands of people who continue to contribute. The Humanitarian OpenStreetMap Team (HOT) follows the ‘cute cat theory for humanitarian maps’ – use something that is alive and that people are used to contributing to when you need it in emergency situations. OSM is used in many organisations and projects in government; attempting to map all these organisations is challenging. In Bangladesh there are 6 OSM projects, which requires cooperation between agencies – at least all the projects contribute to the same database. Organisations find it challenging that they need to support something they can’t control. Starting with Gaza in 2009, the OSM community mapped the area although there was no specific request, and OSM was eventually used to create a local tourist map, though the community in Gaza didn’t continue – providing long-term support is difficult. Haiti in 2010 helped in producing the data, but it was difficult to coordinate, which led to the Tasking Manager. MapGive provides support to the crowd through imagery – a way to support OSM by utilising the DigitalGlobe database. There are developments linking OSM and citizen science: there is very rich data in OSM, and a need to understand the social science and data research around it.

8E Symposium: Ethical Dimensions of Citizen Science Research
Caren Cooper opened with a list of issues: participation vs exploitation; beneficence, non-maleficence, autonomy and justice; incentives vs manipulation; IP and data ownership; data misuse, sharing and accessibility; openness vs privacy and security; and cultural competence.

Holly Menninger leads yourwildlife.org – the project she focused on is the home microbiome. Volunteers share dust samples from their homes, and the team looks at the content. Volunteers want to understand their homes, but also the science. There was the issue of reporting back to participants – they want to understand the information, and it was a challenge to translate the scientific information into something useful. People are interested in the information about their home, sometimes due to personal issues – e.g. a request for the results because someone in the house is ill. There is a lag of 2 years between samples and results, and this needs to be explained to the participants. There is also the issue that the science is exploratory, which means that there are no specific answers that can be given to participants.

Madhusudan Katti explored the appropriation of citizens’ knowledge. IP in the realm of traditional knowledge is discussed a lot: appropriating local knowledge and then publishing, when the information came from local knowledge through interviews – the scientists get the fame. There is also collecting information about endangered species where there is a risk from the local community. He mentioned the film Living with Elephants, which focuses on the conflicts between humans and elephants, but such information might also help poachers.

Janet Stemwedel highlighted that even participant-led, DIY citizen science can benefit from ethical review. In DIY science there is self-efficacy and control of the process, so if the participants are running the show, what can go wrong? Who better to protect my autonomy than me? The answer is that autonomy is tricky: you need good information about potential risks and benefits; your current choices can hurt future prospects for choosing freely (don’t use autonomy to get addicted, and consider what you do with your personal information); and finally, our exercise of autonomy can impact others’ prospects of free choice (DNA analysis has an impact on your wider family). An IRB is a mechanism to think this through – potential consequences (good and bad), who could be impacted, and strategies for answering the question. Reasons to resist IRBs: they are not legally required, academic scientists complain about them, and there is often no access to an IRB.

The reason to get over this resistance is that unintentional harm is not a good thing; getting feedback from more eyes also helps in learning about tools and approaches. Ethical objectivity means going beyond gut feeling and discussing the issues with other people.

Anne Bowser discussed the ethics of gamification – the use of game design elements in non-game contexts (such as leaderboards). Old Weather had an element of games, as does Floracaching. There is a labour/exploitation issue too – playing games such as Civilization II is done for fun, while you learn about history, but online games use different approaches to extract more from their users. Does contributing to science cleanse the ethical issues because it is not done for profit? Crowdsourcing has been critiqued in different ways. There are also tracking and privacy issues, as gamified systems reveal habits and all sorts of details about the users (e.g. in Foursquare) – Salesforce uses badges to encourage people to act in specific ways as employees. Ethical citizen science: treat participants as collaborators; don’t waste volunteers’ time; volunteers are not computers (Prestopnik & Crowston 2012). Ethical design allows participants to be aware of the implications and to decide whether they want gamification or not.

Lea Shanley – covering data privacy. Her interest came from working with Native American tribes on participatory mapping, as tribes started to use participatory GIS. There were many things they wanted to map, and there were differences in views about whether or not to share the data – some places were careful and some were not. In disaster response, there is all the social media curation, and many open data evangelists started sharing the locations of first aiders, actually putting them at risk. In citizen science there is a lack of attention to location – the places where observations were recorded, and even real-time information that risks the physical security of participants. Face recognition is possible. Information collected by volunteers can reveal medical details that can harm people’s prospects, as well as sensitive information, sacred site locations and endangered species. Toxic environments can put volunteers at risk. There are also issues with who interprets and manages the data, and with social norms and how they are reinforced. An emerging area is the security of social media – crowdsourcing teams were hacked in the DARPA Red Balloon Challenge, and there can be issues with deliberate hacking of citizen science by people who don’t like it.

Dianne Quigley – the Northeast Ethics Education Partnership, which grew out of issues of environmental and social justice, aims to improve the ethical knowledge of researchers. When researchers start with a community, they start with a discussion of risks/benefits and consider who is getting something out of it. Graduate students are trained to know how to work with communities: avoiding harm – non-maleficence; informed consent when working with communities; protecting data; and justice as a way to think of linguistic diversity, respect for local knowledge, and recruitment that is fair in terms of representation. Data management and protocols matter, and there is a need to learn humility – to respect the needs and practices of the community.

There are ideas to start an ethics group in the CSA and to consider a code of ethics or a participant bill of rights. Do we need to extend IRB oversight? A co-created common rule? Is there value in a code of ethics, or will it be a dead letter? The discussion explored the need for bottom-up projects, which also need to consider impacts and outputs; communication with the public and promising only what the research will deliver; and the fact that the investment of time in citizen science by early career researchers can impact their career prospects. These are challenges that are common in community participatory research.

9A Panel: The brave new world of citizen science: reflecting critically on notions of citizenship in citizen science

The panel specifically reflected on the citizenship aspects of citizen science. Citizen science is a significant phenomenon, and there is a feeling that a critical voice is needed within it. What is the place of the citizen in citizen science? There are questions about governance, practices and methodologies. How does it connect to the wider democratisation of knowledge?

Eugenia Rodrigues (University of Edinburgh, UK) asked what model of citizenship citizen science promotes. One way is to look at the demographics, but we can also ask about the term itself – it is possible to use volunteer, amateur, or extended peer review instead. ‘Citizen’ includes autonomy, creativity, liberty, responsibility, having a stake, etc. What are the citizens doing, and are we constructing a story that recognises the citizen scientists as citizens? Such a story is appearing in work in the North-east of England dealing with water pollution in local woodland, where residents noted that the EA was not doing things in a satisfactory way, so their need for their local habitat was overlooked. In this case we have contextual/experiential knowledge and expert monitoring skills leading to a change. Citizen science can be seen as counter-expertise. Some classifications are trying to control the role of the citizens – the need to control levels of participation to improve quality does not give space for participants to exercise their citizenship fully.

Shannon Dosemagen (Public Lab) – in Public Lab there is specific attention to environmental monitoring, and a need to re-imagine roles. Public Lab prefers to use ‘civic science’ or ‘community science’ rather than ‘citizen science’, because the latter can be controversial or mean different things in different places. They also think of scientists and non-scientists in a non-supplicant way, and consider how to engage people in the whole process. Different roles play out in different ways – they want to be active about it. There are different roles within the community of Public Lab, but it is about an egalitarian approach to roles.

Esther Turnhout (Wageningen University) is looking at expertise and quality control in citizen science networks for biodiversity knowledge. Biodiversity knowledge exists among amateur naturalists, who have started using the term citizen science. To conceptualise it: there are complex relationships with mainstream science. Biodiversity recording has been around for a long time, and the data is in increasing demand for decision making. What this brought with it is a demand to professionalise and to increase standards and quality. Validation happens in complex networks of amateurs, experts, professionals and decision makers – looking at the actors in the network, validation is done in different places with different motivations. There are hierarchical networks inside the naturalist groups, enforced with novices. The digitised data is compared with existing observations, and there is reciprocity between the observer and the process of collecting and organising the data. Lots of things are involved – butterflies, the community of observers, the field guide – and the process is circular. But increasingly, validation is imposed and procedural; it ceases to be collective, and the records no longer circulate. The main concern is to keep track of where the data goes, and that it belongs to the observer. Citizenship depends on not just turning the data into probabilities – there is a need to maintain control over the data.

Rick Hall (Ignite!, UK) – there have been different learned societies around the country, the learned societies that emerged from the 18th century, while the acts of enclosure and the workhouses enslaved large groups in society. Today we can ask whether Internet barons are trying to do the same as the mill owners. There is a cultural entitlement in the human rights declaration, and in the words of the current president of the Royal Society, finding things out for yourself is at the very heart of science. It matters where it takes place – for example in a pop-up shop that hosts community curiosity labs, where people explore questions that matter to them, or spaces in schools where young people can take ownership of their investigations; spaces like Lab_13 are spaces to learn how to become a scientist. The issue is asking young people what they want to know. We need spaces where citizens learn not just science but how to become scientists. We need more community and civic citizen scientists, because the world needs more curious minds.

Erinma Ochu (University of Manchester, UK) – as a neuroscientist, she found in her research that empathy and stories are needed, given the way science has evolved to be powerful and controlling. What happens when you bring science into the public realm? How do we ensure that it is inclusive of women and minorities?

For me, the discussion highlighted that it was mostly about collective action (which Katrin V identified within the ‘citizen’ concept) and egalitarianism in the production of knowledge – expertise without hierarchy.

Another observer raised the issue of democratisation, and asked what notion of political action we would like to see within citizen science.

The final keynote was from Amy Robinson of EyeWire: Why Do Gamers Enjoy Mapping the Brain? She demonstrated the game and how it works, and shared lessons from EyeWire’s first two years. The idea that ‘if we build it, they will play’ did not happen; instead, a carefully crafted, slowly built community emerged – creating the tools and learning about how things are used. Media is crucial – 60% of EyeWire registrations came within 5 days of a major media event, which then ripples through Facebook, Twitter and other social media. A Facebook page can convert viewers into participants. Media relations are an active engagement, not just waiting for journalists – share all sorts of things, including funny things. Reaching out to media also requires being prepared for the response – you need to cope with it and capture it. Create internal analytics to understand how the project works. Engagement is also a major issue – there is a huge drop-off after two months, but creating games and missions can provide a reason to keep people’s interest. Prestige within the community can motivate participants – changing a user’s handle colour can signal recognition by the project. There are also specific challenges, and players set their own challenges. For accuracy and efficiency, the power players in the game are given a bigger role in the project – how do you recognise a potential power player in your game? The design of the entry page is critical – EyeWire’s is minimalist and reduces the amount of information needed to enter the system. They have also created all sorts of interesting collaborations, such as fascinating visualisations. Finally, there is a need to take risks and see whether they work or not.

Abe Miller-Rushing closed the conference by asking people to share talks and links; the posters will also come online. The aim is to create a community and serve its needs. The new board chair, Greg Newman, completed the conference with some takeaways. Another blog post from the conference is at https://wildlifesnpits.wordpress.com/2015/02/12/power-of-the-people-thoughts-from-the-first-citizen-science-association-conference/


Village Model

http://village.anth.wsu.edu/node/67
What now seems like a very long time ago, when I was getting up to speed with agent-based modeling and GIS, I came across a great edited book entitled "Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes".

One chapter in particular that I really enjoyed because of its clarity and use of data was by Kohler et al. (2000) entitled "Be There Then: A Modeling Approach to Settlement Determinants and Spatial Efficiency Among Late Ancestral Pueblo Populations of the Mesa Verde Region, U.S. Southwest". 

The chapter explored the question of why Pueblo people varied their living arrangements between compact villages and dispersed hamlets between AD 901 and 1287. To this day, I use this chapter when I am teaching about early agent-based models. While the initial model was implemented in Swarm, it has since been ported to Repast and developed further by an NSF-supported program called the Village Ecodynamics Project.




Full Reference:
Kohler, T.A., Kresl, J., Van West, C., Carr, E. and Wilshusen, R.H. (2000), 'Be There Then: A Modeling Approach to Settlement Determinants and Spatial Efficiency Among Late Ancestral Pueblo Populations of the Mesa Verde Region, U.S. Southwest', in Kohler, T.A. and Gumerman, G.J. (eds.), Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes, Oxford University Press, Oxford, UK, pp. 145-178.

Book Provides Progress Report on Creating Spatial Data Infrastructures in Europe – Spatial News (press release)


He currently is a visiting professor at the Centre for Advanced Spatial Analysis at University College London. Crompvoets is an associate professor at KU Leuven Public Governance Institute in Belgium and secretary-general of EuroSDR, a European spatial ...

Citizen Science 2015 (first day)

San Jose is the location for the first Citizen Science Association meeting, on the 11th and 12th of February. The level of enthusiasm for citizen science among researchers and practitioners was palpable even before the conference, with an overwhelming number of submissions and abstracts. In the end, the conference ran with 7 parallel sessions and many posters, as a way to allow as many participants as possible to present their work. Below are my notes from the day (I’ll improve this post and add links later).

Rick Bonney (who was elected as the treasurer of the association) started the conference by emphasising the reasons for having the Citizen Science Association (CSA): learning from others, developing synergies between projects and sharing information collaboratively, so we can start to solve the wicked problems society is facing by finding the answers that we need. He noted that the CSA has 3,000 members and a new journal – Citizen Science: Theory and Practice – and that the conference had almost 650 participants.

Following Rick, Lila Higgins and Alison Young opened the conference, with Lila promoting the use of the hashtag #WhyICitSci on Twitter, where participants shared a range of reasons why people work in citizen science.


The opening talk, ‘A Place in the World – Science, Society, and Reframing the Questions We Ask‘, was given by Chris Filardi of the Center for Biodiversity and Conservation, American Museum of Natural History. Filardi, an evolutionary biologist by training, has come to recognise the role of wide participation in science in understanding our place in the world. He started his career by going to New Guinea to study birds in remote places. Through interaction with villagers in the highlands of New Guinea, he learned that science depends on a sense of purpose and on understanding the set of relationships that people have with their natural environment. Over the years, he realised that many of the datasets he was using came from citizen science. He admits that he is new to the field, but he has noticed the lively discussion around its definition. When he read Alan Irwin’s ‘Citizen Science’, he found an explanation that he liked – seeing citizen science as the point where analysis and intervention meet – and for him this was a huge personal discovery. For scientists, there is an amazing wealth of knowledge that is born from local systems – social systems that deal with the local environment – yet scientists often preach from their own pedestal. If we engage people in the full life cycle of science, we can get a more meaningful relationship between science and society. Much about citizen science and society was revelatory to him. Citizen science helps in exposing obvious things, and it helps in reframing the questions that we ask – as examples from working with indigenous people on their relationship with the forest, which also linked to the preservation of grizzly bears, demonstrate. Citizen science is a touchstone in linking science and other perspectives – when people are involved in the full life cycle of science, it becomes possible to notice pre-existing values and practices, and it is often the community that helps to lead to the wished-for results. Conservation science can provide communities with evidence that allows them to improve the protection of their environment; by working in a participatory way, we can get better results. Citizen science also reveals risks worth taking – scientists are scared of bringing people who are not trained scientists into scientific projects. Engaging a wider audience in data collection can carry risks – for example, respecting areas that are taboo for local people, which are sometimes protected through such mechanisms, and not insisting on exploring them. We need to consider the costs of insisting on science – for example, carrying out a survey in an area that the community has decided not to enter because of their beliefs. Science in the name of evidence can harm relationships – we need to know when science needs to step back. Citizen science can link talk, action and symbol – social discourse, actions and beliefs – and help in dealing with some of the challenges that we have as a society. Sometimes there are risks for the scientists themselves – admitting that the scientific process was compromised because of local beliefs can be problematic amongst scientists. The discussion that followed the talk also mentioned participatory action research and participatory science as alternative names for the field.

The second session that I attended was 1D Re-Imagining Citizen Science for Knowledge Justice – a dialogue with Tom Wakeford, Alan Irwin, Erinma Ochu, Michel Pimbert, and Cindy Regalado. The session was organised as a dialogue in groups of 8, with several groups in the room linking up. It included a facilitated discussion of the Questions That Won’t Go Away (QTWGA) which Heidi Ballard recently identified – and of how to either solve them or learn to live with them. The group that I found myself impromptu facilitating raised data handling – quality, sharing, ownership and ethics – as a major issue in citizen science. Other groups challenged the term ‘citizen’, from both scientists’ and participants’ perspectives, and raised respect for people’s time and money and thinking about compensation – are we treating participants properly and going beyond free labour? Another issue is how expertise is defined – there are different types of expertise from scientists and participants, and they need to be negotiated. For example, when research is done with communities, it is important not to further stigmatise those communities, and to feel obliged to provide information back. A second round explored visions for the future of citizen science. In my group, concepts of place-based citizen science – or, in the medical field, disease-based citizen science – were proposed, with a lot of attention on community-focused and community-based research addressing issues that are set by the community. There was also a wish for truly collaborative citizen science involving decision makers, industry and scientists. One idea raised was to educate natural scientists in doing citizen science as part of their training, along with seamless collaborations in which the data is properly used, and citizen science that is funded for the long term.

The next session that I attended was 2G Talks: Tackling Grand Challenges and Everyday Problems with Citizen Science. The first talk was by Gianfranco Gliozzo from ExCiteS, titled Using Citizen Science to evaluate the cultural value of biodiversity (co-authored with Elizabeth Boakes, David Roy, Muki Haklay, and Chloe Smith). Gianfranco described a project funded by the UCL Grand Challenge of Sustainable Cities programme. The study looked at cultural ecosystem services – the inspiration that people receive from nature, which influences their wellbeing. The approach focuses especially on what can be learned from citizen science data about the cultural services of the environment, looking specifically at data for Greater London, which is almost 50% vegetated space. The data comes from iSpot, iRecord and GiGL. So far, they have found an emphasis on birds and flowering plants. The study also looked at spatial patterns. The conclusion is to integrate data, but also to appreciate the diversity of data sources and their contribution to the total information.

Karen James discussed Combining Citizen Science and DNA-Assisted Species Identification to Enable “A New Kind of Ecology”. Karen opened by explaining the challenge of identifying the taxonomic classification of a species during citizen science activities. There are tools such as Leafsnap or Wildlife Acoustics that help in the process, but the issue remains very challenging. She focused on DNA barcoding, which allows expertise to be extended – the Barcode of Life website is dedicated to this. DNA identification provides further validation, and there is an effort to create a library of DNA sequences of species. She demonstrated the potential of the approach by using DNA to identify invasive species, and she also sees potential in engaging DIY-bio enthusiasts in this work.

John Tweddle’s talk Beyond Transcription: Realising the Research Potential of Museum Specimens Through Citizen Science (co-authored with Mark Spencer and Lucy Robinson) discussed work at the Natural History Museum (NHM). The NHM has extensive engagement in citizen science, from molecular biology to field work. John focused on unlocking the museum’s collection – they have 3 billion specimens, plus metadata. It’s a treasure trove of information – and most of it is locked away. There is an easy way to take pictures of specimens, but the metadata is handwritten, so there is an interest in crowdsourcing the preparation of the data for further analysis. John suggested moving beyond this – engaging participants in measurements and in deriving further information from the digitised specimens. There is also potential to add place-based knowledge to enhance the information in the collection, and then to design new, enhanced projects. The Robert Pocock Herbarium project was started by amateur historians, but once they came to the museum, they ran a community-led project to back-track where the specimens were collected and to add contextual information.

Alison Young’s talk Acting Locally and Thinking Globally: Building Regional Community around Citizen Science to Broaden Impacts and to Create a Scalable Model (with Rebecca Johnson) covered the work of the California Academy of Sciences, which focuses on biodiversity research and holds 45 million specimens. CAS considered how to give citizen scientists the same kind of engagement that its researchers have – the aim is for their citizen science to be used both for research and for managing biodiversity. They started with Mount Tamalpais, aiming to create a benchmark and to record the biodiversity in many ways, including adding specimens to the herbarium. They define their community as the citizen scientists; those who might want to use the data (scientists and government); and the practitioners, organisations and groups doing related work. They rely on iNaturalist as part of their engagement plan, and are also considering grass-roots bioblitzes that people can run more easily than full ones. The ability of people to come and document species with iNaturalist somewhere close by is valuable, and people can engage in a short exercise of just a few hours. Nerds for Nature are helping to establish rapid bioblitzes.

As part of session 3A Speed Talks – Across Conference Themes, Simon Lambert (Lincoln University, New Zealand) covered Indigenous Peoples as Citizen Scientists, talking about Māori involvement in joint projects. There is a lack of First Nations people at many conferences. He talked about the people he comes from, having worked with Māori communities for some years. There is a history of science in which people were treated as specimens, and working with them requires recognising this history and this view of science. There are global-level instruments that recognise indigenous people – the CBD, TRIPS and UNDRIP – so asserting themselves is challenging for indigenous groups, and sovereignty includes saying no to any science project. Good science comes from good politics – inclusive, ethical, acknowledging First Citizens as First Scientists – and pushing into the social sciences to effect change.

In session 3G Tackling Grand Challenges and Everyday Problems with Citizen Science, Christian Adams covered Google tools in From the Ground to the Cloud: Groundtruthing Environmental Change (co-authored with Tanya Birch and Yaw Anokwa). Christian focused on the technology of data collection in the field – paper has both upsides and downsides, and technology addresses a lot of the issues with paper. ODK (Open Data Kit), an open source project, provides the ability to collect data in the field. With ODK you build forms, then collect the data, then manage and analyse it with Google Maps Engine and Google Earth Engine. ODK has a tool for building forms, and it also has a sensors framework. ODK Aggregate allows data to be shared as a spreadsheet or in Fusion Tables, which can then be visualised on maps.
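The talk stayed at the level of the toolchain, but the data end of the workflow is easy to picture. As a rough illustration, here is a minimal Python sketch that reads a CSV export of form submissions (of the kind ODK Aggregate produces) and pulls out the recorded locations – the column names here are hypothetical, since they depend on the form design:

import csv

def read_points(path):
    # One row per submission; ODK stores a geopoint answer as a single
    # space-separated field: "latitude longitude altitude accuracy".
    points = []
    with open(path, newline='') as f:
        for row in csv.DictReader(f):
            lat, lon = row['sample-location'].split()[:2]  # hypothetical column name
            points.append((float(lat), float(lon), row.get('notes', '')))
    return points

for lat, lon, note in read_points('submissions.csv'):
    print(lat, lon, note)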

The final talk in the session was Public Lab: Open and cooperative structures for community-based environmental health monitoring by Shannon Dosemagen. Shannon covered the work of the Public Laboratory for Open Technology and Science, looking at the process it has established for working with communities. Shannon described the new tools that emerged at the time of the BP oil spill in 2010, and an analysis of the barriers to community-based environmental science and health monitoring: the tools in use are expensive, and they are aimed at expert users who can interpret the results. Public Lab tries to engage people in the full data life cycle and to provide everything that is needed, from the development of the tools to the use of the results. At the heart of the activities are the social activities. The combination is: low-cost hardware + collaborative web software + visual data that helps people understand it + a public archive so everything is accessible + you. They have created an open space that allows people to share experiences. She demonstrated the web tools – first, collaborative writing efforts and individual research notes that are tagged to bigger bits of information. They encourage people and recognise the contributions that people make. They maintain many email lists that are localised and place-based, and they have 65 organisers who have integrated the tools of Public Lab in their areas. There are also ‘places’ pages to highlight the local connections. They treat participants as researchers, and they build openness into the process – the physical link between the balloon and the operator allows for social interactions. The ‘barn raising’ is also valuable, with people coming together, and they value mainstreaming true accessibility – by making material findable on Google, it becomes accessible. They also protect openness with viral licensing, so share-alike licences are central, while allowing local versions of hardware and tools. A third of the organisers are associated with higher-education institutions. Calibration is another issue that Public Lab is working on with research institutions to test the tools.

The final symposium of the day was 4A DIY Aerial Photography: Civic Science and Small Data for Public Participation and Action, with Shannon Dosemagen, bringing stories of engagement and change ranging from the Los Angeles River (Lila Higgins) and the Gulf of Mexico (Scott Eustis) to Uganda (Maria del C Lamadrid) and Palestine-Israel (Hagit Keysar).

The symposium covered the technical aspects of DIY aerial photography. Public Lab aims to create DIY, low-cost (below $150) tools that can be used by different communities. The basic components are provided as a set of tools that can be built easily, with tutorials, guides and hand-drawn instructions. People can take any design and change it, but are asked to share it back and continue to develop it. People use aerial mapping across the world – MapMill is an image-sorting site based on good/bad image ratings, and MapKnitter allows the integration of images into a map. There is attribution to the people who collected, classified and stitched the map. Finally, they use print publications to share the data with other people. Scott Eustis then described wetland mapping in the Gulf of Mexico. The aim is to understand where people want to manage their wetlands – it provides a way to involve the people who know the land deeply. A three-hour field trip takes about two hours to turn into a useful map. Going out after rain events to record the location of water helps to show the authorities which places are flooded and causing problems. You can have ‘eyeball’ statistics, but in most cases only a little information is needed beyond the image itself. With near infra-red there is also the ability to see information about the growth of plants and their health.

Lila Higgins described her additional interest in the Los Angeles River – a lot of people don’t know that there is a river there, and the balloon mapping was aimed at increasing recognition of it. The river became invisible – in the early 1900s, during storm events, it caused houses to collapse, with major property loss. To deal with that, a concrete channel was created starting in 1938, and the river was fully concretised by the 1960s. Most people see only the concrete view, but some people saw different options for the river. Activists navigated the 51 miles of the river in kayaks, and the EPA then declared it protected under the Clean Water Act. The aim is to make it a shared space. They used Google Earth to pick a section of the river, carried a blue balloon over it, and used the mapping to build a community. They are documenting events that promote the use of the river.

Maria del Carmen Lamadrid looked at eviction mapping in Uganda, where authorities moved to evict people who had been using a market site – some for over 20 years – in February 2013. The market was mapped in November 2012, and when the police came to evict the residents, the evidence was used to prove that the place was being used in a valuable way. The project asked questions about self-representation and about tools that allow the people in an area to control data collection. Aerial mapping was combined with stills from the ground to tell stories about the area. Land issues are complex in Uganda, with tension between different tenure structures; because official data and tools such as Google Maps didn’t show the level of use of the market, the balloon mapping supported the residents’ claims to their rights. In the end, the market was evicted, and they created a map that shows the eviction. Although the project did not prevent the eviction, it helped in terms of empowerment and gaining control over the process – the residents were treated as equals throughout.

Hagit Keysar looked at two cases in East Jerusalem, as an activist and researcher, focused on recording and documenting human rights abuses – accountability driven by the communities themselves. 60% of Jerusalem’s population lives in East Jerusalem, and of them, 40% are Jewish and 60% Palestinian. The authorities fly regular surveillance balloons in the area – so what is the role of DIY aerial photography in this context? The village of Silwan, outside the Old City, is contested, and a map was created with information activists in the neighbourhood – they wanted to free themselves from dependency on human rights organisations or the UN, which don’t provide suitable information – annotating the imagery with personal stories. The detail of the imagery – effectively giving people satellites above their own neighbourhood – provides information that Palestinians cannot otherwise access due to local restrictions. The second case is in Beit Safafa and concerns the impact of a six-lane motorway cutting through the neighbourhood, a discriminatory urban planning practice. A community activist used the aerial photograph to explain the issues when presenting the information in different places. The photograph is not a map – it’s a testimony, something real that makes a difference. The maps that were available did not make the damage to the community legible, so the photograph provides a testimony to the damage – a person who looks at the image can understand what the abuses are.

The discussion also highlighted the integration of objective information from the image with the narrative and stories of the community. How was the journey to empowerment? A joint journey, in understanding how the technology works. The use of this toolkit makes people interested and creates a shared imagination between the person who promotes it and those who are involved. Flying above your local environment is powerful. There is also the potential to document important temporal moments (e.g. many birds, or flooding).

The poster session was a unique opportunity to finally meet Louis Liebenberg and to hear about CyberTracker from the person who has led it since 1996.




You’ve Got Mail… like heaps of it. – Australian Women’s Weekly

In her book, The Mathematics of Love, Fry – a mathematician and complexity scientist from University College London's Centre for Advanced Spatial Analysis – has applied mathematics to explore themes like stages of romantic journey, chatting people up ...

Geosimulation and Big Data: A Marriage made in Heaven or Hell? Schedule

Do you like big data and geosimulation, and are you wondering when to book flights or which sessions to attend at the forthcoming AAG Annual Meeting? If so, you might like our sessions entitled "Geosimulation and Big Data: A Marriage made in Heaven or Hell?", taking place on Wednesday the 22nd of April 2015.

Abstract of the Sessions:

In recent years, human emotions, intentions, moods and behaviors have been digitised to an extent previously unimagined in the social sciences. This has been in the main due to the rise of a vast array of new data, termed 'Big Data'.  These new forms of data have the potential to reshape the future directions of social science research, in particular the methods that scientists use to model and simulate spatially explicit social systems. Given the novelty of this potential "revolution" and the surprising lack of reliable behavioral insight to arise from Big Data research, it is an opportune time to assess the progress that has been made and consider the future directions of socio-spatial modelling in a world that is becoming increasingly well described by Big Data sources.

In these sessions we will have methodological, theoretical and empirical papers that engage with any aspect of geospatial modelling and the use of Big Data. We are particularly interested in the ways that insight into individual or group behavior can be elucidated from new data sources - including social media contributions, volunteered geographical information, mobile telephone transactions, individually-sensed data, crowd-sourced information, etc. - and used to improve models or simulations. Topics include, but are not limited to:
  • Using Big Data to inform individual-based models of geographical systems;
  • Translating Big Data into agent rules;
  • Elucidating behavioral information from diverse data;
  • Improving simulated agent behavior;
  • Validating agent-based models (ABM) with Big Data;
  • Ethics of data collected en masse and their use in simulation.
2192 Geosimulation and Big Data: A Marriage made in Heaven or Hell? (1)

Wednesday, 4/22/2015.
8:00 AM - 9:40 AM.
600a Classroom, University of Chicago Gleacher Center, 6th Floor.

Chair: Nick Malleson 

Abstracts:

*Atsushi Nara:
A GPGPU approach for simulating and analyzing human dynamics
*Kira Kowalska, John Shawe-Taylor and Paul Longley:
 Data-driven modelling of police patrol activity 
*Martin Zaltz Austwick, Gustavo Romanillos Arroyo and Borka Moya-Gomez:
Simulating Rush Hour Bicycle Traffic in Madrid 
*Hai Lan and Paul Torrens:
Voxel based Cellular Automata with massive cells for Geo-simulation: Ice dynamics simulation in Antarctic locations as example
*Philippe J. Giabbanelli, Thomas Burgoine, Pablo Monsivais and James Woodcock:
Using big data to develop individual-centric models of food behaviours

2292 Geosimulation and Big Data: A Marriage made in Heaven or Hell? (2) 

Wednesday, 4/22/2015.
10:00 AM - 11:40 AM.
600a Classroom, University of Chicago Gleacher Center, 6th Floor.

Chair: Alison Heppenstall

Abstracts:

*Kostas Cheliotis:
Coupling Public Space Simulations with Real-Time Data Streams 
*Andrew Crooks and Sarah Wise:
Leveraging Crowdsourced data for Agent-based modeling: Opportunities, Examples and Challenges 
*Ed Manley, Chen Zhong and Michael Batty:
Towards Real-Time Simulation of Transportation Disruption - Building Agent Populations from Big Mobility Data 
*Alison Heppenstall, *Nick Malleson and Andrew Evans:
Evaluating Big Data demographics for population modelling 
Muhammad Adnan, Alistair Leak and *Paul Longley:
Exploring the geo-temporal patterns of Twitter messages

2492 Geosimulation and Big Data: A Marriage made in Heaven or Hell? (3) Discussion Session

Wednesday, 4/22/2015.
1:20 PM - 3:00 PM.
600a Classroom, University of Chicago Gleacher Center, 6th Floor.

Chair: Nick Malleson

Abstracts:
 
*Paul M Torrens and Hai Lan:
Micro big data and geosimulation 
*Mark Birkin:
The Ten Commandments of Big Data 
2:00 PM to 3:00 PM: Discussion

 Organizers

  • Alison Heppenstall, School of Geography, University of Leeds
  • Nick Malleson, School of Geography, University of Leeds
  • Andrew Crooks, Department of Computational Social Science, George Mason University
  • Paul Torrens, Department of Geographical Sciences, University of Maryland
  • Ed Manley, Centre for Advanced Spatial Analysis, University College London

Two Lecturer / Senior Lecturer Posts in Social Statistics, School of Social Sciences, University of Manchester

Closing date : 26/02/2015
Reference : HUM-05935
Faculty / Organisational unit : Humanities
School / Directorate : School of Social Sciences
Division : Social Statistics
Employment type : Permanent
Location : Oxford Road, Manchester
Salary : £34,233 to £47,328 per annum (for Lecturer) or £48,743 to £58,172 per annum (for Senior Lecturer)
Hours per week : Full time

Details and an online application form at: https://www.jobs.manchester.ac.uk/displayjob.aspx?jobid=9229

Applications are invited for two Lectureship/Senior Lectureship positions in Social Statistics, tenable from 1st August 2015. The appointed candidates will join the Social Statistics Discipline Area and will provide academic leadership within the associated research institute, the Cathie Marsh Institute for Social Research (CMIST), with respect to both research and the design and implementation of teaching programmes at undergraduate and postgraduate levels.

Applicants must have established a reputation in Social Statistics, supported by a strong record of published research output and a wider record of achievement with publications in statistics or methodological journals.

Applicants with a background in Quantitative Social Science are invited to apply, but specialisations in quantitative demographic methods, survey and census methodology, small area estimation, multilevel/hierarchical models, development and application of computational statistical methods, longitudinal data analysis, missing data problems and social network analysis are particularly welcome.

Salary will be within the range £34,233 – £47,328 (for appointment at lecturer level) or £48,743 – £58,172 (for appointment at senior lecturer level) per annum, according to experience.

Informal inquiries may be made to Professor Natalie Shlomo. Email: Natalie.shlomo@manchester.ac.uk

For information about Social Statistics, see http://www.socialsciences.manchester.ac.uk/subjects/social-statistics/

Applications should be made online. If you are unable to apply on line please request an application form by emailing hrrecruitment@manchester.ac.uk quoting the reference number or by calling 0161 275 8838 (HR team recruitment line number).

Deadline for Abstracts: International Conference on Population Geographies 2015

The deadline for receipt of abstracts for the 8th International Conference on Population Geographies is fast approaching.

We look forward to receiving your proposal by Monday 16th February 2015 and welcoming you to the University of Queensland, Brisbane, Australia, from 30 June to 3 July 2015.

We invite papers from all fields of population geography and allied disciplines, especially contributions around the following themes:

Spatial demography
Migration and development
Ethnicity and segregation
Migration and the environment
Households and housing
Demography of the life course
Fertility and the family
Towards the end: death and dying
Ageing and morbidity
Indigenous populations
Official statistics
Exploiting big data
Data visualisation and communication
Demographic projections
Applications of demography
Population health

Abstracts for papers and posters should be around 250 words and include the title, authors, affiliations, and contact email, and be sent to icpg2015@uq.edu.au. For all other aspects of the conference, contact icpg2015@absoluteevents.com.au.

Key dates

Monday 16 February 2015 – Deadline for submitting abstracts.
Monday 9 March 2015 – Notification of acceptance.
Monday 16 March – Registration opens.
Monday 4 May – Deadline for Early bird Registration.

Other essential details of the conference including registration fees, accommodation, and travel are available on the Conference website at: http://www.icpg2015.org.

We are unable to assist with transport or accommodation costs for the conference but we will be offering a number of registrations at reduced cost for participants from developing countries who can demonstrate financial need.

We hope to welcome you to Brisbane in June next year, but if you prefer not to receive further correspondence about the Conference, please simply reply with Unsubscribe in the subject header.

Yours Sincerely,

Dr Elin Charles-Edwards and Professor Martin Bell

On behalf of the ICPG 2015 Organising Committee

nQuire-it/Sense-it – discovering sensors in your phone

Sense-it Light sensor Sense-it Sound
The Open University, with support from Nominet Trust and UTC Sheffield, has launched the nQuire-it.org website, which seems to have great potential for running citizen science activities. The nQuire platform allows participants to create science inquiry ‘missions’. It is accompanied by an Android app called Sense-it that exposes all the sensors integrated in a smartphone and lets you see what they are doing and the values that they are reporting.

The process of setting up a project on the nQuire-it site is fairly quick, and you can figure it out in a few clicks. Joining the project you’ve created on the phone is also fairly simple, and the integration with Google, Facebook and Twitter accounts means that linking profiles is quick. You can then get a few friends to start using it; the Sense-it app lets you collect the data and share it with the other participants in the project on the nQuire website. Participants can then comment on the data, ask questions about how it was produced, and up- or down-vote it. All this makes nQuire a very suitable place for experimenting with smartphone sensors and prototyping citizen science activities. It also provides an option for recording geographic location, and it is good to see that this is disabled by default, so the project designer needs to actively switch it on.


London in Miniature: Mogg’s 1806 Pocket Map

mogg_pocketmap

From Geographicus, a US map dealer, by way of a tweet by Rentonomy, an article in CityMetric and a collaboration with Wikimedia, comes this high-resolution scan of a beautiful old pocket map of London. Over 200 years old, and drawn by Edward Mogg, it is a snapshot of London from before the days of Zone 2, railways or skyscrapers. Within the built-up area, surprisingly little has changed – many familiar streets and landmarks have their shape and name retained.

mogg_kilburn

The most striking part of the map, therefore, is the largely empty surroundings, which are now almost entirely built on. Gower Street, subsequently home to what is now University College London, ends about halfway along its current length, with the road network drawn in outline and no houses shown. Camden Town and Somers Town stand isolated from the metropolis. St John’s Wood is a farm. Just to the north of it is depicted the “British Circus”, a proposed (but never built) development based around a 42-acre “pleasure garden” – see below for an excerpt of the map for this area. It reminds me of the Inner Circle of the later-built Regent’s Park. Nearby, Primrose Hill is shown with hachures, an attractive technique used to indicate hill slopes, popular before contours were invented.

I particularly like the careful but effective use of bright colours to highlight the Thames, parks and major streets. Black borders create an attractive “shadow” around building blocks, giving the map a slight 3D effect. The wide canvas creases show that this map was designed to be folded up and carried around by the early visitor or businessman. The exact date of publication – 1 May 1806 – is carefully inscribed on the map. The Geographicus page mentions that this was just the first edition of a series of maps by Mogg.

The full scan is available on Wikimedia and it is just one of a large collection of map scans donated by Geographicus, including several more historic ones of London.

The map and this scan of it are in the public domain.

mogg_britishcircus

GeoComputation: A Practical Primer

geocomputation

GeoComputation: A Practical Primer, edited by Profs Chris Brunsdon and Alex Singleton, has just been published by SAGE.

The book acts both as a reference guide to the field and as a guide to help you get to know aspects of it. Each chapter includes a worked example with step-by-step instructions.

Each chapter has a different author, and the topics covered include spatial data visualisation with R, agent-based modelling, kernel density estimation, spatial interaction models and the Python Spatial Analysis Library, PySAL. With 18 chapters, the book runs to over 300 pages and so has the appropriate depth to cover a diverse, active and fast-evolving field.

I wrote a chapter in the book, on open source GIS. I focused particularly on QGIS, as well as mentioning PostGIS, Leaflet, OpenLayers (2) and other parts of the modern open source “geostack”. My example describes how to build a map, in QGIS, of London’s railway “not-spots” – places that are further than a mile from a railway station – using open data map files, mainly from the Ordnance Survey. With the guide, you can create a map like the one below:

offthetracks

That little spot on its own in central-ish London, by the way, is part of Burgess Park, near Peckham.
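For those who prefer scripting to the point-and-click route, the same overlay logic can be sketched in a few lines of Python with geopandas – this is just an illustration, not the chapter’s QGIS method; the file names are hypothetical, and both layers are assumed to be in British National Grid (EPSG:27700), so distances are in metres:

import geopandas as gpd

# Hypothetical inputs: station points and the Greater London boundary,
# both already in EPSG:27700 so that buffer distances are in metres.
stations = gpd.read_file('stations.shp')
london = gpd.read_file('london_boundary.shp')

# Buffer each station by a mile (~1609 m) and merge into a single shape.
covered = stations.buffer(1609).unary_union

# The "not-spots" are the parts of London outside every buffer.
notspots = london.geometry.difference(covered)
gpd.GeoDataFrame(geometry=notspots, crs=london.crs).to_file('notspots.shp')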

The book has only just been published, and I was able to slip in brand-new screenshots (and slightly updated instructions) just before publication, as QGIS 2.6 came out late last year. So the book is right up to date – and now is a great time to get your copy!

It’s available now in paperback on Amazon: Geocomputation: A Practical Primer.

geocomp_ch17


Agent-based models in a web browser

Sharing agent-based models over the web has never been very easy. You can do it with NetLogo, but that requires your web browser to support Java 5 (which is not recommended). You could create a jar file for your model if you are using MASON, for example, but this still requires a number of steps before you see the model running. One way to bypass all this is to build the model directly into the web page. We have highlighted the use of JavaScript for agent-based modeling in a previous post; now Ernesto Carrella, a PhD candidate from the Department of Computational Social Science, has created some proof-of-concept agent-based models of fishing, written in Dart and overlaid on top of Google Maps. If you want to find out more, check out his models over on GitHub.
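Whatever the host language – Java, JavaScript or Dart – the heart of these models is the same tiny update loop over a population of agents. A minimal, purely illustrative sketch in Python:

import random

class Agent:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self):
        # Random walk: move one unit in a random direction each tick.
        self.x += random.choice([-1, 0, 1])
        self.y += random.choice([-1, 0, 1])

# 100 agents stepped for 1,000 ticks; a browser version would redraw
# the agents (e.g. on a map overlay) after each tick.
agents = [Agent(random.randint(0, 50), random.randint(0, 50)) for _ in range(100)]
for tick in range(1000):
    for agent in agents:
        agent.step()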


London Boroughs and Tube Lines

piccadilly

How many of London’s 32 boroughs (& the City of London) would you pass through on a single end-to-end journey on the tube?

It turns out that if you travel the length of the Piccadilly Line (Uxbridge branch), then, in a single journey, you’ll pass through 14 boroughs and stop at all of them but Enfield. That’s more of London than if you travel on any single Crossrail journey, once it opens in 2018.

| Line | Branch | # Boroughs with Stops | # Boroughs (Total) |
|---|---|---|---|
| Piccadilly | to Uxbridge | 13 | 14 |
| Crossrail | to Shenfield | 10 | 13 |
| Central | to West Ruislip | 11 | 12 |
| Piccadilly | to Heathrow | 11 | 12 |
| Central | to Ealing Broadway | 10 | 11 |
| Northern | High Barnet to Morden | 10 | 10 |
| District | Upminster to Richmond/Ealing Broadway | 10 | 10 |
| Overground | Richmond to Stratford | 8 | 10 |
| District | Wimbledon to Barking | 9 | 9 |
| Hammersmith & City | | 9 | 9 |
| Jubilee | | 9 | 9 |
| Northern | Edgware to Morden via Bank | 9 | 9 |
| Overground | Clapham Junction to Stratford | 8 | 9 |
| Northern | Edgware to Morden via Charing Cross | 8 | 8 |
| Bakerloo | | 5 | 8 |
| Overground | West Croydon to Highbury & Islington | 7 | 7 |
| Metropolitan | | 7 | 7 |
| Circle | | 7 | 7 |
| Victoria | | 6 | 7 |
| Overground | Clapham Junction to Highbury & Islington | 6 | 7 |
| Overground | Gospel Oak to Barking | 6 | 6 |
| Overground | Watford Junction to Euston | 3 | 6 |
| District | Wimbledon to Edgware Road | 5 | 5 |
| DLR | Bank to Lewisham/Woolwich Arsenal | 4 | 4 |
| Tramlink | Wimbledon to New Addington | 3 | 3 |
| Waterloo & City | | 2 | 3 |
| Cable Car | | 2 | 2 |

Of course, if you are aiming to see a cross-section of London’s boroughs, in a rush, then the tube probably isn’t the best way, as you’ll be underground for quite a lot of the journey…
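Counts like those in the table are straightforward to reproduce once you have a lookup of which borough each station sits in. A minimal sketch, assuming a hypothetical CSV with line, station and borough columns – note this reproduces only the “with stops” column; the totals also need the track geometry, to catch boroughs a line crosses without stopping:

import csv
from collections import defaultdict

# Hypothetical input, one row per stop, e.g.:
# line,station,borough
# Victoria,Brixton,Lambeth
boroughs = defaultdict(set)
with open('stations.csv', newline='') as f:
    for row in csv.DictReader(f):
        boroughs[row['line']].add(row['borough'])

for line, bs in sorted(boroughs.items(), key=lambda kv: -len(kv[1])):
    print(f'{line}: {len(bs)} boroughs with stops')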


Geocomputation A Primer

geocomp-primer

A very nicely produced review of geocomputation in this edited book by Chris Brunsdon and Alex Singleton. It covers many interesting new techniques, from agent-based models to new visual statistics, and from crowdsourcing methods to the newer scripting languages that are becoming central to the development of contemporary spatial analysis. What is noteworthy about the book is the beautiful presentation and the visual ease with which the reader is exposed to these somewhat arcane arts of making sense of space and geography. There is a nice website with some content that the reader can download here, and at the risk of infringing my own copyright, I will share my own chapter with you, which you can download here too. It’s not in the glorious presentation of the published book – merely a PDF of the Word file – but the figures are in colour.

Quick-and-Dirty WordPress Site Cloning

mysqlcloning

Here is a guide to cloning a WordPress(.org) blog on the same server, in 10 steps, on Linux. You’ll definitely need admin access to the blog itself, and probably to the database and server too, depending on your setup. I did this recently as I needed a copy of an existing production site to hack on. If you don’t fancy doing it the quick-and-dirty way, there are, I’m sure, even quicker (and cleaner) ways of doing it by installing plugins.

In the following instructions, substitute X and Y for your existing and new blog, respectively.

0. Do a backup of your current website, as you would normally for an upgrade or archive, in case anything goes wrong – e.g. under Tools > Export in the WordPress admin interface.

1. Copy all the files:
cp -r /home/~username/www/blog_X /home/~username/www/blog_Y

2. Edit wp-config.php in your new blog directory:

Change:
$table_prefix = 'wp_X_';
to:
$table_prefix = 'wp_Y_';

3. Copy all the database tables (prefixed with wp_X_). The new ones should have a prefix wp_Y_ instead. I used the Copy functionality under the Operations tab in phpMyAdmin (see screenshot below).

4. Edit wp_Y_options:
update wp_Y_options set option_name = 'wp_Y_user_role' where option_name = 'wp_X_user_role';

5. Edit wp_Y_options:
Edit the option_value for the rows with option_name values of siteurl and home, pointing them to the new location – mine are the same, but yours might differ, e.g. if you have your WordPress core files in a subdirectory relative to the directory for the site entry-point on the web.

update wp_Y_options set option_value = 'http://your_server.com/~username/wp_Y' where option_name = 'siteurl';
update wp_Y_options set option_value = 'http://your_server.com/~username/wp_Y' where option_name = 'home';

There may be other rows referencing your old blog name, but these are probably from plugins and therefore probably don’t need to be changed.

6. Edit wp_Y_usermeta:
update wp_Y_usermeta set meta_key = replace(meta_key, 'wp_X', 'wp_Y');

(You can edit the affected rows manually, but I had a lot to do – there are around 5 for each user.)

7. Drop force-upgrade.php into the same directory as wp-config.php and run it from your browser. This rebuilds caches/hashes stored in some of the tables. You can run it repeatedly if necessary (e.g. if you missed a step above) – it shouldn’t do any harm.

You can find force-upgrade.php here.

8. Delete force-upgrade.php. Leaving it is a security risk.

9. Log in to your blog in the new location, as normal. Usernames and passwords should be preserved.

mysqlcopy


Lego X Combines Augmented Reality, 3D modelling and 3D printing



Lego has been among the favourite architectural toys since forever. And who doesn’t enjoy playing with the super-colorful Lego pieces? Even at the Richard Rogers exhibition in London in 2013, one full section was dedicated to the famous bricks, filled with hundreds of Lego pieces lying around, to engage people in the architectural thinking of creative modelling. Gravity, a company based in London, has recently announced an app that will use "location mapping and gyroscopic sensors" to generate digital models of Lego creations.



The program "scans" Lego pieces in real time and creates 3D models on the fly. Using sophisticated algorithms, 3D Lego structures seem to be translated into surfaces, such as walls and roofs. The final stage appears to smooth out corners and curves to produce 3D-printable objects, which can be sent directly for 3D printing.



Barbican: Before and After the Blitz

barbican_detail

Here is an interesting concept by illustrator Russell Bell. He’s taken a pre-WWII monochrome map of the Barbican area of London (the northernmost part of the ancient City of London) and incorporated a modern, coloured map of the main structures that form the Barbican Estate, which was built after the area suffered heavy damage during the WWII Blitz. During the rebuilding, the street layout fundamentally changed, with streets disappearing or changing alignment, and a new lake appearing. By including the modern map as a translucent overlay on the original, the viewer can clearly contrast the old and the new. It’s worth noting that the new is already changing: a number of the (non-residential) post-war blocks along London Wall, and Milton Court, have already been demolished for further development.

Russell has made a number of prints of his map, see his online shop.

The Barbican Estate’s multi-levelled structure and maze of “highwalks” mean it is famously difficult to navigate (which makes it a great orienteering venue), despite various lit maps being available throughout the complex. At one point, orange lines were painted on the ground to help lead people to the Barbican Arts Centre from the entrances to the estate.

Thanks to Russell for the heads-up.

barbican_overview

Neoliberal addresses

What do addresses have to do with economic theory and political dogma? It turns out quite a lot. As I was looking at the latest press release from the Cabinet Office, proudly announcing that the government is investing in (yet another) UK address database, I realised that the handling of UK addresses – those deceptively simple ‘221b Baker St NW1 6XE‘ strings – provides a parable for the stupidity of neoliberalism.

To avoid doubt: this is not about Open Addresses UK. It’s about the systemic failures of the past 20 years. 

Also, for the avoidance of doubt, my views are similar to Richard Murphy’s about the joy of tax. I see collective action and common investment in national assets through taxation as a wonderful thing, and I don’t mind R&D investment being spent on infrastructure that might fail – it’s as true for Beagle 2 as it is for a national address database. So you won’t see ‘this is a waste of taxpayers’ money’ here. It’s the systemic issues that I question.

Finally, if I’ve got some specific details of the history of the development wrong – I’m happy to stand corrected!

The starting point must be to understand the point of an address database. The best explanation comes from one of the top UK experts on the issue – Bob Barr (OBE). Bob identified ‘Core Reference Geographies‘, which have the following characteristics: they are definitive; they should be collected and maintained once and used many times; they are natural monopolies; they have variable value in different applications; and they have highly elastic demand. We can also call these things ‘commons‘, because we want people to be able to share them while protecting their future – and ideally avoid a ‘tragedy of the commons‘.

© Copyright Roger Templeman and licensed for reuse under this Creative Commons Licence

Addresses are such a ‘core reference geography’. Think about all the applications for a single, definitive database of all UK addresses – it can be used to deliver the post, plan the census, dispatch emergency services, provision a broadband link to the right property, check for fraud during purchase transactions, and much more. To make sense of the address above, you need the geographical location, street name, house number and postcode. An Ordnance Survey map can be used to set the location, the street name is set by the local authority, and the postcode by Royal Mail. Merge these sources with a few other bits of information and, in principle, you have a definitive set. Do it for the whole country and you have this ‘core reference geography’. It sounds simple…

The story is a bit more complex. As long as information was not digitised and linked, mismatches between addresses from different sources were not a huge problem; but in the mid-1990s, with the growing use of digital records and databases, it became important to have a common way to link them. By that time, the Post Office Postal Address File (PAF) had become the de facto definitive address database. It had actually been around since the 1970s, used by the Post Office not as a definitive address database but to serve the internal needs of mail delivery. However, in the absence of any other source, people started using it – for example, in statistical studies (e.g. this paper from 1988). While I can’t find a specific source for the history of the PAF, I guess that at some point it became a product that was shared with other organisations and sold to direct marketing companies and other users. Naturally, it isn’t what you would design as the definitive source if you were starting from scratch, but it was there, and it was good enough, so people used it.

Without raising false nostalgia about the alternatives, imagine that the need for a definitive address database had arisen at a time when all the entities responsible for the elements of an address were part of the public sector. There would have been plenty of power struggles, foot dragging, probably cross-departmental animosity and all sorts of other obstacles. However, as has been proven time and again, when everything is inside the sphere of government control, reorganisation is possible. So you could imagine that, at the end of the day, you’d get an ‘address directorate’ that managed addresses as national commons.

Now we can get to the core of the story. Let’s look at the definition of neoliberalism that I want to use here. It comes from a very good article on the Daily Kos: ‘Neoliberalism is a free market economic philosophy that favors the deregulation of markets and industries, the diminution of taxes and tariffs, and the privatization of government functions, passing them over to private business.’ In terms of the political dogma that came with it, it means seeing market solutions as the only solution to societal issues. In the UK, this form of thinking started in the 1980s.

By the time GIS proliferated and the need for a definitive address database became clear, the neoliberal approach was in full gear. The different entities that needed to share information in order to create this common address database had been pushed out of government and asked to act in a quasi-commercial way, at which point the people who run them are instructed to maximise the self-interest of the entity and to market their products at prices that ‘the market will bear’. However, with no alternatives and a necessity to use definitive information, pricing is tricky. When it came to sharing information and creating a common product, these entities started bickering over payments, intellectual property and control. The Ordnance Survey had Address-Point, the Post Office/Royal Mail had the PAF, and while both remained de facto datasets, no satisfactory definitive database emerged. You couldn’t get beyond this point, as the organisational structure required each organisation to hold on to its ‘property’; so while the need became clearer, the solution became more difficult.

In the second round, what looked like a good bottom-up approach was proposed. The idea was that local authorities are the best source of information for creating a definitive address database (the National Land and Property Gazetteer), because they are closest to the changes on the ground and can manage them. However, under the neoliberal dogma the whole thing needed to operate commercially, so you go for a public/private partnership. Guess what? It didn’t work.

In the third round, you merge the company from the second round with the entity from the first round to create another commercial partnership. And you are still stuck, because fundamentally there is still the demand to control assets in order to sell them in the market.

The fourth step, which deserves recognition as the most idiotic in the story, is the privatisation of the Royal Mail, which needed to maintain ‘assets’ in order to be ‘attractive to investors’ – so you sell the PAF with it. It all works within neoliberal logic, but the implication is that instead of dealing with a network of publicly owned bodies that can be told what to do, you now have the PAF in the private sector, where intellectual property is sacred.

In the final stage, you think: oh, I’ve got a solution, let’s create a new entity that will crowdsource and reuse open data. However, being a good neoliberal, you ask it to come up with a business model. This time it will surely work – ignoring the huge efforts to build business models, and all the effort invested in trying to pay for a sustainable address database, over the past 20 years. This time it’s going to work.

Let’s ask, then: if we believe in markets so much, shouldn’t we have expected a competitor to the PAF/Address-Point/NLPG address databases to appear by now? Here we can argue that this is an example of ‘market failure‘ – the most obvious kind being a lack of investment or interest from ‘participants in the market’ to even start trading.

If it were really all about free markets and private entrepreneurial spirit, you might expect to see several database providers competing with one another until, eventually, one or two became dominant (the ‘natural monopoly’ above) and everyone used their services. Building such a database in the era of crowdsourcing should be possible. Just as in the early days of OpenStreetMap, you don’t want ‘contamination’ by copying information from a source that holds database rights or copyright over the information you use. So we want cases of people voluntarily typing in their addresses, while the provider collates the raw data. Inherently, in the same way that Google crowdsources queries because people type them in and hand the text to Google for use, so does anyone who types their delivery address into Amazon.co.uk. These are crowdsourced addresses – not copied from an external dataset – so even if, for error-checking purposes, each entry is tested against the PAF, they are not derivatives of it. Take all these addresses, clean and organise them, and you should have a PAF competitor that was created by your own customers.

So Amazon is already an obvious candidate for creating such a database through ‘passive crowdsourcing’, as a side effect of its day-to-day operations. Who else might hold a database, built from people inputting addresses in the UK, large enough to create a fairly good address database? It doesn’t take a lot of thinking to realise that there are plenty. Companies operating at Amazon’s scale probably hold a very high percentage of UK addresses. I’d guess that Experian has one from its credit checks, and Landmark is in a very good place because of all the property searches. You can surely come up with many more. Yet none of these companies offers a competitor to the PAF, which tells you that, commercially, no private sector company is willing to take the risk and innovate with such a product. That’s understandable, given the litigation risk from the messy group of quasi-public and private bodies that see addresses as their intellectual property. The end result: there is no private sector provision of an address database.

And all the while, nobody dares to think about nationalising the database – forcing, by regulation and law, all these quasi-commercial bodies to work together regardless of their ways of thinking. It’s not that nationalisation is impossible – just look at how miraculously Circle Healthcare can ‘exit private contract‘ (because the word nationalisation is prohibited in neoliberal dogma).

To avoid trolling from open data advocates: I wish Open Addresses UK all the best. I think it has a super tough task and it will be great to see how it evolves. If, as with OSM, one of the companies that can crowdsource addresses gives them their dirty data, it is possible that they could build a database fast. This post is not a criticism of Open Addresses UK, but of all the neoliberal dogmatists who can’t simply go for the most obvious solution: take the PAF out of Royal Mail and give it to Open Addresses. Considering the underselling of the shares, there is an absolute financial justification to do so – but that is why I pointed out the sanctity of private companies’ assets…

So the end result: huge investment by government, failing again and again (and again), because they insist on neoliberal solutions instead of the obvious treatment of a commons – hold it in government and fund it properly.


Bad Maps

<rant> Three maps with glaring errors which I came across yesterday. I’m hesitant to criticise – many of my own maps have issues too, I’m sure (e.g. my Electric Tube map, on the right, is deliberately way off). But I couldn’t resist calling out this trio, which I spotted within a few hours of each other.

1. Global Metropolitan Urban Area Footprints

footprints

This is, in itself, a great concept. I particularly like that the creator has used the urban extent rather than administrative boundaries, which rarely follow the true urban extent of a city. The glaring error is scale. It looks like the creator traced the boundaries of each city’s urban extent in Google Maps (aerial view) or similar. All well and good, but a quirk of representing a 3D globe on a 2D “slippy” map means that the scale in Google Maps (and OpenStreetMap and other maps projected to “Web Mercator”) varies with latitude, at a fixed zoom level. This hasn’t been accounted for in the graphic, with the result that all cities near the equator (i.e. most of the Asian and African ones) are shown on the map too small relative to the others, while cities near the poles (e.g. London, Paris, Edmonton, Toronto) are shown misleadingly big. This is a problem because the whole point of the graphic is to compare the footprints (and populations) of the major cities. In fact, many of those Chinese and African cities are quite a bit bigger, relative to, for example, London, than the graphic suggests.
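As a back-of-the-envelope check (my own sketch, not part of the original graphic): the local linear scale factor of Web Mercator, relative to the equator, is 1/cos(latitude), so a footprint traced at London’s latitude appears about 1.6 times too large in each direction – over 2.5 times too large by area – compared with one traced near the equator:

// Local linear scale factor of a Web Mercator map, relative to the
// equator, at a fixed zoom level.
function mercatorScaleFactor(latDeg) {
    return 1 / Math.cos(latDeg * Math.PI / 180);
}

console.log(mercatorScaleFactor(51.5).toFixed(2));  // London: ~1.61
console.log(mercatorScaleFactor(1.35).toFixed(2));  // Singapore: ~1.00
console.log(mercatorScaleFactor(53.5).toFixed(2));  // Edmonton: ~1.68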

2. Where Do All The Jedi Live?

religions

The map is in the Daily Mirror (and their online edition), so it doesn’t need to be a pinnacle of cartographic excellence – just a device to get a story across. However, Oxford and Mid Sussex – 40% of the datapoints – are shown in the wrong place: both are much closer to London than the map suggests. The author suggests they did this to make the text fit, but there are better ways to accommodate text while keeping the centroid dots in the correct location. It might take a little longer, but then it wouldn’t be – quite simply – wrong. I’m somewhat disappointed that the Mirror not only stoops to the level of Fox News in the accuracy of its mapping, but appears to have no problem maintaining such an error even when readers point it out. It’s sloppy journalism, and a snub to the cartographic trade, to treat relocating whole cities for artistic purposes as a non-issue, particularly as so many people in the UK have relatively poor spatial literacy and so can be easily misled.

3. A London map…

breakfasts

I’m not really sure where to begin here. I’m not sure if any of the features are in fact in the right place!



The Rivers of London

walter_rivers_4

This is a new work by Stephen Walter, in his characteristic hand-annotated, monochromatic style. It depicts London’s watery features, in particular its many waterways. The iconic River Thames (which should always appear on any good London map) is the natural centrepiece, but London has numerous more minor rivers, streams, channels and culverts, and these form the base of this work. The choice of water is a particularly good one – London’s waterways are scattered throughout the capital, rather than intensifying in the centre as tube lines and houses (the focus of some of the artist’s previous works) do. This results in reasonably even areas of white space, complementing the intense detail for the rivers (and surrounding lands) themselves that is a hallmark of Stephen’s style. The completed work therefore doesn’t overwhelm with information or feel unevenly cramped, and so is rather pleasing to the eye as a complete piece. The artist has used several different font styles to denote different kinds of features, and included various historical annotations, such as marks of major floods.

The work can be viewed on the TAG Fine Arts website or at their studio in Islington, where it can also be purchased, as part of an edition of 50.

If you are a long-time Mapping London reader and are thinking that this style looks familiar, you’d be right – we’ve featured works by Stephen Walter a couple of times before. We’ve also previously featured maps of underground rivers and even tube-style maps of the waterways.

As an aside, prolific London blogger Diamond Geezer‘s 2015 project is walking the Unlost Rivers of London, many of which are included on the map here.

Thank you to TAG Fine Arts for the complimentary ticket to the London Art Fair, where The Rivers of London was on display along with several other map-related artworks, such as some works by Adam Dant. The photographs here are from their website.

walter_rivers_overview

Imagine Cambridge in 2065: latest from the Network’s CEO – Cambridge Network

... UK Government Chief Scientific Advisor; Professor Frank Kelly, Master, Christ's College; Professor Sir Alan Wilson, Professor of Urban and Regional Systems in the Centre for Advanced Spatial Analysis, UCL; and Bronwen Maddox, Editor, Prospect (Chair).

OpenLayers 3 and Vector Data

As part of a project to move most of my OpenLayers 2-powered websites to OpenLayers 3, I have recently converted two more – DataShine: Travel to Work Flows and the North/South Interactive Map. Unlike the main DataShine: Census website, both of these newer conversions include vector geospatial data, so there was additional learning involved during the migration process, mainly relating to vector styling.

northsouth2

North/South Interactive Map

For the North/South Interactive Map, I made use of OL3’s ability to load in remote GeoJSON files.

Vector Layers

Here’s a vector layer:

layerPoints = new ol.layer.Vector({
    source: pointSource,
    style: function(feature, res) { return pointStyle(feature, res); }
});

The pointSource is an ol.source.GeoJSON; when defining the source for the Vector layer, you need to specify both the projection of the file and the projection it will be displayed in:

pointSource = new ol.source.GeoJSON({
    url: '...',
    defaultProjection: 'EPSG:4326',
    projection: 'EPSG:3857',
    attributions: [ new ol.Attribution({ 'html': "..." }) ]
});
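For context, here’s roughly how the source and layer then slot into a map – a minimal sketch with an illustrative basemap and view, rather than the site’s actual configuration:

var olMap = new ol.Map({
    target: 'map',  // id of the map div
    layers: [
        new ol.layer.Tile({ source: new ol.source.OSM() }),  // illustrative basemap
        layerPoints  // the vector layer defined above
    ],
    view: new ol.View({
        center: ol.proj.transform([-2, 54], 'EPSG:4326', 'EPSG:3857'),  // roughly Great Britain
        zoom: 6
    })
});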

If you wish to do further operations on your data once it has loaded, you need to add a listener to the remotely loaded (e.g. GeoJSON file) source included within a Vector layer:

pointSource.once('change', function()
{
    if (pointSource.getState() == 'ready')
    { var features = pointSource.getFeatures(); ... }
});

Here’s a typical style function. I’m using a property “highlight” on my feature to style such features differently:

function pointStyle(feature, resolution)
{
    return [
        new ol.style.Style({
            image: new ol.style.Circle({
                // Highlighted features get a fixed, larger radius.
                radius: (feature.highlight ? 7 : feature.radius),
                fill: new ol.style.Fill({ color: feature.fillColor }),
                stroke: new ol.style.Stroke({ width: feature.strokeWidth, color: '#fff' })
            }),
            text: new ol.style.Text({
                // Only highlighted features are labelled.
                text: (feature.highlight ? feature.label : ""),
                font: '9px Ubuntu, Gill Sans, Helvetica, Arial, sans-serif',
                fill: new ol.style.Fill({ color: '#fff' })
            })
        })
    ];
}

Interactions

To detect clicks, I used an ol.interaction.Select – N.B. if you don’t specify which layers it applies to, it tries to apply itself to all Vector layers!

var selectClick = new ol.interaction.Select({
    condition: ol.events.condition.click,
    style: function(feature, res) { return pointStyle(feature, res); },
    layers: [layerPoints]
});

selectClick.getFeatures().on('change:length', function(e)
{ ... });

olMap.addInteraction(selectClick);

In my handler here, I remove the flag from any already-highlighted features and call features[i].changed(); to get the non-highlighted style back. You don’t need to call this on the feature you’ve actually clicked on, as this is done implicitly. There are likely better ways of showing selected/highlighted features, using ol.FeatureOverlay, but I couldn’t get this to work.
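For illustration, here’s a minimal sketch of what such a handler might look like, based on the description above – the variable names are mine, not the site’s actual code:

var highlighted = [];

selectClick.getFeatures().on('change:length', function(e) {
    // Un-highlight whatever was highlighted previously, forcing a
    // re-style with the non-highlighted look.
    for (var i = 0; i < highlighted.length; i++) {
        highlighted[i].highlight = false;
        highlighted[i].changed();
    }
    highlighted = [];

    // Flag the newly selected features. There is no need to call
    // changed() on these - the select interaction restyles them implicitly.
    selectClick.getFeatures().forEach(function(feature) {
        feature.highlight = true;
        highlighted.push(feature);
    });
});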

coordinates

MousePosition

There’s quite a nice new control, ol.control.MousePosition, which meant it was little effort to get an “old style” location indicator in at the bottom of the North/South interactive:
new ol.control.MousePosition({
    projection: "EPSG:4326",
    coordinateFormat: ol.coordinate.toStringHDMS,
    className: 'olControlMousePosition'
})
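The control then just needs adding to the map in the usual way – assuming the control above is assigned to a variable mousePosition, and olMap is the ol.Map instance:

olMap.addControl(mousePosition);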

ttwf

DataShine: Travel to Work Flows

This loads vector data in as generic JSON through a regular (non-OL) AJAX call, rather than as GeoJSON, so the processing is a bit more manual. This time, my source for the Vector layer is a simple ol.source.Vector, which can be emptied with source.clear(); and reused.

I’m creating lines directly from the JSON, converting from the OSGB grid and specifying the colour (for the style) as I go – note my use of the rgba format, allowing me to specify partial transparency (the lines are 60% transparent):

var startLL = ol.proj.transform([data[start][2], data[start][3]], "EPSG:27700", "EPSG:3857");
var endLL = ol.proj.transform([data[end][2], data[end][3]], "EPSG:27700", "EPSG:3857");
var journeyLine = new ol.geom.LineString([startLL, endLL]);
var lineItem = new ol.Feature({ geometry: journeyLine });
lineItem.strokeColor = 'rgba(255, 0, 0, 0.4)';
lineSource.addFeature(lineItem);
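The layer’s style function can then pick up this custom strokeColor property, much as the point style earlier does – a minimal sketch under that assumption (lineStyle and layerLines are my names, not the site’s actual code):

function lineStyle(feature, resolution)
{
    return [
        new ol.style.Style({
            stroke: new ol.style.Stroke({
                color: feature.strokeColor,  // the rgba value set when the feature was created
                width: 2
            })
        })
    ];
}

layerLines = new ol.layer.Vector({
    source: lineSource,
    style: lineStyle
});

// When a new set of flows is loaded, the source can simply be
// emptied and reused:
// lineSource.clear();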

As previously blogged, I’m also using hand-crafted permalinks on both websites, plus drag-and-drop KML display and UTF grid mouseovers on the latter; both have also had their stylesheets tweaked to allow for easy printing – again made possible with OL3.

I’m about ready now to tackle my most complicated OpenLayers project by far, the Bike Share Map.


Legible London Walking Maps

leglondon1

The Legible London project has been producing clear, attractive maps of parts of London for a few years now, to help people navigate unfamiliar streets on foot. The maps appear on numerous way-marking plinths around the capital, helping people to get from A to B effectively. During the Olympic Games in 2012, paper Legible London maps were made available at key stations, to encourage people to walk rather than overload the tube/train network, but generally the maps are not available online. Recently, however, the project has created maps for several of London’s signed long-distance walks, and these are available for download.

As an example I’ve picked Section 7 of the Jubilee Greenway, one of London’s long-distance paths, put together for the Queen’s Diamond Jubilee, also celebrated in 2012. This particular path spends quite a lot of time on the Greenway (aka the Northern Outfall Sewer!) in east London, but also includes some more classically touristy sections. Section 7 strikes a nice balance between the industrial and touristy parts of London, passing through the industrial/changing Deptford waterfront and residential Rotherhithe, but also taking in the Cutty Sark of Maritime Greenwich and Tower Bridge. Direct link to the PDF.

The maps use a clear and consistent colour theme, with a relatively small number of colours, resulting in attractive cartography. Only major buildings and landmarks are shown, in yellow, with a selection shown in 3D on some of the inset maps. The route itself is shown clearly with a red line, while links to stations, and diversions, are shown as red dashes.

You can download section maps for most of London’s long-distance paths on a new part of the TfL website.

Hat-tip to Diamond Geezer for spotting the new maps.

leglondon2

Geographic Information Science and Citizen Science

Thanks to invitations from UNIGIS and from Edinburgh Earth Observatory / AGI Scotland, I had an opportunity to reflect on how Geographic Information Science (GIScience) can contribute to citizen science, and what citizen science can contribute to GIScience.

Despite the fact that it’s been 8 years since the term Volunteered Geographic Information (VGI) was coined, I don’t assume that all the audience is aware of how it came about. I also don’t assume knowledge of citizen science, which is a far less familiar term within GIScience. Therefore, before going into a discussion about the relationship between the two areas, I start with a short introduction to both, beginning with VGI and then moving to citizen science. After introducing the two areas, I suggest the relationships between them – some types of citizen science overlap with VGI, namely biological recording and environmental observations, as well as community (or civic) science, while in other types, such as volunteer thinking, there are many projects that are non-geographical (think EyeWire or Galaxy Zoo).

However, I don’t just list a catalogue of VGI and citizen science activities. Personally, I find trends a useful way to make sense of what happened. I learned this from the writing of Thomas Friedman, who used them in several of his books to help the reader understand where the changes that he covers came from. Trends are, of course, speculative, as it is very difficult to demonstrate causality or to be certain about the contribution of each trend to the end result. With these caveats in mind, there are several technological and societal trends that I used in the talk to explain where VGI (and the VGI element of citizen science) came from.

Of all these trends, I keep coming back to one technical and one societal trend that I see as critical. The removal of GPS selective availability in May 2000 is my top technical change, as its cascading effect led to a deluge of good-enough location data, which underpins both areas. On the societal side, it is the Flynn effect, as a signifier of the educational shift of the past 50 years, that explains how the ability to participate in scientific projects has changed.

In terms of the reciprocal contributions between the fields, I suggest the following:

GIScience can support citizen science by considering the data quality assurance methods that are emerging in VGI; there are also plenty of spatial analysis methods that take heterogeneity into account and are therefore useful for citizen science data. The areas of geovisualisation and human-computer interaction studies in GIS can assist in developing more effective and useful applications for citizen scientists and for people who use their data. There is also plenty to do in considering semantics, ontologies, interoperability and standards. Finally, since critical GIScientists have long looked into the societal aspects of geographical technologies – privacy, trust, inclusiveness and empowerment – they have plenty to contribute to carrying out citizen science activities in more participatory ways.

On the other hand, citizen science can contribute to GIScience, and especially to VGI research, in several ways. First, citizen science can demonstrate the longevity of VGI data sources, with some projects going back hundreds of years. It provides challenging datasets in terms of their complexity, ontology, heterogeneity and size. It raises questions about scale and how to deal with large, medium and local activities while merging them into a coherent dataset. It also provides opportunities for GIScientists to contribute to critical societal issues such as climate change adaptation or biodiversity loss. It offers some of the most interesting usability challenges, such as tools for non-literate users, and, finally, plenty of opportunities for interdisciplinary collaborations.

The slides from the talk are available below.


Postdoctoral Research Associate – Quantitative Population Geography

 
There is a research associate opportunity in quantitative population geography at the University of Liverpool. The post details are as follows:
 
Postdoctoral Research Associate
£32,277 pa
Faculty of Science and Engineering, School of Environmental Sciences, Department of Geography and Planning
Location: University Campus
Ref: R-587244/WWW
 
Closing date for receipt of applications: Fri, 23 Jan 2015 17:00:00 GMT
 
This exciting opportunity arises from a recent ESRC Secondary Data Analysis Initiative Phase 2 award, to support a project which focuses on geographic inequalities in the UK and how these have changed over the last 40 years. The project will involve the development of a set of population surfaces for a wide array of socio-economic and demographic variables for the UK Censuses of 1971-2011. These population surfaces enable assessment of changes over small geographical areas. The production of surfaces will allow detailed analysis of, for example, the persistence of social deprivation at the neighbourhood scale or the ways in which housing tenures have changed across the regions of the UK. You should have a PhD in Population Geography, Geographic Information Science, or the broader Social Sciences (with a quantitative focus). Experience in manipulating large datasets and some programming experience would also be desirable. The post is available until 31 July 2016.
 
For more information, please see: http://www.jobs.ac.uk/job/AKG036/postdoctoral-research-associate/

The latest outputs from researchers, alumni and friends at UCL CASA