Opportunities at St Andrews for Population Researchers

Three positions in Human Geography – ML1317
Details:

The Department of Geography and Sustainable Development at St Andrews invites
applications for three posts in Human Geography (from Lecturer to Professor). Exceptional
candidates will be considered for Reader or Professorial positions. We welcome applications
from candidates at all career stages who are, or have the potential to be, world leading in their
particular specialism.

The successful candidates may have expertise in any area of human geography. Our desire is
to appoint individuals with outstanding research capacity whatever their specialism, although
expertise in population/health or cities/neighbourhoods may be an advantage. Experience of
advanced quantitative methods is desired for at least one of the posts. You will have the
opportunity to engage with staff working on large, externally-funded initiatives – the Centre
for Population Change (http://www.cpc.ac.uk/) and the Census and Administrative Data
Longitudinal Studies hub (http://calls.ac.uk/), whilst those with interests in cities and
neighbourhoods will be encouraged to develop links with the Centre for Housing Research
(http://www.st-andrews.ac.uk/chr) within the Department. You will also contribute to the
Geography teaching programme as appropriate.

Informal enquiries: you are welcome to discuss any of the posts informally with Prof Allan
Findlay (Allan.M.Findlay@st-andrews.ac.uk; tel +44 (0)1334 464011), Prof Elspeth Graham
(efg@st-andrews.ac.uk; tel +44 (0)1334 463908), or Prof Colin Hunter (Head of
Department/co-Head of School; ch69@st-andrews.ac.uk; tel +44 (0)1334 464017). The
research interests and recent publications of current members of staff in Geography and
Sustainable Development can be found on our website (http://www.st-andrews.ac.uk/gsd/).
Interview Date: Interviews for short-listed candidates will be held in November
2014. Successful candidates will be expected to start as soon as possible and not later than
August 2015.

Please indicate clearly in your application which post(s) you are applying for:
Lecturer – ML1317
Reader/Professor (2) – ML1283
Closing Date: Monday 6 October 2014
For further information see Further Particulars ML1317AC FPs.doc

Pigeon Sim goes to LonCon3

The pigeon sim visited LonCon3, the 72nd World Science Fiction Convention, at the ExCeL centre last week. The event ran from Thursday 14th to Monday 18th August, with the weekend looking like the best part, as lots of people were dressed up as science fiction characters. On the Friday shift we did still get Thor on the pigeon sim, along with Batman and Spider-Man.

steve-IMG_20140813_160435

 

The picture above shows the build after Steve and Stephan had finished putting it all together on the Wednesday afternoon.

IMG_20140815_131901

IMG_20140815_140311

 

The white tents show the “village” area, while we were in the exhibitors’ area which is the elevated part at the top of the pink steps on the right. Our pigeon sim exhibit was about two rows behind the steps on the elevated level.

I was slightly disappointed that Darwin’s pigeons (real live ones) weren’t arriving until the Saturday, so I missed them, but the felt ones were very good:

IMG_20140815_131450

 

And finally, you can’t have a science fiction event without a Millennium Falcon made out of Lego:

IMG_20140815_140611

Made by Craig Stevens, it even folds in half so you can see inside: http://www.loncon3.org/exhibits.php#70 and http://modelmultiverse.com

OpenStreetMap studies (and why VGI does not equal OSM)

As far as I can tell, Nelson et al. 2006 ‘Towards development of a high quality public domain global roads database‘ and Taylor & Caquard 2006 Cybercartography: Maps and Mapping in the Information Era are the first peer-reviewed papers that mention OpenStreetMap. Since then, OpenStreetMap has received plenty of academic attention. More ‘conservative’ search engines such as ScienceDirect or Scopus find 286 and 236 peer-reviewed papers that mention the project, respectively. The ACM digital library finds 461 papers in the areas that are relevant to computing and electronics, while Microsoft Academic Research finds only 112. Google Scholar lists over 9,000 (!). Even with the most conservative figure from Microsoft, we can see an impact on fields ranging from social science to engineering and physics. So there is a lot to be proud of: a major contribution to knowledge beyond producing maps.

Michael Goodchild, in his 2007 paper that started the research into Volunteered Geographic Information (VGI), mentioned OpenStreetMap (OSM), and since then there has been a lot of conflation between OSM and VGI. In some recent papers you can find statements such as ‘OpenstreetMap is considered as one of the most successful and popular VGI projects‘ or ‘the most prominent VGI project OpenStreetMap‘, so at some level the boundary between the two is being blurred. I’m part of the problem – for example, in the title of my 2010 paper ‘How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets‘. However, the more I think about it, the more uncomfortable I am with this equivalence. I think the recent line from Neis & Zielstra (2013) is more accurate: ‘One of the most utilized, analyzed and cited VGI-platforms, with an increasing popularity over the past few years, is OpenStreetMap (OSM)‘. I’ll explain why.

Let’s look at the whole area of OpenStreetMap studies. Over the past decade, several types of research paper have emerged.

First, there is a whole set of research projects that use OSM data simply because it’s easy to use and free to access (in computer vision or even string theory). These studies are not part of ‘OSM studies’ or VGI research; for them, this is just data to be used.

Edward Betts. CC-By-SA 2.0 via Wikimedia Commons

Second, there are studies about OSM data: quality, evolution of objects and other aspects from researchers such as Peter Mooney, Pascal Neis, Alex Zipf  and many others.

Third, there are studies that look at the interactions between the contribution process and the data – for example, in trying to infer trustworthiness.

Fourth, there are studies that look at the wider societal aspects of OpenStreetMap, with people like Martin Dodge, Chris Perkins, and Jo Gerlach contributing to interesting discussions.

Finally, there are studies of the social practices in OpenStreetMap as a project, with the work of Yu-Wei Lin, Nama Budhathoki, Manuela Schmidt and others.

[Unfortunately, due to academic practices and publication outlets, a lot of these papers are locked behind paywalls, but this is another issue... ]

In short, this is a significant body of knowledge about the nature of the project, the implications of what it produces, and ways to understand the information that emerges from it. Clearly, we now know that OSM produces good data, and we know about the patterns of contribution. What is also clear is that these patterns are specific to OSM. Because of the importance of OSM to so many application areas (including illustrative maps in string theory!) these insights are very important. Some of them are expected to be present in other VGI projects too (hence my suggestion of assertions about VGI), but this needs to be done carefully, and only when there is evidence from other projects that this is the case. In short, we should avoid conflating VGI and OSM.


How making London greener could make Londoners happier – interactive map – The Guardian


London – with all its tarmac, brick and glass – is actually 38.4% open space and ranks as the world's third greenest major city. Now Daniel Raven-Ellison wants to go further … and make Greater London a national park. His campaign and online petition ...

Tiling the Blue Marble

Following on from my last post about level of detail and Earth spheroids, here is the NASA Blue Marble texture applied to the Earth:

The top level texture is the older composite Blue Marble which shows blue oceans and a very well-defined North Pole ice sheet. Once the view zooms in, all subsequent levels of detail show the next generation Blue Marble from January 2004 with topography [link]. Incidentally, the numbers on the texture are the tile numbers in the format: Z_X_Y, where Z is the zoom level, so the higher the number, the more detailed the texture map. The green lines show the individual segments and textures used to make up the spheroid.

In order to create this, I’ve used the eight Blue Marble tiles, which are each 21,600 pixels square, resulting in a full-resolution texture of 86,400 x 43,200 pixels. Rather than try to handle this all in one go, I’ve added the concept of “super-tiles” to my Java tiling program. The eight 21,600-pixel Blue Marble squares are the “super-tiles”, which themselves get tiled into a larger number of 1024-pixel quad tree squares used for the Earth textures. The Java class that I wrote to do this can be viewed here: ImageTiler.java. As you can probably see from the GitHub link, this is part of a bigger project which I was originally using to condition 3D building geometry for loading into the globe system. You can probably guess what the chunked LOD algorithms are going to be used for next.
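To give a flavour of the approach, here is a minimal Python sketch of the same idea (it is not the Java ImageTiler itself): one super-tile is cut into 1024-pixel squares named with the Z_X_Y convention described above. The file names, zoom level and grid offsets are illustrative assumptions.

```python
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # Blue Marble super-tiles exceed Pillow's default safety limit

TILE = 1024  # output tile size in pixels

def tile_supertile(path, zoom, tile_x0, tile_y0, tiles_across):
    """Cut one Blue Marble 'super-tile' into TILE-pixel squares named Z_X_Y.png.

    tile_x0 / tile_y0 are the global tile offsets of this super-tile at the
    given zoom level; the naming scheme mirrors the Z_X_Y labels in the post.
    """
    img = Image.open(path)
    step = img.width / tiles_across  # source pixels per output tile, resampled to TILE
    for ty in range(tiles_across):
        for tx in range(tiles_across):
            box = (round(tx * step), round(ty * step),
                   round((tx + 1) * step), round((ty + 1) * step))
            tile = img.crop(box).resize((TILE, TILE), Image.LANCZOS)
            tile.save(f"{zoom}_{tile_x0 + tx}_{tile_y0 + ty}.png")

# Example (assumed file name): split one 21,600 px square into an 8x8 grid of 1024 px tiles
# tile_supertile("world.200401.A1.png", zoom=3, tile_x0=0, tile_y0=0, tiles_across=8)
```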

Finally, one thing that has occurred to me is that tiling is a fundamental algorithm. Whether it’s cutting a huge texture into bits and wrapping it around a spheroid, or projecting 2D maps onto flat planes to make zoomable maps, the need to reduce detail to a manageable level is essential. Even 3D content isn’t immune from tiling, as we end up cutting geometry into chunks and using quad tree or oct tree algorithms. Part of the reason for this rests with modern graphics cards, which mean that progressive mesh algorithms like ROAM (Duchaineau et al.) are no longer effective. Old progressive mesh algorithms would use CPU cycles to optimise a mesh before passing it on to the graphics card. With modern GPUs, spending a lot of CPU cycles to make a small improvement to a mesh before sending it to a powerful graphics card doesn’t result in a significant speed-up. Chunked LOD works better, with blocks of geometry being loaded in and out of GPU memory as required. Add to this the fact that we’re working with geographic data and spatial indexing systems all the time, and solutions to the level of detail problem start to present themselves.

 

Links:

NASA Blue Marble: http://visibleearth.nasa.gov/view_cat.php?categoryID=1484

Image Tiler: https://github.com/maptube/FortyTwoGeometry/blob/master/src/fortytwogeometry/ImageTiler.java

ROAM paper: http://dl.acm.org/citation.cfm?id=267028 (and: http://www.cognigraph.com/ROAM_homepage/ )

Happy 10th Birthday, OpenStreetMap!

Today, OpenStreetMap celebrates 10 years of operation, counted from the date of registration. I heard about the project when it was in its early stages, mostly because I knew Steve Coast when I was studying for my Ph.D. at UCL. As a result, I was also able to secure the first ever research grant that focused on OpenStreetMap (and hence Volunteered Geographic Information – VGI) from the Royal Geographical Society in 2005. A lot can be said about being in the right place at the right time!

OSM Interface, 2006 (source: Nick Black)

Having followed the project during this decade, there is much to reflect on – such as open research questions, things that the academic literature has failed to notice about OSM, or the things that we do know about OSM and VGI because of the openness of the project. However, as I was preparing the talk for the INSPIRE conference, I started to think about the start dates of OSM (2004), TomTom Map Share (2007), Waze (2008) and Google Map Maker (2008). While there are conceptual and operational differences between these projects, in terms of ‘knowledge-based peer production systems’ they are fairly similar: all rely on a large number of contributors, all combine a large group of contributors who each contribute a little with a much smaller group of committed contributors who do the more complex work, and all are about mapping. Yet OSM started three years before these other crowdsourced mapping projects, and all of them have more contributors than OSM.

Since OSM is described as the ‘Wikipedia of maps‘, the analogy I started to think of was a parallel history in which, in 2001, as Wikipedia starts, Encarta and Britannica look at the upstart and set up their own crowdsourcing operations, so within three years they are up and running. By 2011, Wikipedia continues as a copyright-free encyclopedia with a sizeable community, but Encarta and Britannica have more contributors and more visibility.

Knowing OSM closely, I felt that this is not a fair analogy. While there are some organisational and contribution practices that can be used to claim ‘it’s the fault of the licence’ or ‘it’s because of the project’s culture’, and therefore justify this unflattering analogy, I sensed that something else is needed to explain what is going on.

TripAdvisor Florence

Then, during my holiday in Italy, I was enjoying the offline TripAdvisor app for Florence, which uses OSM for navigation (in contrast to the online app, which uses Google Maps), and an answer emerged. Within the OSM community, from the start, there has been some tension between the ‘map’ and ‘database’ views of the project. Is it about collecting data so as to make beautiful maps, or is it about building a database that can be used for many applications?

Saying that OSM is about the map means that the analogy is correct, as it is very similar to Wikipedia – you want to share knowledge, so you put it online with a system that allows you to display it quickly, with tools that support easy editing and information sharing. If, on the other hand, OSM is about a database, then OSM is something that is used at the back-end of other applications, much like a DBMS or an operating system. Although there are tools that help you to do things easily and quickly and to check the information that you’ve entered (e.g. displaying the information as a map), the main goal is the building of the back-end.

Maybe a better analogy is to think of OSM as the ‘Linux of maps’, which means that it is an infrastructure project that is expected to have a lot of visibility among the professionals who need it (system managers in the case of Linux, GIS/Geoweb developers for OSM), with a strong community that supports and contributes to it. In the same way that some tech-savvy people know about Linux but most people don’t, I suspect that offline TripAdvisor users don’t notice that they are using OSM; they are just happy to have a map.

The problem with the Linux analogy is that OSM is more than software – it is indeed a database of information about geography from all over the world (and therefore the Wikipedia analogy has its place). It is therefore somewhere in between. In a way, it provides a demonstration of the common claim in GIS circles that ‘spatial is special‘. Geographical information is infrastructure in the same way that operating systems or DBMSs are, but in this case it is not enough to create an empty shell that can be filled in for a specific instance; a significant amount of base information is needed before you can start building your own application with additional information. This is also the philosophical difference that makes the licensing issues more complex!

In short, both the Linux and Wikipedia analogies are inadequate to capture what OSM is. It has been illuminating and fascinating to follow the project over its first decade, and may it continue successfully for many more decades to come.


Visualising similarity

A model of the global economy is, by its very nature, an unwieldy object to work with. There are 40 countries (we want more; that’s coming next) and the economy of each country is described by the economic activity of 35 sectors.

Each sector in each country interacts with every other sector in every other country, creating close to two million interactions: 40 countries × 35 sectors gives 1,400 country-sectors, and 1,400² is 1.96 million.

This is great for wowing potential users of the model with the sheer scale and size of thing, but it makes life pretty hard if you want to ask a question like “what effect has a certain change had on… well, everything?”

This is hard because “everything” here encompasses two million numbers some of which will have gone up and others of which will have gone down.

If you don’t put any effort into visualisation, the output of the model looks absolutely horrible:

Output of a World Input-Output Table

Needless to say, picking interesting information out of such a mass of numbers involves some careful thought. (For the interested, what you’re seeing here is dollar-valued commodity flows between sectors within the Australian economy, the sectors being numbered 1 to 35.)

The paper I’m writing at the moment asks an even trickier question than “what’s going on?”. I’m trying to work out how our model compares with other, more standard, ways of doing this kind of thing. This means making the same change in two models and comparing the results.

One way to boil down lots of information into a far smaller number of ‘things’ is to rank the numbers you’re analysing. This just means putting the numbers into order then saying which number is biggest, which is second-biggest etc.

So in our case, if we make a change to the global economy, instead of looking at a horrifying table of numbers we can just say “Australia was the country most affected by the change. Netherlands was second, Spain tenth, Bulgaria 39th…” and so on.

The advantage of this approach is that, when comparing the results of two models, you can just compare the ranks of the countries and see if they’re similar. If they are, you might be justified in concluding that the models are doing more-or-less the same thing.
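A minimal sketch of that comparison in Python, with made-up impact numbers (the country codes and values are purely illustrative, not real results): rank the country-level impacts from each model, line the rankings up, and use Spearman’s rank correlation as one convenient summary of how similar the two orderings are.

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up country-level impacts from the two models (NOT real results)
countries = ["AUS", "NLD", "ESP", "BGR", "SVK"]
mrio_impact = np.array([12.1, 4.3, 1.8, 0.2, 0.9])    # traditional MRIO model
model_impact = np.array([11.7, 4.6, 1.5, 0.3, 2.4])   # our model

# Rank 1 = most affected (largest impact)
mrio_rank = (-mrio_impact).argsort().argsort() + 1
model_rank = (-model_impact).argsort().argsort() + 1

for c, r1, r2 in zip(countries, mrio_rank, model_rank):
    print(f"{c}: MRIO rank {r1}, our-model rank {r2}")

# Spearman's rho close to 1 suggests the two models order countries similarly
rho, _ = spearmanr(mrio_impact, model_impact)
print(f"Spearman's rho = {rho:.2f}")
```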

It also allows for some nice visualisation. If we write down all the countries in one column in the order of their rank (most-affected by some change we’ve made, to least-affected) using one model, and make a second column where the countries are ordered according to their rank using the other model, we can quickly see where the differences are, particularly if we draw nice lines between the countries to show how their position has changed.

Here’s the outcome of such an experiment:

chn_vec_rank_change_wiot_vs_model
The design for this visualisation was inspired by a similar thing in the work of Hidalgo and Hausmann, see here on p4!

It shows the results of reducing demand for Chinese vehicles by $1M on the global economy in 2010. The left-hand column shows the results using a traditional model (for the interested: it’s called a Multi-Region Input-Output model, or MRIO). The most-affected countries are at the top and the least-affected at the bottom. The right-hand column is the same but for our model.

With the exception of Slovakia, the results look pretty good. The ranks are generally pretty similar which is encouraging. We’re currently trying to find out what’s going on with Slovakia, and I’ll post here if we ever find out!

(Note that Taiwan is not in our model, because the UN doesn’t report trade data for it, as it deems it to be a part of China. I won’t be delving into this international controversy here!)

Building Virtual Worlds

The algorithms required to build virtual worlds like Google Earth and World Wind are really fascinating, but building a virtual world containing real-time city data is something that hasn’t yet been fully explored. Following on from the Smart Cities presentation in Oxford two weeks ago, I’ve taken the agent-based London Underground simulation and made some improvements to the graphics. While I’ve seen systems like Three.js, Unity and Ogre used for some very impressive 3D visualisations, what I wanted to do required a lower-level API that allowed me to make some further optimisations.

Here is the London Underground simulation from the Oxford presentation:

The Oxford NCRM presentation showed the Earth tiled with a single-resolution NASA Blue Marble texture, which became apparent as the view zoomed in to London to show the tube network and the screen-space resolution of the texture map decreased.

The Earth texture and shape need some additional work, which is where “level of detail” (LOD) comes in. The key point here is that most of the work is done by the shader using chunked LOD. If the Earth is represented as a spheroid using, for example, 40 width segments and 40 height slices, and then recursively divided using either quadtree or octree segmentation, we can draw a more detailed Earth model as the user zooms in. By using the same number of points for each sub-mesh, only a single index buffer is needed for all LODs, and no texture coordinates or normals are stored. The shader uses the geodetic latitude and longitude calculated from the Cartesian coordinates passed for rendering, along with the patch min and max coordinates, to get the texture coordinates for every texture tile.
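For readers who want to see the arithmetic, here is a CPU-side Python sketch of the calculation the shader performs; it is an illustration rather than the actual shader code, the spheroid constants are WGS84-like assumptions, and the patch bounds are assumed to be given in radians.

```python
import math

# Illustrative spheroid parameters (WGS84-like); the post's own model may differ
A = 6378137.0            # semi-major axis (m)
F = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def cartesian_to_geodetic(x, y, z, iterations=5):
    """Convert Cartesian (ECEF-style) coordinates to geodetic (lat, lon) in radians."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))  # initial guess
    for _ in range(iterations):
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
        lat = math.atan2(z + E2 * n * math.sin(lat), p)   # standard fixed-point iteration
    return lat, lon

def patch_uv(lat, lon, patch_min, patch_max):
    """Map (lat, lon) to texture coordinates within a patch's min/max bounds (radians)."""
    min_lat, min_lon = patch_min
    max_lat, max_lon = patch_max
    u = (lon - min_lon) / (max_lon - min_lon)
    v = 1.0 - (lat - min_lat) / (max_lat - min_lat)  # flipped v; the real convention may differ
    return u, v
```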

GeoGL_Smartie          GeoGL_WireframeSmartie

The two images above show the Earth using the the NASA Blue Marble texture. The semi-major axis has been increased by 50%, which gives the “smartie” effect and serves to show the oblateness around the equator. The main reason for doing this was to get the coordinate systems and polygon winding around the poles correct.

In order for the level of detail to work, a screen space error tolerance constant (labelled Tau) is defined. The rendering of the tiled earth now works by starting at the top level and calculating a screen space error based on the space that the patch occupies on the screen. If this is greater than Tau, then the patch is split into its higher resolution children, which are then similarly tested for screen space error recursively. Once the screen space error is within the tolerance, Tau, then the patch is rendered.
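A minimal Python sketch of that traversal is below; the Patch class, the error metric and the constants are illustrative assumptions rather than the actual GeoGL code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

TAU = 2.0  # screen-space error tolerance in pixels (assumed value)

@dataclass
class Patch:
    centre: Tuple[float, float, float]   # patch centre in world coordinates
    geometric_error: float               # world-space error of this LOD level
    children: List["Patch"] = field(default_factory=list)

def screen_space_error(patch, eye, viewport_height=1080.0, fov_scale=1.0):
    """Project the patch's geometric error into pixels, based on distance to the eye.

    fov_scale lumps together the field-of-view and projection terms (an assumption).
    """
    dx, dy, dz = (patch.centre[i] - eye[i] for i in range(3))
    distance = max((dx * dx + dy * dy + dz * dz) ** 0.5, 1.0)
    return patch.geometric_error * viewport_height / (distance * fov_scale)

def render_patch(patch, eye, draw):
    """Refine recursively until the projected error is within TAU, then draw the chunk."""
    if screen_space_error(patch, eye) <= TAU or not patch.children:
        draw(patch)                       # error acceptable (or leaf node): render this patch
    else:
        for child in patch.children:      # otherwise descend into the higher-resolution children
            render_patch(child, eye, draw)
```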

GeoGL_Earth    GeoGL_OctTreeEarth

The two images above show a correct rendering of the Earth, along with the underlying mesh. The wireframe shows a triangular patch on the Earth at the closest point to the viewer, which is double the resolution (highlighted with a red line). Octree segmentation has been used for the LODs.

The code has been made as flexible as possible, allowing all the screen error tolerances, mesh slicing and quad/oct tree tiling to be configured to allow for as much experimentation as possible.

The interesting thing about writing a 3D system like this is that it shows that tiling is a fundamental operation in both 2D and 3D maps. In web-based 2D mapping, maps are cut into tiles, usually 256 pixels square, to avoid having to load massive images into the web browser. In 3D, the texture sizes might be bigger but, bearing in mind that Google is reported to be storing around 70TB of texture data for the Earth, there is still the issue of out-of-core rendering. For massive terrain rendering systems, managing the movement of data between GPU buffers, main memory, local disk and the Internet is the key to performance. My feeling was that I would let Google do the massive terrain rendering and high-resolution textures and just concentrate on building programmable worlds that allow exploration, simulation and experimentation with real data.

Finally, here’s a quick look at something new:

VirtualCity

The tiled Earth mesh uses procedural generation to create the levels of detail, so, extending this idea to procedural cities, we can follow the “CGA Shape” methodology outlined in the “Procedural Modeling of Buildings” paper to create our own virtual cities.
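As a flavour of that shape-grammar idea, here is a toy Python sketch loosely in the spirit of CGA Shape (it is not the paper’s algorithm): a facade is split vertically into floors and each floor horizontally into window tiles. All dimensions and names are made up.

```python
# Toy split-grammar sketch, loosely inspired by CGA Shape (not the paper's implementation).
# All dimensions and rule names are illustrative.

def split(length, part):
    """Divide a length into as many equal pieces of roughly 'part' size as fit."""
    n = max(int(length // part), 1)
    return [length / n] * n

def facade(width, height, floor_height=3.0, tile_width=2.5):
    """Return a list of (x, y, w, h) window-tile rectangles for one facade."""
    tiles = []
    y = 0.0
    for fh in split(height, floor_height):    # facade -> floors
        x = 0.0
        for tw in split(width, tile_width):   # floor -> window tiles
            tiles.append((x, y, tw, fh))
            x += tw
        y += fh
    return tiles

# Example: a 10 m wide, 12 m tall facade gives a 4 x 4 grid of tiles
print(len(facade(10.0, 12.0)))  # -> 16
```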

 

Useful Links:

http://tulrich.com/geekstuff/chunklod.html

http://tulrich.com/geekstuff/sig-notes.pdf 

http://peterwonka.net/Publications/pdfs/2006.SG.Mueller.ProceduralModelingOfBuildings.final.pdf

In Celebration of Peter Hall

Peter-Reading

Many will know that the world’s greatest planning academic passed away yesterday after a short illness. He was a great friend of CASA, convincing me to come and run it in 1995 and doing much to establish and support what we continue to do in building a science of cities. I will not eulogise Peter’s achievements but simply make this acknowledgement of his great impact. He was my mentor at Reading University in the late 1960s and 1970s, where I was a lecturer (in Geography) and he the Professor. He got us all started in 1969 when he won a grant from the Centre for Environmental Studies to build land use transport models, thus nurturing a small community of scholars and practitioners that is still in existence and whose influence is slowly but surely increasing. I was fortunate that he appointed me as a research assistant on this grant in 1969, and much of what we do now in CASA emanates from those days. The picture above was taken two years ago when we celebrated his 80th birthday with a festschrift conference. Dave Foot and Erlet Cater are in the picture; they were key to our work on early land use transport models in those golden years that marked the 1960s.

I have blogged the picture of the cake that was baked for Peter’s 80th birthday before, but here it is again: in the image of Ebenezer Howard’s To-morrow: A Peaceful Path to Real Reform.

PHCake

And those wishing to read the festschrift – the book that was produced to celebrate his career – should get hold of a copy.

planning-img

 

Mapped: Journeys to Work

journey_to_work_web

Today the Office for National Statistics released the long-awaited journey-to-work data collected by the 2011 Census in England and Wales. Here it is in all its glory. En masse, you can really see the dominance of London in the South East, as well as the likes of Manchester, Liverpool and Birmingham further north. If you want to pick out specific flows between areas you can use our “Commute.DataShine” tool developed by Oliver O’Brien.

From the ‘iPhone effect’ to the ‘virtual hug’: Is technology restricting or increasing our empathy?

Faced with a vast array of choice when it comes to interacting with those around us, our favoured communication medium will often be the simplest, quickest and most immediately available. But as technology continually develops, the impact of modern communication tools on the quality and depth of interpersonal exchange is increasingly the subject of scrutiny. Further, it remains unclear what increased digital interaction will mean for our social relationships. Of course, for some types of interaction we may actively seek out the least empathic means of communication. Text, email and social media are popular means for initiating break-ups in intimate relationships; they are simple, allow for a clear message, and, importantly, avoid the empathic strain that comes from seeing the consequence of your words face-to-face. Yet for those relationships we want to maintain, the affective impact of our chosen communication method is worth considering.

Break-ups via social media avoid the sharing of empathy associated with visual cues

Social presence theory, introduced by Short, Williams and Christie (1976), argues that the fewer the number of ‘cue systems’ in a communication method, the less ‘warmth and involvement’ users typically feel. This means that as the method of communication becomes more distanced from ‘fully cued’ face-to-face interaction, the level of empathy felt between users reduces. Applied to computer-mediated communication (CMC), this would imply that video-based interaction lends itself to greater empathic connection than voice alone, which in turn is emotionally stronger than text alone. As a case in point, business executives commonly prefer to make important deals in a face-to-face environment rather than via any digital means; it allows more opportunity to read and respond to the body language of the other party.

What is more, it is not just the explicit use of technology that impacts the quality of interpersonal conversation. The ‘iPhone effect’ is a commonly recognised problem, occurring when one person looking at their phone in a social environment has a contagious, anti-social effect, ultimately ending all conversation and eye contact. Indeed, even the symbolic nature of a mobile phone appears to be enough to reduce the quality of face-to-face interactions; research has found that the presence of a mobile phone, even when it does not belong to either party, reduces the empathy reportedly felt between two face-to-face communicators. This finding is attributed to a diversion of attention from the immediate exchange towards an item which symbolises instant information and hyper-connectivity, and makes individuals more likely to miss subtle cues, facial expressions and vocal intonations. Thus, social presence theory has explanatory power, and intuitively has merit.

However, the huge popularity of online forums, virtual games and online relationships indicates that in some contexts digital methods may also lend themselves to the expression of empathy. The hyperpersonal model of CMC, introduced by Joseph Walther, proposes a set of processes to explain how CMC may create an environment where digital text-based communication lends itself to greater desirability and intimacy than equivalent offline interactions. Walther’s model outlines four components that contribute to the process: overattributions of recipient similarity; selective self-presentation and disclosure; thoughtful and reflective message construction; and the idealised impressions of others we interact with.

The anonymity that comes from text-based online communication creates a different dynamic for exchanges. In an anonymous environment, users can feel liberated from judgement of their words. Whilst personal profiles on social networks remain ever popular, giving individuals an opportunity to present themselves in whichever way they choose, there has been a trend towards the use of anonymous social media, with the anonymous sharing apps ‘Whisper’ and ‘Pencourage’, dubbed the ‘anti-social real life Facebook’, attracting millions of users. The anonymity provided in online communities can facilitate deep, empathic connections, as it allows people to disclose more than they feel they would be able to in real life. This is attributed to reduced vulnerability, where what you say or do online cannot be associated with the rest of your (offline) life. People do not have to worry about their non-verbal cues when typing a message; the fear of not using the right words, or of losing control of one’s emotions when speaking, is gone. This process may be particularly prevalent in online support communities, for example cancer support groups, which have been the subject of extensive research into how sensitive online messages are expressed and received.

Yet this anonymity does have a dark side. While the online disinhibition effect in some instances creates an environment for openness and support, the anonymity also lends itself to cyberbullying. ‘Yik Yak’, a localised anonymous sharing app, facilitated cyberbullying in high schools to such a degree that the creators had to respond by geofencing schools so the app could not be accessed on the premises. The mask of anonymity creates an environment where individuals don’t have to own their behaviour and it can be kept entirely separate from their offline identity. There are no repercussions for behaviour, and no clear authority. In addition, users are distanced from seeing the offline reactions to their online behaviour, creating an illusion that the two worlds are separate.

Online anonymity permits freedom to express oneself with no offline repercussions

For better or for worse, digital communication is here to stay. Constantly developing technology allows for ever-changing methods of interaction with close friends and strangers alike, with huge potential for both increasingly life-like digital interactions and more creative text-based communication – but are these developments necessary to enhance communicative empathy? The dichotomy between the virtues and vices of digitised anonymity tends to sway to one side or the other depending on the context in which anonymity is afforded. The market for increasing empathy in digital interactions is clearly expanding, with more and more social cues being made possible via digital links. The idea of the ‘virtual hug’ has been popular in internet chatrooms for decades; now technology is making this a reality, with devices such as the Kickstarter project ‘Frebble’, a handheld accessory which allows two users to ‘hold hands’ regardless of their physical distance, now on the scene. In these cases, decreased anonymity is the goal. However, it should not be forgotten that there are some situations in which anonymity works positively for the expression of empathy, facilitating deeper disclosure on sensitive topics, where digital disinhibition is critical.

The post From the ‘iPhone effect’ to the ‘virtual hug’: Is technology restricting or increasing our empathy? appeared first on CEDE.

London Words

Screen Shot 2014-07-21 at 15.46.02

Above is a Wordle of the messages displayed on the big dot-matrix displays (aka variable message signs) that sit beside major roads in London, over the last couple of months. The larger the word, the more often it is shown on the screens.

The data comes from Transport for London via their Open Data Users platform, through CityDashboard‘s API. We now store some of the data behind CityDashboard, for London and some other cities, for future analysis into key words and numbers for urban informatics.

Below, as another Wordle, are the top words used in tweets from certain London-centric Twitter accounts – those from London-focused newspapers and media organisations, tourism organisations and key London commentators. Common English words (e.g. to, and) are removed. I’ve also removed “London”, “RT” and “amp”.

Screen Shot 2014-07-21 at 15.56.57

Some common words include: police, tickets, City, crash, Boris, Thames, Park, Festival, Bridge, bus, Kids.
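As an illustration of the kind of preprocessing behind these word clouds (not the actual pipeline), here is a short Python sketch that counts the most frequent words in a list of messages, dropping common English words plus “London”, “RT” and “amp”; the stopword list and example messages are made up.

```python
import re
from collections import Counter

# Illustrative stopword list; a real analysis would use a fuller one
STOPWORDS = {"the", "a", "an", "to", "and", "of", "in", "on", "for", "is", "at",
             "with", "this", "that", "it", "be", "are", "you", "we", "from",
             "london", "rt", "amp"}

def top_words(messages, n=50):
    """Count case-insensitive word frequencies across messages, ignoring stopwords."""
    counts = Counter()
    for text in messages:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(n)

# Made-up example messages in the style of variable message signs
print(top_words(["Delays on A40 due to collision", "A40 collision cleared, traffic easing"]))
```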

Finally, here are the notes that OpenStreetMap editors use when they commit changes to the open, user-created map of the world, for the London area:

Screen Shot 2014-07-21 at 16.10.50

Transport and buildings remain a major focus of the voluntary work that contributors carry out in completing and maintaining London’s map.

There is no significance to the colours used in the graphics above. Wordle is a quick-and-dirty way to visualise data like this; we are looking at more sophisticated, and “fairer”, methods as part of ongoing research.

This work is preparatory work for the Big Data and Urban Informatics workshop in Chicago later this summer.

Thanks to Steve and the Big Data Toolkit, which was used in the collection of the Twitter data for CityDashboard.


The 2011 Area Classification for Output Areas

The 2011 Area Classification for Output Areas (2011 Output Area Classification or 2011 OAC) was released by the Office for National Statistics at 9.30am on the 18th July 2014.

Documentation, downloads and other information regarding the 2011 OAC are available from the official ONS webpage: http://www.ons.gov.uk/ons/guide-method/geography/products/area-classifications/ns-area-classifications/ns-2011-area-classifications/index.html.

Further information and a larger array of 2011 OAC resources can also be found at http://www.opengeodemographics.com.

Additionally, an interactive map of the 2011 OAC is available at http://public.cdrc.ac.uk.

For the 2011 release it has been agreed that a less centralised version of the OAC User Group will be beneficial. The new home of the OAC User Group is located at https://plus.google.com/u/0/communities/111157299976084744069 and enables a more decentralised way of organising meetings / events or accessing supporting materials. If you have any questions or comments regarding the classification then this is the place to visit.

This means areaclassification.org.uk will no longer be maintained. There will be no future posts and no upkeep of links or other materials currently available. Any bookmarks you have for this page should be redirected to http://www.opengeodemographics.com

Temporal OAC

As part of an ESRC Secondary Data Analysis Initiative grant Michail Pavlis, Paul Longley and I have been working on developing methods by which temporal patterns of geodemographic change can be modelled.

Much of this work has focused on census-based classifications, such as the 2001 Output Area Classification (OAC) and the 2011 OAC released today. We have been particularly interested in examining methods by which secondary data might be used to create measures enabling the screening of small areas over time, as uncertainty builds up as a result of changes in residential structure. The write-up of this work is currently out for review; however, we have placed the census-based classification created for the years 2001-2011 on the new public.cdrc.ac.uk website, along with a change measure.
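As a rough illustration of the kind of approach involved (not the project’s actual workflow), the sketch below pools standardised census variables for both years, fits a single k-means model with k = 8 clusters, and flags Output Areas whose cluster assignment changes between 2001 and 2011. The input arrays are random stand-ins for the real census variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

rng = np.random.default_rng(0)

# Stand-ins for the real inputs: one row per Output Area, one column per census
# variable, measured on a comparable basis in 2001 and 2011 (here just random data)
n_oas, n_vars = 1000, 20
X_2001 = scale(rng.normal(size=(n_oas, n_vars)))
X_2011 = scale(X_2001 + rng.normal(scale=0.3, size=(n_oas, n_vars)))  # drifted version

# Fit one k-means model on both years pooled, so cluster labels are comparable over time
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack([X_2001, X_2011]))

labels_2001 = km.predict(X_2001)
labels_2011 = km.predict(X_2011)

changed = labels_2001 != labels_2011
print(f"{changed.mean():.1%} of Output Areas change cluster between 2001 and 2011")
```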

Some findings

  • 8 Clusters were found to be of greatest utility for the description of OA change between 2001 and 2011 and included
    • Cluster 1- "Suburban Diversity"
    • Cluster 2- "Ethnicity Central"
    • Cluster 3- "Intermediate Areas"
    • Cluster 4- "Students and Aspiring Professionals"
    • Cluster 5- "County Living and Retirement"
    • Cluster 6- "Blue-collar Suburbanites"
    • Cluster 7- "Professional Prosperity"
    • Cluster 8 – "Hard-up Households"

A map of the clusters in 2001 and 2011 for Leeds is as follows:

  • The changing cluster assignment between 2001 and 2011 reflected
    • Developing "Suburban Diversity"
    • Gentrification of central areas, leading to growing "Students and Aspiring Professionals"
    • Regional variations
      • "Ethnicity Central" more stable between 2001 and 2011 in the South East and London, than in the North West and North East, perhaps reflecting differing structural changes in central areas (e.g. gentrification)
      • "Hard-up Households” are more stable in the North West and North East than the South East or London; South East, and acutely so in London, flows were predominantly towards “Suburban Diversity”

Google’s 3D London gets better

We woke this morning to find Google has made some improvements to its 3D model of London in Google Earth. All the city’s buildings are now based on 45-degree aerial imagery, which should mean a marked improvement in accuracy of building shapes. So how much has it improved?

Firstly to compare the new Google London against an earlier version of itself, here are screenshots of the British Museum:

2010

British_Museum_GE_2010

2014

British_Museum_GE_2014

A mixed improvement. The computer game-style model of 2010 (I believe partly the product of crowdsourced individual 3D building models) is replaced by a continuous meshed surface. But as Apple found two years ago (embarrassingly), this method is prone to errors and artefacts – the BM’s roof is a big improvement, but its columns are now wonky and the complex shapes in the neighbouring rooftops are a bit messy. We should recognise that this is an inevitable consequence of the shift to a more fully automated process – presumably constraints on data size and processing power result in a trade-off between resolution and accuracy. But to remedy this there seems to have been some manual correction to parts of the model – e.g. the London Eye looks touched up (despite some tree-coloured spokes):

London Eye

To compare the model with its main competitor, Apple Maps I’ve done a few screenshots, firstly

St Paul’s Cathedral

Google Earth

St_Pauls_GE

Apple Maps

St_Pauls_Apple

Google’s far superior St Paul’s again suggests manual correction or, possibly, their retention of the original model.

10 Downing Street

For anyone who hasn’t been there (author included) this is Mr Cameron’s back garden.

Google

No.10_GE

Apple

No.10_Apple

Apple have clearly done a better job on the official residence of the First Lord of the Treasury. The contrast and brightness make for a much clearer and more realistic depiction, partly due to Apple’s higher resolution and partly because the time of day of Google’s survey meant more shadows.

Center Point

Chosen because Google are unlikely to have manicured a building site. As you can see, Google still have some work to do to equal Apple’s resolution.

Google

Center_Point_GE

Apple

Center_Point_Apple

Buckingham Palace

Last but not least, the house of someone called Elizabeth Windsor who judging by Google’s model likes to have receptions in her expansive back garden.

Google

Palace_GE

Apple

Palace_Apple

Overall I think it’s fair to say this is a necessary improvement by Google, but still very much a work-in-progress. It is worth mentioning that Google provides a more immersive environment (the interface lets the camera go lower and angle horizontally), whereas Apple’s feels like a diorama (e.g. no sky), albeit one that interacts much more smoothly. And of course Google Earth is much more than just a 3D map. But given their better resolution and seeming clarity of imagery, in my opinion Apple keeps the crown for the best 3D model.

CASA at the Research Methods Festival 2014

As you can see from the image below, we spent three days at the NCRM Research Methods Festival in Oxford (#RMF14) last week.

RMF2014_1

In addition to our presentations in the “Researching the City” session on the Wednesday morning, we were also running a Smart Cities exhibition throughout the festival showcasing how the research has been used to create live visualisations of a city. This included the now famous “Pigeon Simulator”, which allows people to fly around London and is always very popular. The “About CASA” screen on the right of the picture above showed a continuous movie loop of some of CASA’s work.

RMF2014_3

The exhibition was certainly very busy during the coffee breaks and, as always at these types of events, we had some very interesting conversations with people about the exhibits. One discussion with a lawyer about issues around anonymisation of Big Datasets and how you can’t do it in practice made me think about the huge amount of information that we have access to and what we can do with it. Also, the Oculus Rift 3D headset was very popular and over the three days we answered a lot of questions from psychology researchers about the kinds of experiments you could do with this type of device. The interesting thing is that people trying out the Oculus Rift for the first time tended to fall into one of three categories: can’t see the 3D at all, see 3D but with limited effect, or very vivid 3D experience with loss of balance. Personally, I think it’s part psychology and part eye-sight.

Next time I must remember to take pictures when there are people around, but the sweets box got down to 2 inches from the bottom, so it seems to have been quite popular.

RMF2014_4

We had to get new Lego police cars for the London Riots Table (right), but the tactile nature of the Roving Eye exhibit (white table on the left) never fails to be popular. I’ve lost count of how many hours I’ve spent demonstrating this, but people always seem to go from “this is rubbish, pedestrians don’t behave like that”, through to “OK, now I get it, that’s really quite good”. The 3D printed houses also add an element of urban planning that wasn’t there when we used boxes wrapped in brown paper.

RMF2014_2

The iPad wall is shown on the left here, with the London Data Table on the right. Both show a mix of real-time visualisation and archive animations. The “Bombs dropped during the Blitz” visualisation on the London Data Table, which was created by Kate Jones (http://bombsight.org), was very popular, as was the London Riots movie by Martin Austwick.

All in all, I think we had a fairly good footfall despite the sunshine, live Jazz band and wine reception.

Vespucci Institute on citizen science and VGI

The Vespucci initiative has been running for over a decade, bringing together participants from a wide range of academic backgrounds and experiences to explore, in a ‘slow learning’ way, various aspects of geographic information science research. The Vespucci Summer Institutes are week-long summer schools, most frequently held at Fiesole, a small town overlooking Florence. This year, the focus of the first summer institute was on crowdsourced geographic information and citizen science.

101_0083

The workshop was supported by COST ENERGIC (a network that links researchers in the area of crowdsourced geographic information, funded by the EU research programme), the EU Joint Research Centre (JRC), Esri and our Extreme Citizen Science research group. The summer school included about 30 participants and facilitators, ranging from master’s students who are about to start their PhD studies to established professors who came to learn and share knowledge. This is a common feature of the Vespucci Institutes, and the funding from the COST network allowed more early career researchers to participate.

Apart from the pleasant surroundings, the Vespucci Institutes are characterised by relaxed yet detailed discussions that can be carried on over long lunches and coffee breaks, as well as team work in small groups on a task that each group presents at the end of the week. Moreover, the programme is very flexible, so changes and adaptations in response to the participants’ requests and to the general progression of the learning are part of the process.

This is the second time that I am participating in Vespucci Institutes as a facilitator, and in both cases it was clear that participants take the goals of the institute seriously, and make the most of the opportunities to learn about the topics that are explored, explore issues in depth with the facilitators, and work with their groups beyond the timetable.

101_0090

The topics covered in the school were designed to provide a holistic overview of geographical crowdsourcing and citizen science projects, especially in the area where these two types of activity meet. This can be when a group of citizens want to collect and analyse data about local environmental concerns, or oceanographers want to work with divers to record water temperature, or when details emerging from social media are used to understand cultural differences in the understanding of border areas. These are all examples that were suggested by participants, from projects that they are involved in. In addition, citizen participation in flood monitoring and water catchment management, sharing information about local food, and exploring the quality of spatial information that can be used by wheelchair users also came up in the discussion. The crossover between the two areas provided common ground for the participants to explore issues that are relevant to their research interests.

2014-07-07 15.37.55

The holistic aspect mentioned above was a major goal for the school – considering the tools that are used to collect information, engaging and working with the participants, managing the data that participants provide, and ensuring that it is useful for other purposes. To start the process, after introducing the topics of citizen science and volunteered geographic information (VGI), the participants learned about data collection activities, including noise mapping, OpenStreetMap contribution, bird watching, and balloon and kite mapping. As might be expected, the balloon mapping raised a lot of interest and excitement, and this exercise in local mapping was linked to OpenStreetMap later in the week.

101_0061

The experience with data collection provided the context for discussions about data management, interoperability and design aspects of citizen science applications, as well as more detailed presentations from the participants about their work and research interests. With all these details, the participants were ready to work on their group task: to suggest a research proposal in the area of VGI or citizen science. Each group of five participants explored the issues that they agreed on – two groups focused on citizen science projects, another two focused on data management and sustainability, and a final group explored perception mapping and a more social-science-oriented project.

Some of the most interesting discussions were initiated at the request of the participants, such as the exploration of ethical aspects of crowdsourcing and citizen science. This is possible because of the flexibility in the programme.

Now that the institute is over, it is time to build on the connections that started during the wonderful week in Fiesole, and to see how the network of Vespucci alumni develops the ideas that emerged this week.

From Oculus Rift to Facebook: finding money and data in the crowd – Times Higher Education

Crowdsourcing could revolutionise the way scholarly research is funded and conducted over the next few years, an academic has suggested. Andy Hudson-Smith, director and deputy chair of the Bartlett Centre for Advanced Spatial Analysis at University ...

New Paper: Assessing the impact of demographic characteristics on spatial error in VGI features

LISA analysis of positional accuracy for the OSM data set
Building upon our interest in volunteered geographic information (VGI), and extending our previous paper "Assessing Completeness and Spatial Error of Features in Volunteered Geographic Information", we have just published a paper with the rather long title "Assessing the Impact of Demographic Characteristics on Spatial Error in Volunteered Geographic Information Features", in which we explore how demographics affect the quality of VGI data.

Below is the abstract of the paper: 
The proliferation of volunteered geographic information (VGI) such as OpenStreetMap (OSM), enabled by technological advancements, has led to large volumes of user-generated geographical content. While this data is becoming widely used, the quality characteristics of such data remain largely unexplored. An open research question is the relationship between demographic indicators and VGI quality. While earlier studies have suggested a potential relationship between VGI quality and the population density or socio-economic characteristics of an area, such relationships have not been rigorously explored and have mainly remained qualitative in nature. This paper addresses this gap by quantifying the relationship between the demographic properties of a given area and the quality of VGI contributions. We specifically study the demographic characteristics of the mapped area and their relation to two dimensions of spatial data quality, namely the positional accuracy and completeness of the corresponding VGI contributions with respect to OSM, using the Denver (Colorado, US) area as a case study. We use non-spatial and spatial analysis techniques to identify potential associations between demographic data and the distribution of positional and completeness errors found within the VGI data. Generally, the results of our study show a lack of statistically significant support for the assumption that demographic properties affect the positional accuracy or completeness of VGI. While this research is focused on a specific area, our results showcase the complex nature of the relationship between VGI quality and demographics and highlight the need for a better understanding of it. By doing so, we add to the debate on how demographics affect the quality of VGI data and lay the foundation for further work.

The analysis workflow
Full Reference:
Mullen W., Jackson, S. P., Croitoru, A., Crooks, A. T., Stefanidis, A. and Agouris, P., (2014), Assessing the Impact of Demographic Characteristics on Spatial Error in Volunteered Geographic Information Features, GeoJournal. DOI: 10.1007/s10708-014-9564-8
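
For readers curious about the LISA analysis shown in the figure above, the sketch below illustrates in Python (with GeoPandas, PySAL's libpysal/esda and SciPy) how a local Moran's I analysis of positional error, alongside a simple non-spatial correlation against a demographic indicator, could be set up. This is only an illustrative sketch: the input file denver_tracts.gpkg and the columns positional_error and pop_density are assumed names, and it is not the workflow used in the paper.

import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran_Local
from scipy.stats import pearsonr

# Hypothetical input: one polygon per census tract, with a mean positional
# error for OSM features and a demographic indicator already attached.
tracts = gpd.read_file("denver_tracts.gpkg")

# Non-spatial association: simple correlation between a demographic
# indicator and positional error.
r, p = pearsonr(tracts["pop_density"], tracts["positional_error"])
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Spatial association: Local Moran's I (LISA) on positional error, using
# queen-contiguity weights between neighbouring tracts.
w = Queen.from_dataframe(tracts)
w.transform = "r"  # row-standardise the weights
lisa = Moran_Local(tracts["positional_error"], w, permutations=999)

# Keep the local statistic, its pseudo p-value and the Moran scatterplot
# quadrant (1 = high-high cluster, 3 = low-low cluster) for mapping.
tracts["lisa_I"] = lisa.Is
tracts["lisa_p"] = lisa.p_sim
tracts["lisa_quadrant"] = lisa.q
print(tracts.loc[tracts["lisa_p"] < 0.05, ["lisa_quadrant", "lisa_I"]].head())

Significant high-high quadrants would flag tracts where large positional errors cluster together, which could then be compared against demographic variables in the way the abstract describes.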

Research Fellow Post at LSHTM

We are seeking to appoint a Research Fellow to work on an exciting project as part of a randomised controlled trial investigating the impact of living in the East Village (a neighbourhood in the Queen Elizabeth Olympic Park based on active design principles) on physical activity and health.

The post is full-time for two years and will be based in the Healthy Environments Research Programme in the Department of Social and Environmental Health Research at the London School of Hygiene and Tropical Medicine. It will suit a candidate with a strong background in social or environmental epidemiology, spatial analysis and/or quantitative health geography, especially in the field of neighbourhood built and social determinants of health. A higher degree (ideally a PhD) in a relevant field is essential. Skills in quantitative data analysis using longitudinal and/or spatial approaches, as well as some expertise in the use of GIS, are desirable. The successful candidate will be required to collate, create and analyse secondary data on environmental exposures related to physical activity and other health behaviours, and to write up findings for peer-reviewed publication. The post is supervised by Professor Steven Cummins (steven.cummins@lshtm.ac.uk) and Dr Daniel Lewis (daniel.lewis@lshtm.ac.uk).

Closing date: 27th July 2014

https://jobs.lshtm.ac.uk/Vacancy.aspx?ref=HERP01

Interactive map will show you how religious your neighbours are – Milton Keynes MKWeb

The map was designed at UCL's Centre for Advanced Spatial Analysis by Oliver O'Brien, and goes by the name of DataShine. Recording religion - the most religious areas are shown in red - is far from the only thing this map can show, it also shows ...

Interactive map will show you how religious your neighbours are – Northampton Herald and Post

A NEW interactive map will show you just how religious your neighbours are - and that's not all that it can do. The map was designed at UCL's Centre for Advanced Spatial Analysis by Oliver O'Brien, and goes by the name of DataShine. Recording religion ...

Interactive map will show you how religious your neighbours are – Bedfordshire News

A NEW interactive map will show you just how religious your neighbours are - and that's not all that it can do. The map was designed at UCL's Centre for Advanced Spatial Analysis by Oliver O'Brien, and goes by the name of DataShine. Recording religion ...

Interactive map will show you how religious your neighbours are – Luton On Sunday

A NEW interactive map will show you just how religious your neighbours are - and that's not all that it can do. The map was designed at UCL's Centre for Advanced Spatial Analysis by Oliver O'Brien, and goes by the name of DataShine. Recording religion ...

How religious are your neighbours? New interactive map of Britain can tell you… – Metro

The map, called DataShine, was created by Oliver O'Brien at UCL's Centre for Advanced Spatial Analysis and shows, among other things, the atheist hot-spots around the UK. Pulling data from a survey, that map was constructed to show areas where at least ...

How religious are your neighbours? New interactive map could tell you – Derby Telegraph

A NEW map has been released which shows online visitors how religious different parts of the country are. Called DataShine, the map was created at University College London's Centre for Advanced Spatial Analysis and shows the atheist and religious ...

UPM looks to leading models in the design of the city of the future – elEconomista.es

The presentation of the initiative featured a talk by Michael Batty, director of the Centre for Advanced Spatial Analysis (CASA) at University College London and author, among other works, of his latest book, New Science of Cities. Batty stressed the ...

Cities of the future: UPM addresses the urban ecosystem from the outset – iAgua.es

The presentation of the initiative featured a talk by Michael Batty, director of the Centre for Advanced Spatial Analysis (CASA) at University College London and author, among other works, of his latest book, New Science of Cities. Batty stressed the ...

The latest outputs from researchers, alumni and friends at UCL CASA