Researcher in Geography of Health/Medical Geography and GIS

GeoHealth Laboratory, University of Canterbury, Christchurch, New Zealand

Post based in Wellington

 

Researcher in Health Geography, GeoHealth Laboratory, based in Wellington

This is a 2.5-year fixed-term position. The successful applicant will have interests in the following areas: neighbourhoods and health, environmental justice and health, environment and health, impacts of the urban environment on health, transport and health, health inequalities and/or GIS and health. Closing date: 25 September 2014. Ref 2335.

 

http://www.jobs.ac.uk/job/AJL862/research-analyst-spatial-analysis/

http://www.seek.co.nz/job/27137079

 

For more detailed information and to apply online, visit https://ucvacancies.canterbury.ac.nz/psp/ps/EMPLOYEE/HRMS/c/HRS_HRAM.HRS_CE.GBL

 

Enquiries of an academic nature can be made to the GeoHealth Lab Directors: Professor Simon Kingham, simon.kingham@canterbury.ac.nz, tel +64 3 364 2893, or Dr Malcolm Campbell, Malcolm.campbell@canterbury.ac.nz, tel +64 3 364 2987 x 7908.

 

 

About the GeoHealth Laboratory

The Department of Geography, University of Canterbury has a high profile in the field of health/medical geography. A joint venture between the University of Canterbury and the New Zealand Ministry of Health led to the establishment of the GeoHealth Laboratory. The Lab is based in a well-equipped and specifically designated facility on the UC campus in Christchurch and has research interests particularly in environment and health, health inequalities, and GIS and health. For further information see: http://www.geohealth.canterbury.ac.nz/.


Try this 3D rollercoaster for the Oculus Rift headset. Could it help plan cities? – The Guardian

The Bartlett Centre for Advanced Spatial Analysis (CASA) at UCL in London, where Dawkins studies, had set out to show how a virtual rollercoaster could demonstrate to the Grand Designs Live exhibition's 100,000 visitors a more tangible example of how ...

3D Buildings

I’ve been experimenting with 3D buildings in my virtual globe project and it’s now progressed to the point where I can demonstrate it working with dynamically loaded content. The following YouTube clip shows the buildings for London, along with the Thames. I didn’t have the real heights, so buildings are extruded up by a random amount and, unfortunately, so is the river, which is why it looks a bit strange. The jumps as it zooms in and out are me flicking the mouse wheel button, but the YouTube upload seems to make this, and the picture quality, a lot worse than the original:

Performance is an interesting thing, as these movies were all made on my home computer rather than my iMac in the office. The green numbers show the frame rate. My machine is a lot faster as it’s using a Crucial SSD, so the dynamic loading of the GeoJSON files containing the buildings is fast enough to run in real time. The threading and asynchronous loading of tiles hasn’t been completed yet, so, when new tiles are loaded, the rendering stalls briefly.

On demand loading of building tiles is a big step up from using a static scene graph. The way this works is to calculate the ground point that the current view is over and render a square of nine tiles centred on the viewer’s ground location. Calculation of latitude, longitude and height from 3D Cartesian coordinates is an interesting problem that ends up having to use the Newton-Raphson approach. This still needs some work as it’s obvious from the movies that not enough content ahead of the viewer is being drawn. As the view moves around, the 3×3 grid of ground tiles is shuffled around and any new tiles that are required are loaded into the cache.
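To make that conversion concrete, here is a minimal sketch of the standard iterative scheme for recovering geodetic latitude, longitude and height from ECEF Cartesian coordinates on the WGS84 ellipsoid. It is an illustration only (the class and method names are my own, not the project's code) and it ignores the special handling needed very close to the poles.

```java
// Minimal sketch (not the project's actual code): converting ECEF Cartesian
// coordinates to geodetic latitude, longitude and height on the WGS84 ellipsoid
// using the standard iterative refinement of the latitude.
public class Geodetic {
    static final double A = 6378137.0;             // WGS84 semi-major axis (m)
    static final double F = 1.0 / 298.257223563;   // flattening
    static final double E2 = F * (2.0 - F);        // first eccentricity squared

    /** Returns {latitude (rad), longitude (rad), height (m)} for an ECEF point. */
    public static double[] toGeodetic(double x, double y, double z) {
        double lon = Math.atan2(y, x);
        double p = Math.sqrt(x * x + y * y);        // distance from the polar axis
        double lat = Math.atan2(z, p * (1.0 - E2)); // initial guess
        double h = 0.0;
        for (int i = 0; i < 10; i++) {              // converges in a handful of iterations
            double sinLat = Math.sin(lat);
            double n = A / Math.sqrt(1.0 - E2 * sinLat * sinLat); // prime vertical radius
            h = p / Math.cos(lat) - n;              // breaks down near the poles
            double newLat = Math.atan2(z, p * (1.0 - E2 * n / (n + h)));
            if (Math.abs(newLat - lat) < 1e-12) { lat = newLat; break; }
            lat = newLat;
        }
        return new double[] { lat, lon, h };
    }
}
```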

Working on the principle that tiles are going to be loaded from a server, I’ve had to implement a data cache based on the file URI (just like MapTubeD does). When tiles are requested, the GeoJSON files are moved into the local cache, loaded into memory, parsed, triangulated using Poly2Tri, extruded and converted into a 3D mesh. Based on how long the GeoJSON loading is taking on my iMac, a better solution is to pre-compute the 3D geometry to take the load off the display software. At the moment I’m using a Java program I created to make vector tiles (GeoJSON) out of a shapefile for the southeast of England. I’ve assumed the world to be square (-180,-180 to 180,180 degrees), then cut the tiles using a quadtree system so that they’re square in WGS84. Although this gives me a resolution problem and non-square 3D tiles, it works well for testing. The next step is to pre-compute the 3D content and thread the data loading so it works at full speed.
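As an illustration of the tile addressing and caching just described, here is a small sketch under the same "square world" assumption (-180 to 180 degrees in both axes). The class, cache size and file naming are assumptions made for the example, not the actual MapTubeD or vector-tiler code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: quadtree tile addressing over a "square world"
// (-180..180 in both axes) plus a tiny URI-keyed tile cache.
public class TileCacheSketch {
    /** Tile column/row for a point at a given quadtree zoom level. */
    public static int[] tileIndex(double lon, double lat, int zoom) {
        int n = 1 << zoom;                          // n x n tiles at this level
        int tx = (int) Math.floor((lon + 180.0) / 360.0 * n);
        int ty = (int) Math.floor((lat + 180.0) / 360.0 * n); // square-world assumption
        return new int[] { clamp(tx, n), clamp(ty, n) };
    }
    private static int clamp(int v, int n) { return Math.max(0, Math.min(n - 1, v)); }

    /** Small LRU cache keyed on the tile's file URI (e.g. "z_x_y.geojson"). */
    private final Map<String, byte[]> cache =
        new LinkedHashMap<String, byte[]>(64, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > 256;                // keep at most 256 tiles in memory
            }
        };

    public byte[] getTile(String uri) {
        return cache.computeIfAbsent(uri, u -> download(u)); // fetch on a cache miss
    }
    private byte[] download(String uri) { return new byte[0]; /* placeholder fetch */ }
}
```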

Finally, this was just something fun I did as another test. Earth isn’t the only planet, other planets are available (and you can download the terrain maps)…

 

Particles – 3dsMax and Lumion/Unity

Particle Flow is a versatile, powerful particle system for Autodesk’s 3ds Max. It employs an event-driven model, using a special dialog called Particle View, allowing you to combine individual operators that describe particle properties such as shape, speed, direction, and rotation over a period of time into groups called events. Each operator provides a set of parameters, many of which you can animate to change particle behaviour during the event. As the event transpires, Particle Flow continually evaluates each operator in the list and updates the particle system accordingly.

pFlow 3ds Max

To achieve more substantial changes in particle properties and behaviour, you can create a flow. The flow sends particles from event to event using tests, which let you wire events together in series. A test can check, for example, whether a particle has passed a certain age, how fast it’s moving, or whether it has collided with a deflector. Particles that pass the test move on to the next event, while those that don’t meet the test criteria remain in the current event, possibly to undergo other tests. The simple example pictured above details a pFlow dialogue determining the birth of particles linked to a target geometry. The particles can subsequently be baked (using pFlow Baker) into an animation timeline for simple output via .fbx, allowing import into external systems such as Unity or Lumion.

The clip above illustrates the pFlow system imported into Lumion with the addition of a scene created in CityEngine.


BSPS day meeting on the ‘usual residence’ concept and alternative population bases, LSE, 24 October 2014

This forthcoming meeting will be held at LSE on Friday 24th October 2014, 10.30am-5.00pm. The main question to be addressed is: ‘Is the concept of “usual residence” reaching its sell-by date?’ Now that the Government has confirmed that a further Population Census will take place in 2021, it is an opportune time to consider how far the use of alternative population bases should be expanded at the expense of statistics based on usual residence.

 

The meeting is proposed primarily as a scoping exercise. Speakers will introduce the issues, present the results of work on the 2011 census data on alternative population bases and report on the latest thinking at the UN for the 2020 round of censuses. The meeting also provides a forum for representatives of a variety of sectors to express their views on the relative value of the alternatives in the light of societal change. Already on board are Richard Potter, ONS, Ian White, Ludi Simpson and Tony Champion.

 

An afternoon session has been allocated for any others to provide evidence and views on the relative importance of the various population bases in their operations. If you would like to be considered for presenting such evidence to the meeting, either as a formal paper or in panel discussion, please email tony.champion@ncl.ac.uk by Friday 12th September. The full programme for the day will be available shortly afterwards.

 

Registration is now open and will be on a first-come, first-served basis. Please register by emailing pic@lse.ac.uk or phoning the BSPS Secretariat on 020 7955 7666. There is no charge for this meeting and it is open to members and non-members. Details of, and directions to, the meeting room will be sent later.

Geographies of Co-Production: highlights of the RGS/IBG ’14 conference

The three days of the Royal Geographical Society (with IBG), or RGS/IBG, annual conference are always valuable, as they provide an opportunity to catch up with the current themes in (mostly human) Geography. While I spend most of my time in an engineering department, I also like to keep my ‘geographer identity’ up to date, as this is the discipline that I feel most affiliated with.

Since last year’s announcement that the conference would focus on ‘Geographies of Co-Production‘, I had been looking forward to it, as this topic relates to many themes of my research work. Indeed, the conference was excellent – from the opening session to the last one that I attended (a discussion about the co-production of co-production).

Just before the conference, the Participatory Geographies Research Group ran a training day, in which I ran a workshop on participatory mapping. It was good to see the range of people who came to the workshop, many of them at early stages of their research careers and wanting to use participatory methods in their research.

In the opening session on Tuesday night, Uma Kothari raised a very important point about the risk of institutions blaming the participants if a solution that was developed with them fails. There is a need to ensure that bodies like the World Bank or other funders don’t escape their responsibilities and support as a result of participatory approaches. Another excellent discussion came from Keri Facer, who analysed the difficulties of interdisciplinary research based on her experience from the ‘Connected Communities‘ project. Noticing and negotiating the multiple dimensions of difference between research teams is critical for the co-production of knowledge.

By the end of this session, and as was demonstrated throughout the conference, it became clear that there are many different notions of ‘co-production of knowledge’ – sometimes it is about two researchers working together, for others it is about working with policy makers or civil servants, and for yet another group it means inclusive knowledge production with all the people who can be affected by a policy or research recommendation. Moreover, there was even a tension over the type of inclusiveness – should it be based on simple openness (‘if you want to participate, join’), on representation of people within the group, or on an active effort for inclusiveness? The fuzziness of the concept proved to be very useful, as it led to many discussions about ‘what co-production means’ as well as ‘what co-production does’.

Two GIS education sessions were very good (see Patrick’s summary on the ExCiteS blog) and I found Nick Tate and Claire Jarvis’s discussion about the potential of a virtual community of practice (CoP) for GIScience professionals especially interesting. An open question left at the end of the session was about the value of generic expertise (GIScience) versus the way it is used in a specific area. In other words, do we need a CoP to share the way we use the tools and methods, or is it about situated knowledge within a specific domain?

ECR panel (source: Keri Facer)

The Chair’s Early Career panel was, for me, the best session in the conference. Maria Escobar-Tello, Naomi Millner, Hilary Geoghegan and Saffron O’Neill discussed their experiences of working with policy makers, participants, communities and universities. Maria explored the enjoyment of working at the speed of policy making in DEFRA, which also brings with it major challenges in formulating and doing research. Naomi discussed the Productive Margins project, which involved redesigning community engagement, and also noted what looks like very interesting reading: the e-book Problems of Participation: Reflections on Authority, Democracy, and the Struggle for Common Life. Hilary demonstrated how she has integrated her enthusiasm for enthusiasm into her work, while showing how knowledge is co-produced at the boundaries between amateurs and professionals, citizens and scientists. Hilary recommended another important resource – the review Towards co-production in research with communities (especially the diagram/table on page 9). Saffron completed the session with her work on climate change adaptation and the co-production of knowledge with scientists and communities. Her research on community-based climate change visualisation is noteworthy, and suggests ways of engaging people through photos that they take around their homes.

In another session, which focused on mapping, the Connected Communities project appeared again, in the work of Chris Speed, Michelle Bastian & Alex Hale on participatory local food mapping in Liverpool and the lovely website that resulted from their project, Memories of Mr Seel’s Garden. It is interesting to see how methods travel across disciplines and to reflect on what insights should be integrated in future work (while also resisting a feeling of ‘this is naive, you should have done this or that’!).

On the last day of the conference, the sessions on ‘the co-production of data-based living‘ included a lot to contemplate. Rob Kitchin discussed and critiqued smart-city dashboards, highlighting that data is not neutral and that it is sometimes used to decontextualise the city from its history and to exclude non-quantified and sensed forms of knowledge (his new book ‘The Data Revolution’ is just out). Agnieszka Leszczynski continued to develop her exploration of the mediating qualities of techno-social-spatial interfaces, leading to the experience of being in a place becoming intermingled with the experience of the data that you consume and produce in it. Matt Wilson drew parallels between the quantified self and the quantified city, suggesting the concept of ‘self-city-nation’ and the tensions between statements of collaboration and sharing within proprietary commercial systems that aim to extract profit from these actions. Also interesting was Ewa Luger’s discussion of the meaning of ‘consent’ within the Internet of Things project ‘Hub of All Things‘ and the degree to which it is ignored by technology designers.

The highlight of the last day for me was the presentation by Rebecca Lave on ‘Critical Physical Geography‘. This is the idea that it is necessary to combine scientific understanding of hydrology and ecology with social theory. It is also useful in alerting geographers working in human geography to the physical conditions that influence life in specific places. This approach encourages people who are involved in research to ask questions about knowledge production, for example the social justice aspects of access to models, when corporations can have access to weather or flood models that are superior to what is available to the rest of society.

Overall, Wendy Larner’s decision to focus the conference on the co-production of knowledge was timely and created a fantastic conference. It is best to complete this post with her statement on the RGS website:

The co-production of knowledge isn’t entirely new and Wendy is quick to point out that themes like citizen science and participatory methods are well established within geography. “What we are now seeing is a sustained move towards the co-production of knowledge across our entire discipline.” 

 


Even more fishing about architecture

GPS bike tracks visualisation in Madrid (work in progress) – Martin Zaltz Austwick and Gustavo Romanillous

I’ve written in the past (and here) about how difficult I’ve found it to get my hands on a book about visualisation that’s a real page turner – this must be a problem that greater minds than I have struggled with for at least a century or two, but I find the writing about design I see curiously unsatisfying. Like travel writing, I’d rather be doing the thing than reading about it. Which is as fine a way as any to learn, but a bit solipsistic.

Most* visualisation books seem to place themselves between the two camps of Very Shiny Art and Rather Matte Science**. The former category offers portfolios of visualisation work – inspiring, interesting, educational, but rarely something you read from cover to cover. In the science camp, writers might bring in more technical content related to visual perception and theory of design. As science, I don’t usually find these revelations very exciting, and I can’t always understand the way different insights are applied across the design world. I mean to say that information designers are frequently not all that scientific, and their work is often the better for it.

Some books manage this tension better than others. Alberto Cairo’s The Functional Art is a good example, and by focussing on journalism, provides a specific lens. Last year, Isabel Meirelles’s Design for Information covered similar ground, but in greater depth and without the framing of journalistic storytelling. The book deals with both general principles of design and visual perception, treating these in some detail, and drills into lots of specific examples, explaining the implementation and function of interactive as well as static visualisations. One of the useful concepts for relative design beginners like me is that of preattention: what information your brain gets from the visual impression before the bits of your brain that “do” language and numbers and reasoning read the key and the colour scale and the legend. It’s an imperfect analogy, but I think about it as dumping some of the processing onto the graphics card and making life a bit easier for the CPU. Of course, getting a broad snapshot overview before our more conscious processes kick in is perfectly attuned to the paradigm of visualisation – where we try to discern broad patterns before drilling down and exploring the nuances of the data.

Joel Katz’s Designing Information is a more general book on information design – so as well as covering cartography and datavis, it talks about more everyday aspects of the subject such as pedestrian signage, anatomical diagrams and questionnaire design. Katz’s style is light and irreverent at times, and he’s not afraid to offer real critique of some of the more prominent and iconic pieces in the field, whether it’s a hand-drawn map of the USA or Minard‘s now-ubiquitous Napoleonic March. Of the latter, he says a map that takes fifteen minutes to understand is no better than reading the 1,000 words it purports to paint – a provocative statement that nevertheless encourages the reader to be critical of visuals which are data-rich at the expense of the preattentive advantage.

This, and the tension between art and science, seem to me to be the central tensions of digital visualisation and, for all I know (I’m no designer), of design in general. For example: science tells us that people aren’t necessarily very good at discriminating continuous colour scales, so when cartographers make maps they will often bin data into discrete categories. If they use a binning method like Jenks’, these categories will be organised so that entities sharing a similar colour belong to the same group or cluster of data, i.e. the bins do not cover equal intervals. Here, the data undergoes a cluster analysis without the viewer ever knowing, in order to be more readily understandable by the viewer. That seems pretty weird to me, but ok – let’s say we’re more interested in the way people understand the data than the “accuracy” of the way we represent the data – there’s no point in being accurate if people misperceive the data.

Let’s take another scenario, though: you’re creating a map and using circles to represent the magnitude of some quantity at some location. Science tells us that people are bad at assessing area but good at assessing length. So what if we use the radius of the circle to represent the number, rather than the area? But that barely ever happens, because it’s considered that people will assume that area represents the quantity, even as they’re busy perceiving it badly – this would be said by most designers to be highly misleading, emphasising larger circles unduly. So here “accuracy” or “fidelity” take precedence over perceptibility or at least comparability.
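For what it’s worth, the difference between the two circle scalings is tiny in code; the sketch below is purely illustrative (not from any mapping library) and just contrasts area-proportional with radius-proportional symbol sizing.

```java
// Tiny illustration of the two proportional-symbol scalings discussed above.
public class SymbolScaling {
    /** Radius chosen so the circle's AREA is proportional to the value. */
    public static double radiusByArea(double value, double scale) {
        return Math.sqrt(value / Math.PI) * scale;
    }
    /** Radius directly proportional to the value (lengths compare easily,
        but the resulting area grows with the square of the value). */
    public static double radiusByLength(double value, double scale) {
        return value * scale;
    }
    public static void main(String[] args) {
        // A value 4x bigger: area-true radius doubles, length-true radius quadruples.
        System.out.printf("area-true:   %.2f vs %.2f%n", radiusByArea(1, 10), radiusByArea(4, 10));
        System.out.printf("length-true: %.2f vs %.2f%n", radiusByLength(1, 10), radiusByLength(4, 10));
    }
}
```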

Others may not perceive this as the contradiction I do; perhaps these examples can be seen as different sides of the same discussion of what we represent as datavis people and what the viewer emphasizes (whether we, or they, like it or not). And there are certainly people using rainbow scales and pie charts and everything Uncle Tufte said nice boys and girls don’t do. I suspect that for almost any cast iron “to do” or “don’t do”, there will be numerous counterexamples of talented designers breaking these rules and creating good work. It’s difficult to be scientific about what is good art – and I can’t help feeling the same about information design, despite (or perhaps because of) its amenability to the science of visual perception.

—–

*I’m going to leave aside the third category, the code cookbook, which is invaluable but often doesn’t incorporate general principles, and can date rapidly.

**Matt Science is my stage name

——-

Cairo, A., 2013. The Functional Art: An Introduction to Information Graphics and Visualization.

Katz, J., 2012. Designing Information: Human Factors and Common Sense in Information Design. Hoboken, New Jersey: John Wiley & Sons.

Meirelles, I., 2013. Design for Information: An Introduction to the Histories, Theories, and Best Practices Behind Effective Information Visualizations. Rockport.
 

 


Herding Sheep

http://www.bbc.com/news/science-environment-28936251
Have you ever wondered how a farmer and a single sheep dog can herd sheep? A recent paper in the Journal of the Royal Society Interface explains just how. Using GPS data from collars, researchers have developed a computational model which "reproduces key features of empirical data collected from sheep–dog interaction". The model has two simple rules:
"The first rule: The sheepdog learns how to make the sheep come together in a flock. The second rule: Whenever the sheep are in a tightly knit group, the dog pushes them forwards." (BBC)
The movie below (which accompanies the paper) shows some simulation runs with different numbers of agents. The shepherd (blue) approaches and rounds up the agents/sheep (black dots) and then proceeds to herd the group toward the target.



Full Reference:
Strömbom, D., Mann, R. P., Wilson, A. M., Hailes, S., Morton, A. J., Sumpter, D. J. T. and King, A. J. (2014) Solving the shepherding problem: Heuristics for herding autonomous, interacting agents. Journal of the Royal Society Interface, 11: 20140719.
Thanks to @Badnetworker for drawing my attention to this.

The Making of the CASA Oculus Rift Urban Roller Coaster

Earlier this year CASA was invited to create a virtual reality exhibit for the Walking on Water exhibition, partnered with Grand Designs Live at London’s ExCeL. While CASA has a tendency to spend a lot of time thinking seriously about cities and data, it was quickly decided that a fun and novel way to engage the 100,000 or so expected visitors would be an urban roller coaster ride using the Oculus Rift virtual reality headset. Oliver Dawkins, a student on our MRes in Advanced Spatial Analysis, is a leading light in urban visualisation and the Oculus Rift; as such, he kindly offered to lead the development. In the following guest post, Oliver (of http://virtualarchitectures.wordpress.com/) talks us through the development process….

CASA Urban Roller Coaster from Virtual Architectures on Vimeo.
The first tool chosen for this project was the Unity game engine because it provides a very simple means of integrating the Oculus Rift virtual reality headset into a real-time 3D experience. Initial tests were made in Unity with a pre-made roller coaster model downloaded from the Unity Asset Store. However, rather than simply place that roller coaster in an urban setting I wanted to create a track that would be unique to this experience and feel like it might have been part of the urban infrastructure. Due to time constraints it was not possible to model the urban scene from scratch. Instead I decided to generate it procedurally in Autodesk 3ds MAX using a great free script called ghostTown Lite.

Although I like to use SketchUp for 3D modelling wherever possible, 3ds MAX was much better suited to this project as it allowed me to quickly generate the city scene, model the roller coaster track, and animate the path of the ride, all in the one software package. After generating the urban scene I used the car from the Asset Store roller coaster as a guide for modelling my track in the correct proportions.

making_of_casa_roller_coaster_02

The path of the ride through the city was modelled using Bezier splines, first in the Top view to get the rough layout and then in the Front and Left views to ensure the path would clear the buildings in my scene. The experience needed to be comfortable for users who may not have experienced virtual reality before, so it was agreed to exclude loop-the-loops on this occasion. It was also important to avoid bends that would be too sharp for the roller coaster to realistically follow. Once I was happy with the path I welded all the vertices in my splines so that the path could be used to animate the movement of the roller coaster car along the track later.

making_of_casa_roller_coaster_03

Next, sections of track were added to the path I’d created using the 3ds MAX PathDeform (WSM) modifier. As the name suggests, this modifier deforms selected geometry to follow a chosen path. Using this modifier massively simplified the process by allowing my pre-made sections of track to be offset along the length of the path and then stretched, rotated and twisted to fit together as seamlessly as possible. This was the most intricate and time-consuming part of the project.

making_of_casa_roller_coaster_04

In order to minimise the potential for motion sickness with the Oculus Rift, I was careful to keep the rotation of the track as close to the horizontal plane as possible. Supporting struts were then arrayed along the path of the track and positioned in order to anchor it to the rest of the scene. When I was satisfied, a ‘Snapshot’ was made of the geometry in 3ds MAX to create a single mesh ready for export to Unity. At this point the path-deformed sections of track could be deleted, as Unity does not recognise the modifier.

making_of_casa_roller_coaster_05

To create the movement of the roller coaster car along the track, a 3ds MAX dummy helper was constrained to the path I’d created earlier. This generated starting and ending key frames on the animation timeline. The roller coaster car model was then placed on the track and linked to the dummy helper. It is possible in 3ds Max to have the velocity and banking of the dummy calculated automatically, but I found that this did not give a realistic feel. Instead I controlled both by editing the animation key frames, using a camera linked to the dummy for reference. This was time intensive but gave a better result. The city scene, roller coaster and animation were exported as a single FBX file, which is the preferred import format for 3D geometry in Unity.

making_of_casa_roller_coaster_06

Having completed the track and animated the car it was time to assemble the final scene in Unity. First I generated a terrain using a great plugin called World Composer. This enables you to import satellite imagery and terrain heights from Bing maps to give your backdrops a high degree of realism.

making_of_casa_roller_coaster_07

The urban scene and roller coaster were then imported and a skybox and directional light were added. The scene was completed with various assets from the Unity Asset Store including skyscrapers, roof objects, vehicles, idling characters and a flock of birds.

making_of_casa_roller_coaster_09

To prepare the Oculus Rift integration, the OVR camera controller asset from Oculus was placed inside and parented to the roller coaster car. In my initial tests with the Asset Store roller coaster I’d found that the OVR camera would drift from the forward-facing position. This would disorientate the user and contribute to motion sickness. To prevent it, as a quick fix I parented a cube to the front of the roller coaster car, turned off rendering of the cube so it would be invisible, and set the camera controller to follow the cube.

making_of_casa_roller_coaster_10

In order to ensure the best possible virtual experience it is really important to keep the rendered frames per second as high as possible. As the Oculus Rift renders two cameras simultaneously, one for each eye, you need to aim to render 60 fps in Unity so as to ensure the user can expect to experience a frame rate of 30 fps.

CASA Urban Roller Coaster

In order to achieve this I took advantage of occlusion culling in Unity Pro which prevents objects being rendered when they are outside the camera’s field of view or obscured by other objects.

making_of_casa_roller_coaster_11

I also baked the shadows for all static objects in the scene to save them as textures, which saves the processor from calculating them dynamically. The only objects casting dynamic shadows are the roller coaster car and animated characters.

Finally, two simple JavaScript scripts were added. The first would start the roller coaster and play a roller coaster sound file when the ‘S’ key was pressed. The second closed the roller coaster application when the ‘Esc’ key was pressed.

The reception of the CASA Urban Roller Coaster ride at Grand Designs Live was fantastic and I’m really pleased to have participated. It was a great project to work on and an excellent opportunity to learn new techniques in 3ds MAX and Unity. Having my first VR roller coaster under my belt I’m looking forward to building another truly terrifying one when I get the time, hopefully for the Oculus Rift DK2 which has just arrived at CASA.

On a last note I’d like to thank Tom Hoffman of Lake Erie Digital, whose excellent YouTube tutorials on creating roller coasters in 3ds MAX provided a great guide through the most difficult part of this challenging project.

You can follow Oliver’s latest work at  http://virtualarchitectures.wordpress.com/…..


Opportunities at St Andrews for Population Researchers

Three positions in Human Geography – ML1317
Details:

The Department of Geography and Sustainable Development at St Andrews invites applications for three posts in Human Geography (from Lecturer to Professor). Exceptional candidates will be considered for Reader or Professorial positions. We welcome applications from candidates at all career stages who are, or have the potential to be, world leading in their particular specialism.

The successful candidates may have expertise in any area of human geography. Our desire is to appoint individuals with outstanding research capacity whatever their specialism, although expertise in population/health or cities/neighbourhoods may be an advantage. Experience of advanced quantitative methods is desired for at least one of the posts. You will have the opportunity to engage with staff working on large, externally-funded initiatives – the Centre for Population Change (http://www.cpc.ac.uk/) and the Census and Administrative Data Longitudinal Studies hub (http://calls.ac.uk/), whilst those with interests in cities and neighbourhoods will be encouraged to develop links with the Centre for Housing Research (http://www.st-andrews.ac.uk/chr) within the Department. You will also contribute to the Geography teaching programme as appropriate.

Informal enquiries: you are welcome to discuss any of the posts informally with Prof Allan Findlay (Allan.M.Findlay@st-andrews.ac.uk; tel +44 (0)1334 464011), Prof Elspeth Graham (efg@st-andrews.ac.uk; tel +44 (0)1334 463908), or Prof Colin Hunter (Head of Department/co-Head of School; ch69@st-andrews.ac.uk; tel +44 (0)1334 464017). The research interests and recent publications of current members of staff in Geography and Sustainable Development can be found on our website (http://www.st-andrews.ac.uk/gsd/).

Interview Date: Interviews for short-listed candidates will be held in November 2014. Successful candidates will be expected to start as soon as possible and not later than August 2015.

Please indicate clearly in your application which post(s) you are applying for:
Lecturer – ML1317
Reader/Professor (2) – ML1283
Closing Date: Monday 6 October 2014
For further information see Further Particulars ML1317AC FPs.doc

Pigeon Sim goes to LonCon3

The pigeon sim visited LonCon3, the 72nd World Science Fiction Convention, at the ExCeL centre last week. The event ran from Thursday 14th to Monday 18th, with the weekend looking like the best part as lots of people were dressed up as science fiction characters. On the Friday shift we did get Thor on the pigeon sim, along with Batman and Spiderman, though.

steve-IMG_20140813_160435

 

The picture above shows the build after Steve and Stephan had finished putting it all together on the Wednesday afternoon.

IMG_20140815_131901

IMG_20140815_140311

 

The white tents show the “village” area, while we were in the exhibitors’ area which is the elevated part at the top of the pink steps on the right. Our pigeon sim exhibit was about two rows behind the steps on the elevated level.

I’m slightly disappointed that Darwin’s pigeons (real live ones) weren’t arriving until Saturday, so I missed them, but the felt ones were very good:

IMG_20140815_131450

 

And finally, you can’t have a science fiction event without a Millennium Falcon made out of Lego:

IMG_20140815_140611

Made by Craig Stevens, it even folds in half so you can see inside: http://www.loncon3.org/exhibits.php#70 and http://modelmultiverse.com

OpenStreetMap studies (and why VGI not equal OSM)

As far as I can tell, Nelson et al. 2006 ‘Towards development of a high quality public domain global roads database‘ and Taylor & Caquard 2006 Cybercartography: Maps and Mapping in the Information Era are the first peer-reviewed papers that mention OpenStreetMap. Since then, OpenStreetMap has received plenty of academic attention. More ‘conservative’ search engines such as ScienceDirect or Scopus find 286 and 236 peer-reviewed papers that mention the project (respectively). The ACM digital library finds 461 papers in the areas that are relevant to computing and electronics, while Microsoft Academic Research finds only 112. Google Scholar lists over 9,000 (!). Even with the most conservative count from Microsoft, we can see an impact on fields ranging from social science to engineering and physics. So there is a lot to be proud about as a major contribution to knowledge beyond producing maps.

Michael Goodchild, in his 2007 paper that started the research into Volunteered Geographic Information (VGI), mentioned OpenStreetMap (OSM), and since then there has been a lot of conflation between OSM and VGI. In some recent papers you can find statements such as ‘OpenstreetMap is considered as one of the most successful and popular VGI projects‘ or ‘the most prominent VGI project OpenStreetMap‘, so at some level the boundary between the two is being blurred. I’m part of the problem – for example, in the title of my 2010 paper ‘How good is volunteered geographical information? A comparative study of OpenStreetMap and Ordnance Survey datasets‘. However, the more I think about it, the more uncomfortable I am with this equivalence. I would think that the recent line from Neis & Zielstra (2013) is more accurate: ‘One of the most utilized, analyzed and cited VGI-platforms, with an increasing popularity over the past few years, is OpenStreetMap (OSM)‘. I’ll explain why.

Let’s look at the whole area of OpenStreetMap studies. Over the past decade, several types of research papers have emerged.

There is a whole set of research projects that use OSM data because it’s easy to use and free to access (in computer vision or even string theory). These studies are not part of ‘OSM studies’ or VGI, as for them, this is just data to be used.

Edward Betts. CC-By-SA 2.0 via Wikimedia Commons

Second, there are studies about OSM data: quality, evolution of objects and other aspects from researchers such as Peter Mooney, Pascal Neis, Alex Zipf  and many others.

Thirdly, there are studies that also look at the interactions between the contribution and the data – for example, in trying to infer trustworthiness.

Fourth, there are studies that look at the wider societal aspects of OpenStreetMap, with people like Martin Dodge, Chris Perkins, and Jo Gerlach contributing in interesting discussions.

Finally, there are studies of the social practices in OpenStreetMap as a project, with the work of Yu-Wei Lin, Nama Budhathoki, Manuela Schmidt and others.

[Unfortunately, due to academic practices and publication outlets, a lot of these papers are locked behind paywalls, but this is another issue... ]

In short, this is a significant body of knowledge about the nature of the project, the implications of what it produces, and ways to understand the information that emerges from it. Clearly, we now know that OSM produces good data and we know about the patterns of contribution. What is also clear is that the patterns are specific to OSM. Because of the importance of OSM to so many application areas (including illustrative maps in string theory!) these insights are very important. Some of them are expected to also be present in other VGI projects (hence my suggestions for assertions about VGI), but this needs to be done carefully, only when there is evidence from other projects that this is the case. In short, we should avoid conflating VGI and OSM.


How making London greener could make Londoners happier – interactive map – The Guardian

London – with all its tarmac, brick and glass – is actually 38.4% open space and ranks as the world's third greenest major city. Now Daniel Raven-Ellison wants to go further … and make Greater London a national park. His campaign and online petition ...

Tiling the Blue Marble

Following on from my last post about level of detail and Earth spheroids, here is the NASA Blue Marble texture applied to the Earth:

The top level texture is the older composite Blue Marble which shows blue oceans and a very well-defined North Pole ice sheet. Once the view zooms in, all subsequent levels of detail show the next generation Blue Marble from January 2004 with topography [link]. Incidentally, the numbers on the texture are the tile numbers in the format: Z_X_Y, where Z is the zoom level, so the higher the number, the more detailed the texture map. The green lines show the individual segments and textures used to make up the spheroid.

In order to create this, I’ve used the eight Blue Marble tiles which are each 21,600 pixels square, resulting in a full resolution texture which is 86,400 x 43,200 pixels. Rather than try and handle this all in one go, I’ve added the concept of “super-tiles” to my Java tiling program. The eight 21,600 pixel Blue Marble squares are the “super-tiles”, which themselves get tiled into a larger number of 1024 pixel quad tree squares which are used for the Earth textures. The Java class that I wrote to do this can be viewed here: ImageTiler.java. As you can probably see from the GitHub link, this is part of a bigger project which I was originally using to condition 3D building geometry for loading into the globe system. You can probably guess from this what the chunked LOD algorithms are going to be used for next?
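The gist of that slicing step, reduced to a sketch (this is not the actual ImageTiler.java; it does no resampling per zoom level and ignores the partial tiles left over at the edges), looks something like this:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Cut-down illustration of the tiling idea: slice one large "super-tile" image
// into fixed-size squares and name them Z_X_Y, where Z is the zoom level and
// X/Y are the tile column and row within the whole texture.
public class SuperTileSlicer {
    public static void slice(File superTile, File outDir, int zoom,
                             int originX, int originY, int tileSize) throws Exception {
        BufferedImage src = ImageIO.read(superTile);
        int cols = src.getWidth() / tileSize;   // e.g. 21600 / 1024 = 21 full tiles (remainder ignored here)
        int rows = src.getHeight() / tileSize;
        for (int ty = 0; ty < rows; ty++) {
            for (int tx = 0; tx < cols; tx++) {
                BufferedImage tile =
                    src.getSubimage(tx * tileSize, ty * tileSize, tileSize, tileSize);
                // Z_X_Y naming: originX/originY offset this super-tile within the whole Earth texture.
                String name = zoom + "_" + (originX + tx) + "_" + (originY + ty) + ".png";
                ImageIO.write(tile, "png", new File(outDir, name));
            }
        }
    }
}
```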

Finally, one thing that has occurred to me is that tiling is a fundamental algorithm. Whether it’s cutting a huge texture into bits and wrapping it around a spheroid, or projecting 2D maps onto flat planes to make zoomable maps, the need to reduce detail to a manageable level is ever-present. Even the 3D content isn’t immune from tiling, as we end up cutting geometry into chunks and using quad tree or oct tree algorithms. Part of the reason for this rests with the new graphics cards, which mean that progressive mesh algorithms like ROAM (Duchaineau et al.) are no longer effective. Old progressive mesh algorithms would use CPU cycles to optimise a mesh before passing it on to the graphics card. The situation now with modern GPUs is that using a lot of CPU cycles to make a small improvement to a mesh before sending it to a powerful graphics card doesn’t result in a significant speed up. Chunked LOD works better, with blocks of geometry being loaded in and out of GPU memory as required. Add to this the fact that we’re working with geographic data and spatial indexing systems all the time, and solutions to the level of detail problem start to present themselves.

 

Links:

NASA Blue Marble: http://visibleearth.nasa.gov/view_cat.php?categoryID=1484

Image Tiler: https://github.com/maptube/FortyTwoGeometry/blob/master/src/fortytwogeometry/ImageTiler.java

ROAM paper: http://dl.acm.org/citation.cfm?id=267028 (and: http://www.cognigraph.com/ROAM_homepage/ )

Happy 10th Birthday, OpenStreetMap!

Today, OpenStreetMap celebrates 10 years of operation, as counted from the date of registration. I heard about the project when it was in its early stages, mostly because I knew Steve Coast when I was studying for my Ph.D. at UCL. As a result, I was also able to secure the first-ever research grant that focused on OpenStreetMap (and hence Volunteered Geographic Information – VGI) from the Royal Geographical Society in 2005. A lot can be said about being in the right place at the right time!

OSM Interface, 2006 (source: Nick Black)

Having followed the project during this decade, there is much to reflect on – such as thinking about open research questions, things that the academic literature has failed to notice about OSM, or the things that we do know about OSM and VGI because of the openness of the project. However, as I was preparing the talk for the INSPIRE conference, I started to think about the start dates of OSM (2004), TomTom Map Share (2007), Waze (2008) and Google Map Maker (2008). While there are conceptual and operational differences between these projects, in terms of ‘knowledge-based peer production systems’ they are fairly similar: all rely on a large number of contributors, all combine a large group of contributors who contribute little with a much smaller group of committed contributors who do the more complex work, and all are about mapping. Yet OSM started three years before these other crowdsourced mapping projects, and all of them have more contributors than OSM.

Since OSM is described as the ‘Wikipedia of maps‘, the analogy that I was starting to think of was that it’s a bit like a parallel history, in which in 2001, as Wikipedia starts, Encarta and Britannica look at the upstart and set up their own crowdsourcing operations, so within three years they are up and running. By 2011, Wikipedia continues as a copyright-free encyclopedia with a sizeable community, but Encarta and Britannica have more contributors and more visibility.

Knowing OSM closely, I felt that this is not a fair analogy. While there are some organisational and contribution practices that can be used to claim that ‘it’s the fault of the licence’ or ‘it’s because of the project’s culture’ and therefore justify this unflattering analogy to OSM, I sensed that there is something else that should be used to explain what is going on.

TripAdvisor Florence

Then, during my holiday in Italy, I was enjoying the offline TripAdvisor app for Florence, which uses OSM for navigation (in contrast to Google Maps, which is used in the online app), and an answer emerged. Within the OSM community, from the start, there was some tension between the ‘map’ and ‘database’ views of the project. Is it about collecting the data to make beautiful maps, or is it about building a database that can be used for many applications?

Saying that OSM is about the map means that the analogy is correct, as it is very similar to Wikipedia – you want to share knowledge, so you put it online with a system that allows you to display it quickly, with tools that support easy editing and information sharing. If, on the other hand, OSM is about a database, then OSM is about something that is used at the back-end of other applications, a lot like a DBMS or an operating system. Although there are tools that help you to do things easily and quickly and to check the information that you’ve entered (e.g. displaying the information as a map), the main goal is the building of the back-end.

Maybe a better analogy is to think of OSM as the ‘Linux of maps’, which means that it is an infrastructure project, expected to have a lot of visibility among the professionals who need it (system managers in the case of Linux, GIS/Geoweb developers for OSM), with a strong community that supports and contributes to it. In the same way that some tech-savvy people know about Linux but most people don’t, I suspect that TripAdvisor offline users don’t notice that they are using OSM; they are just happy to have a map.

The problem with the Linux analogy is that OSM is more than software – it is indeed a database of information about the geography of the whole world (and therefore the Wikipedia analogy has its place). Therefore, it is somewhere in between. In a way, it provides a demonstration of the common claim in GIS circles that ‘spatial is special‘. Geographical information is infrastructure in the same way that operating systems or DBMSs are, but in this case it’s not enough to create an empty shell that can be filled in for the specific instance; there is a need for a significant amount of base information before you are able to start building your own application with additional information. This is also the philosophical difference that makes the licensing issues more complex!

In short, both the Linux and Wikipedia analogies are inadequate to capture what OSM is. It has been illuminating and fascinating to follow the project over its first decade, and may it continue successfully for many more decades to come.


Visualising similarity

A model of the global economy is, by its very nature, an unwieldy object to work with. There are 40 countries (we want more; that’s coming next) and the economy of each country is described by the economic activity of 35 sectors.

Each sector in each country interacts with each other sector in each other country creating close to two million interactions.

This is great for wowing potential users of the model with the sheer scale and size of the thing, but it makes life pretty hard if you want to ask a question like “what effect has a certain change had on… well, everything?”

This is hard because “everything” here encompasses two million numbers some of which will have gone up and others of which will have gone down.

If you don’t put any effort into visualisation, the output of the model looks absolutely horrible:

Output of a World Input-Output Table

Needless to say, picking interesting information out of such a mass of numbers involves some careful thought. (For the interested, what you’re seeing here is dollar-valued commodity flows between sectors within the Australian economy, the sectors being numbered 1 to 35.)

The paper I’m writing at the moment asks an even trickier question than “what’s going on?”. I’m trying to work out how our model compares with other, more standard, ways of doing this kind of thing. This means making the same change in two models and comparing the results.

One way to boil down lots of information into a far smaller number of ‘things’ is to rank the numbers you’re analysing. This just means putting the numbers into order then saying which number is biggest, which is second-biggest etc.

So in our case, if we make a change to the global economy, instead of looking at a horrifying table of numbers we can just say “Australia was the country most affected by the change. Netherlands was second, Spain tenth, Bulgaria 39th…” and so on.

The advantage to this approach is that, when comparing the results of two models, you can just compare the ranks of the countries and see if they’re similar. If they are, you might be justified in concluding that the models are doing more-or-less the same thing.

It also allows for some nice visualisation. If we write down all the countries in one column in the order of their rank (most-affected by some change we’ve made, to least-affected) using one model, and make a second column where the countries are ordered according to their rank using the other model, we can quickly see where the differences are, particularly if we draw nice lines between the countries to show how their position has changed.
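A sketch of that ranking comparison is below; the country codes and impact values are invented purely to show the mechanics.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Small sketch of the ranking comparison described above. The country names
// and impact numbers are made up for illustration only.
public class RankComparison {
    /** Country names ordered from most- to least-affected. */
    static List<String> rank(Map<String, Double> impact) {
        List<String> order = new ArrayList<>(impact.keySet());
        order.sort(Comparator.comparingDouble((String c) -> impact.get(c)).reversed());
        return order;
    }
    public static void main(String[] args) {
        Map<String, Double> mrio = new HashMap<>();   // the "traditional" model
        Map<String, Double> ours = new HashMap<>();   // our model
        mrio.put("AUS", 5.0); mrio.put("NLD", 3.0); mrio.put("SVK", 1.0);
        ours.put("AUS", 4.8); ours.put("NLD", 2.9); ours.put("SVK", 3.5);
        List<String> a = rank(mrio), b = rank(ours);
        for (String c : a) {                          // report how each country's rank shifts
            System.out.printf("%s: rank %d -> %d%n", c, a.indexOf(c) + 1, b.indexOf(c) + 1);
        }
    }
}
```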

Here’s the outcome of such an experiment:

chn_vec_rank_change_wiot_vs_model
The design for this visualisation was inspired by a similar thing in the work of Hidalgo and Hausmann, see here on p4!

It shows the results of reducing demand for Chinese vehicles by $1M on the global economy in 2010. The left-hand column shows the results using a traditional model (for the interested: it’s called a Multi-Region Input-Output model, or MRIO). The most-affected countries are at the top and the least-affected at the bottom. The right-hand column is the same but for our model.

With the exception of Slovakia, the results look pretty good. The ranks are generally pretty similar which is encouraging. We’re currently trying to find out what’s going on with Slovakia, and I’ll post here if we ever find out!

(Note that Taiwan is not in our model, because the UN doesn’t report trade data for it, as it deems it to be a part of China. I won’t be delving into this international controversy here!)

Building Virtual Worlds

The algorithms required to build virtual worlds like Google Earth and World Wind are really fascinating, but building a virtual world containing real-time city data is something that hasn’t yet been fully explored. Following on from the Smart Cities presentation in Oxford two weeks ago, I’ve taken the agent-based London Underground simulation and made some improvements to the graphics. While I’ve seen systems like Three.js, Unity and Ogre used for some very impressive 3D visualisations, what I wanted to do required a lower level API which allowed me to make some further optimisations.

Here is the London Underground simulation from the Oxford presentation:

The Oxford NCRM presentation showed the Earth tiled with a single resolution NASA Blue Marble texture, which was apparent as the view zoomed in to London to show the tube network and the screen space resolution of the texture map decreased.

The Earth texture and shape needs some additional work, which is where “level of detail” (LOD) comes in. The key point here is that most of the work is done by the shader using chunked LOD. If the Earth is represented as a spheroid using, for example 40 width segments and 40 height slices, then recursively divided using either quadtree or octtree segmentation, we can draw a more detailed Earth model as the user zooms in. By using the same number of points for each sub-mesh, only a single index buffer is needed for all LODs and no texture coordinates or normals are used. The shader uses the geodetic latitude and longitude calculated from the Cartesian coordinates passed for rendering, along with the patch min and max coordinates, to get the texture coordinates for every texture tile.

GeoGL_Smartie          GeoGL_WireframeSmartie

The two images above show the Earth using the NASA Blue Marble texture. The semi-major axis has been increased by 50%, which gives the “smartie” effect and serves to show the oblateness around the equator. The main reason for doing this was to get the coordinate systems and polygon winding around the poles correct.

In order for the level of detail to work, a screen space error tolerance constant (labelled Tau) is defined. The rendering of the tiled earth now works by starting at the top level and calculating a screen space error based on the space that the patch occupies on the screen. If this is greater than Tau, then the patch is split into its higher resolution children, which are then similarly tested for screen space error recursively. Once the screen space error is within the tolerance, Tau, then the patch is rendered.
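In code terms, the selection logic is a simple split-or-render recursion. The sketch below uses assumed names for the patch structure, camera and error estimate rather than the actual classes in this project.

```java
// Schematic sketch of the split-or-render recursion described above.
// Patch, its children and the screen-space error estimate are assumed
// names/structures, not the actual classes in this project.
public class ChunkedLod {
    interface Camera { }
    interface Patch {
        double screenSpaceError(Camera cam);  // projected error in pixels
        Patch[] children();                   // 4 (quadtree) or 8 (octtree) finer patches, or empty
        void render();
    }

    /** Render 'patch', splitting into its children while the error exceeds tau. */
    public static void renderLod(Patch patch, Camera cam, double tau) {
        if (patch.screenSpaceError(cam) <= tau || patch.children().length == 0) {
            patch.render();                   // error within tolerance: draw this level
        } else {
            for (Patch child : patch.children()) {
                renderLod(child, cam, tau);   // otherwise recurse into higher-resolution children
            }
        }
    }
}
```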

GeoGL_Earth    GeoGL_OctTreeEarth

The two images above show a correct rendering of the Earth, along with the underlying mesh. The wireframe shows a triangular patch on the Earth at the closest point to the viewer, which is double the resolution (highlighted with a red line). Octtree segmentation has been used for the LODs.

The code has been made as flexible as possible, allowing all the screen error tolerances, mesh slicing and quad/oct tree tiling to be configured to allow for as much experimentation as possible.

The interesting thing about writing a 3D system like this is that it shows that tiling is a fundamental operation in both 2D and 3D maps. In web-based mapping, 2D maps are cut into tiles, which are usually 256 pixels square, to avoid having to load massive images into the web browser. In 3D, the texture sizes might be bigger, but, bearing in mind that Google are reported to be storing around 70TB of texture data for the Earth, there is still the issue of out-of-core rendering. For the massive terrain rendering systems, management of data being moved between GPU buffers, main memory, local disk and the Internet is the key to performance. My feeling was that I would let Google do the massive terrain rendering and high resolution textures and just concentrate on building programmable worlds that allow exploration, simulation and experimentation with real data.

Finally, here’s a quick look at something new:

VirtualCity

The tiled Earth mesh uses procedural generation to create the levels of detail, so, extending this idea to procedural cities, we can follow the “CGA Shape” methodology outlined in the “Procedural Modeling of Buildings” paper to create our own virtual cities.

 

Useful Links:

http://tulrich.com/geekstuff/chunklod.html

http://tulrich.com/geekstuff/sig-notes.pdf 

http://peterwonka.net/Publications/pdfs/2006.SG.Mueller.ProceduralModelingOfBuildings.final.pdf

In Celebration of Peter Hall

Peter-Reading

Many will know that the world’s greatest planning academic passed away yesterday after a short illness. He was a great friend of CASA, convincing me to come and run it in 1995 and doing much to establish and support what we continue to do in building a science of cities. I will not eulogise Peter’s achievements but simply make this acknowledgement of his great impact. He was my mentor at Reading University in the late 1960s and 1970s, where I was a lecturer (in Geography) and he the Professor. He got us all started in 1969 when he won a grant from the Centre for Environmental Studies to build land use transport models, thus nurturing a small community of scholars and practitioners that is still in existence and whose influence is slowly but surely increasing. I was fortunate that he appointed me as a research assistant on this grant in 1969, and much of what we do now in CASA emanates from those days. The picture above was taken two years ago when we celebrated his 80th birthday with a festschrift conference. Dave Foot and Erlet Cater are in the picture; they were key to our work on early land use transport models in those golden years which marked the 1960s.

I have blogged the picture of the cake that was baked for Peter’s 80th birthday before but here it is again: in the image of Ebenezer Howard’s Tomorrow: A Peaceful Path to Reform.

PHCake

And those wishing to read the festschrift – the book that was produced for him to celebrate his career – should get hold of a copy of the book:

planning-img

 

Mapped: Journeys to Work

journey_to_work_web

Today the Office for National Statistics released the long awaited journey to work data collected by the 2011 Census in England and Wales. Here it is in all its glory. En masse you can really see the dominance of London in the South East as well as the likes of Manchester, Liverpool and Birmingham further north. If you want to pick out specific flows between areas you can use our “Commute.DataShine” tool developed by Oliver O’Brien.

From the ‘iPhone effect’ to the ‘virtual hug’: Is technology restricting or increasing our empathy?

Faced with a vast array of choice when it comes to interacting with those around us, our favoured communication medium will often be the simplest, quickest and most immediately available. But as technology continually develops, the impact of modern communication tools on the quality and depth of interpersonal exchange is increasingly the subject of scrutiny. Further, it remains unclear what increased digital interaction will mean for our social relationships. Of course, for some types of interactions we may actively seek out the least empathic means of communication. Text, email and social media are popular means for initiating breakups in intimate relationships; they are simple, allow for a clear message, and importantly, avoid the empathic strain that comes from seeing the consequence of your words face-to-face. Yet for those relationships we want to maintain, the affective impact of our chosen communication method is worth considering.

Break-ups via social media avoid the sharing of empathy associated with visual cues


The social presence theory, introduced by Short, Williams and Christie (1976), argues that the fewer the number of ‘cue systems’ in a communication method, the less ‘warmth and involvement’ users typically feel. This means that as the method of communication used becomes more distanced from ‘fully cued’ face-to-face interaction, the level of empathy felt between users reduces. Applied to computer-mediated communication (CMC), this would imply that video-based interaction would lend itself to greater empathic connection than just voice capabilities, which in turn would be emotionally stronger than just text. As a case-in-point, business executives commonly prefer to make important deals in a face-to-face environment instead of via any digital means; it allows for more opportunity to read and respond to the body language of the other party.

What is more, it is not just the explicit use of technology that impacts the quality of interpersonal conversation. The ‘iPhone effect’ is a commonly recognised problem, occurring when one person looking at their phone in a social environment has a contagious, anti-social effect, ultimately ending all conversation and eye contact. Indeed, even the symbolic nature of a mobile phone appears to be enough to reduce the quality of face-to-face interactions; research has found that the presence of a mobile phone, even when it does not belong to either party, reduces the empathy reportedly felt between two face-to-face communicators. This finding is attributed to a diversion of attention from the immediate exchange towards an item which symbolises instant information and hyper-connectivity, and makes individuals more likely to miss subtle cues, facial expressions and vocal intonations. Thus, the social presence theory has explanatory power, and intuitively has merit.

However, the huge popularity of online forums, virtual games and online relationships indicate that in some contexts, digital methods may too lend themselves to the expression of empathy. The hyperpersonal model of CMC, introduced by Joseph Walther, proposes a set of processes to explain how CMC may create an environment where digital text-based communication lends itself to greater desirability and intimacy than equivalent offline interactions. Walther’s model outlines four components that contribute to the process: overattributions of recipient similarity; selective self-presentation and disclosure; thoughtful and reflective message construction; and the idealised impressions of others we interact with.

The anonymity that comes from text-based online communication creates a different dynamic for exchanges. In an anonymous environment, users can feel liberated from judgement on their words. Whilst personal profiles on social networks remain ever popular, giving an opportunity to present the individual in whichever way they choose, there has been a trend towards the use of anonymous social media, with the anonymous sharing apps ‘Whisper’ and ‘Pencourage’, dubbed the ‘anti-social real life Facebook’, attracting millions of users. The anonymity that is provided in online communities can facilitate deep, empathic connections, as it allows people to disclose more than they feel they would be able to in real life. This is attributed to reduced vulnerability, where what you may say or do online cannot be associated with the rest of your (offline) life. People do not have to worry about their non-verbal cues when typing a message; the fear of not using the right words, or losing control of one’s emotions when speaking, is gone. This process may be particularly prevalent in online support communities, for example cancer support groups, which have been the subject of extensive research into how sensitive online messages are expressed and received.

Yet this anonymity does have a dark side. While the online disinhibition effect in some instances creates an environment for openness and support, the anonymity also lends itself to cyberbullying. ‘Yik Yak’, a localised anonymous sharing app, facilitated cyberbullying in high schools to such a degree that the creators had to respond by geofencing schools so the app could not be accessed on the premises. The mask of anonymity creates an environment where individuals don’t have to own their behaviour and it can be kept entirely separate from their offline identity. There are no repercussions for behaviour, and no clear authority. In addition, users are distanced from seeing the offline reactions to their online behaviour, creating an illusion that the two worlds are separate.

Online anonymity permits freedom to express oneself with no offline repercussions


For better or for worse, digital communication is here to stay. Constantly developing technology allows for ever-changing methods of interaction with close friends and strangers alike, with huge potential for both increasingly life-like digital interactions and more creative text-based communication – but are these developments necessary to enhance communicative empathy? The dichotomy between the virtues and vices of digitised anonymity tends to sway to one side or the other depending on the context in which anonymity is afforded. The market for increasing empathy in digital interactions is clearly expanding, with more and more social cues being made possible via digital links. The idea of the ‘virtual hug’ has been popular in internet chatrooms for decades; now technology is making this a reality, with devices such as the Kickstarter project ‘Frebble’, a handheld accessory which allows two users to ‘hold hands’ regardless of their physical distance, now on the scene. In these cases, decreased anonymity is the goal. However, it should not be forgotten that there are some situations in which anonymity works positively for the expression of empathy, facilitating deeper disclosure on sensitive topics, where digital disinhibition is critical.

The post From the ‘iPhone effect’ to the ‘virtual hug’: Is technology restricting or increasing our empathy? appeared first on CEDE.

London Words

Screen Shot 2014-07-21 at 15.46.02

Above is a Wordle of the messages displayed on the big dot-matrix displays (aka variable message signs) that sit beside major roads in London, over the last couple of months. The larger the word, the more often it is shown on the screens.

The data comes from Transport for London via their Open Data Users platform, through CityDashboard‘s API. We now store some of the data behind CityDashboard, for London and some other cities, for future analysis into key words and numbers for urban informatics.

Below, as another Wordle, are the top words used in tweets from certain London-centric Twitter accounts – those from London-focused newspapers and media organisations, tourism organisations and key London commentators. Common English words (e.g. to, and) are removed. I’ve also removed “London”, “RT” and “amp”.

Screen Shot 2014-07-21 at 15.56.57

Some common words include: police, tickets, City, crash, Boris, Thames, Park, Festival, Bridge, bus, Kids.
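For anyone wanting to reproduce this kind of count, the processing behind the word clouds is just word frequency with a stop list. A rough sketch follows; the tiny stopword set and the example messages are made up for illustration, and the real stop list is much longer.

```python
import re
from collections import Counter

# A deliberately small set of common English words, plus the terms removed in the post.
STOPWORDS = {"the", "to", "and", "a", "of", "in", "is", "for", "on", "at", "it",
             "london", "rt", "amp"}

def top_words(texts, n=20):
    """Count word frequencies across a list of messages or tweets, ignoring stopwords."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(n)

# Invented example messages of the kind shown on the variable message signs.
print(top_words(["Delays on the A40 at Hanger Lane", "Police incident near London Bridge"]))
```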

Finally, here’s the notes that OpenStreetMap editors use when they commit changes to the open, user-created map of the world, for the London area:

Screen Shot 2014-07-21 at 16.10.50

Transport and buildings remain a major focus of the voluntary work that contributors are carrying out to complete and maintain London’s map.

There is no significance to the colours used in the graphics above. Wordle is a quick-and-dirty way to visualise data like this; we are looking at more sophisticated, and “fairer”, methods as part of ongoing research.

This work is preparatory work for the Big Data and Urban Informatics workshop in Chicago later this summer.

Thanks to Steve and the Big Data Toolkit, which was used in the collection of the Twitter data for CityDashboard.


The 2011 Area Classification for Output Areas

The 2011 Area Classification for Output Areas (2011 Output Area Classification or 2011 OAC) was released by the Office for National Statistics at 9.30am on the 18th July 2014.

Documentation, downloads and other information regarding the 2011 OAC are available from the official ONS webpage: http://www.ons.gov.uk/ons/guide-method/geography/products/area-classifications/ns-area-classifications/ns-2011-area-classifications/index.html.

Further information and a larger array of 2011 OAC resources can also be found at http://www.opengeodemographics.com.

Additionally, an interactive map of the 2011 OAC is available at http://public.cdrc.ac.uk.

For the 2011 release it has been agreed that a less centralised version of the OAC User Group will be beneficial. The new home of the OAC User Group is located at https://plus.google.com/u/0/communities/111157299976084744069 and enables a more decentralised way of organising meetings / events or accessing supporting materials. If you have any questions or comments regarding the classification then this is the place to visit.

This means areaclassification.org.uk will no longer be maintained. There will be no future posts and no upkeep of links or other materials currently available. Any bookmarks you have for this page should be redirected to http://www.opengeodemographics.com

Temporal OAC

As part of an ESRC Secondary Data Analysis Initiative grant Michail Pavlis, Paul Longley and I have been working on developing methods by which temporal patterns of geodemographic change can be modelled.

Much of this work has been focused on census-based classifications, such as the 2001 Output Area Classification (OAC) and the 2011 OAC released today. We have been particularly interested in examining methods by which secondary data might be used to create measures for screening small areas over time, as uncertainty builds up as a result of change in residential structure. The write-up of this work is currently out for review; however, we have placed the census-based classification created for the years 2001-2011 on the new public.cdrc.ac.uk website, along with a change measure.
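For readers unfamiliar with how OAC-style classifications are built, they rest on k-means clustering of standardised census variables. The sketch below (using scikit-learn) shows that general idea only; the variable layout, the choice of k=8 and the centroid-matching comment at the end are assumptions for illustration, not the project’s actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def classify_output_areas(census_matrix, k=8, seed=0):
    """Cluster Output Areas into k geodemographic groups from standardised census variables.

    census_matrix has one row per Output Area and one column per census variable
    (e.g. age bands, tenure, qualifications) for a single census year.
    """
    z = StandardScaler().fit_transform(census_matrix)                # put variables on a common scale
    model = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(z)
    return model.labels_, model.cluster_centers_

# Toy demonstration on random data standing in for 2001 census variables.
labels_2001, centres_2001 = classify_output_areas(np.random.rand(1000, 40))

# One simple change measure: assign the 2011 rows to the nearest 2001 centroid
# and flag the Output Areas whose cluster membership differs between the two years.
```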

Some findings

  • 8 clusters were found to be of greatest utility for the description of OA change between 2001 and 2011 and included:
    • Cluster 1 - "Suburban Diversity"
    • Cluster 2 - "Ethnicity Central"
    • Cluster 3 - "Intermediate Areas"
    • Cluster 4 - "Students and Aspiring Professionals"
    • Cluster 5 - "County Living and Retirement"
    • Cluster 6 - "Blue-collar Suburbanites"
    • Cluster 7 - "Professional Prosperity"
    • Cluster 8 - "Hard-up Households"

A map of the clusters in 2001 and 2011 for Leeds is as follows:

  • The changing cluster assignment between 2001 and 2011 reflected
    • Developing "Suburban Diversity"
    • Gentrification of central areas, leading to growing "Students and Aspiring Professionals"
    • Regional variations
      • "Ethnicity Central" more stable between 2001 and 2011 in the South East and London, than in the North West and North East, perhaps reflecting differing structural changes in central areas (e.g. gentrification)
      • "Hard-up Households” are more stable in the North West and North East than the South East or London; South East, and acutely so in London, flows were predominantly towards “Suburban Diversity”

Google’s 3D London gets better

We woke this morning to find Google has made some improvements to its 3D model of London in Google Earth. All the city’s buildings are now based on 45-degree aerial imagery, which should mean a marked improvement in accuracy of building shapes. So how much has it improved?

Firstly to compare the new Google London against an earlier version of itself, here are screenshots of the British Museum:

2010

British_Museum_GE_2010

2014

British_Museum_GE_2014

A mixed improvement. The computer game-style model of 2010 (I believe partly the product of crowdsourced individual 3D building models) is replaced by a continuous meshed surface. But as Apple found two years ago (embarrassingly), this method is prone to the inclusion of errors and artefacts – the BM’s roof is a big improvement, but its columns are now wonky and the complex shapes in the neighbouring rooftops are a bit messy. We should recognise, though, that this is an inevitable consequence of the shift to a more fully automated process – presumably constraints on data size and processing power result in a trade-off between resolution and accuracy. To remedy this there seems to have been some manual correction to parts of the model – e.g. the London Eye looks touched up (despite some tree-coloured spokes):

London Eye

To compare the model with its main competitor, Apple Maps, I’ve done a few screenshots, firstly:

St Paul’s Cathedral

Google Earth

St_Pauls_GE

Apple Maps

St_Pauls_Apple

Google’s far superior St Paul’s again suggests manual correction or, possibly, their retention of the original model.

10 Downing Street

For anyone who hasn’t been there (author included) this is Mr Cameron’s back garden.

Google

No.10_GE

Apple

No.10_Apple

Apple have clearly done a better job on the official residence of the First Lord of the Treasury. The contrast and brightness make for a much clearer and more realistic depiction, partly due to Apple’s higher resolution and partly because the time of day of Google’s survey meant more shadows.

Center Point

Chosen because Google are unlikely to have manicured a building site. As you can see, Google still have some work to do to equal Apple’s resolution.

Google

Center_Point_GE

Apple

Center_Point_Apple

Buckingham Palace

Last but not least, the house of someone called Elizabeth Windsor who, judging by Google’s model, likes to have receptions in her expansive back garden.

Google

Palace_GE

Apple

Palace_Apple

Overall I think it’s fair to say this is a necessary improvement by Google, but still very much a work in progress. It is worth mentioning that Google provides a more immersive environment (the interface lets the camera go lower and angle horizontally) whereas Apple’s feels like a diorama (e.g. no sky), albeit one that interacts much more smoothly. And of course Google Earth is much more than just a 3D map. But given their better resolution and the apparent clarity of their imagery, in my opinion Apple keeps the crown for the best 3D model.

CASA at the Research Methods Festival 2014

As you can see from the image below, we spent three days at the NCRM Research Methods Festival in Oxford (#RMF14) last week.

RMF2014_1

In addition to our presentations in the “Researching the City” session on the Wednesday morning, we were also running a Smart Cities exhibition throughout the festival showcasing how the research has been used to create live visualisations of a city. This included the now famous “Pigeon Simulator”, which allows people to fly around London and is always very popular. The “About CASA” screen on the right of the picture above showed a continuous movie loop of some of CASA’s work.

RMF2014_3

The exhibition was certainly very busy during the coffee breaks and, as always at these types of events, we had some very interesting conversations with people about the exhibits. One discussion with a lawyer about issues around anonymisation of Big Datasets and how you can’t do it in practice made me think about the huge amount of information that we have access to and what we can do with it. Also, the Oculus Rift 3D headset was very popular and over the three days we answered a lot of questions from psychology researchers about the kinds of experiments you could do with this type of device. The interesting thing is that people trying out the Oculus Rift for the first time tended to fall into one of three categories: can’t see the 3D at all, see 3D but with limited effect, or very vivid 3D experience with loss of balance. Personally, I think it’s part psychology and part eye-sight.

Next time I must remember to take pictures when there are people around, but the sweets box got down to 2 inches from the bottom, so it seems to have been quite popular.

RMF2014_4

 

 

We had to get new Lego police cars for the London Riots Table (right), but the tactile nature of the Roving Eye exhibit (white table on the left) never fails to be popular. I’ve lost count of how many hours I’ve spent demonstrating this, but people always seem to go from “this is rubbish, pedestrians don’t behave like that”, through to “OK, now I get it, that’s really quite good”. The 3D printed houses also add an element of urban planning that wasn’t there when we used boxes wrapped in brown paper.

RMF2014_2

The iPad wall is shown on the left here with the London Data Table on the right. Both show a mix of real-time visualisation and archive animations. The “Bombs dropped during the Blitz” visualisation on the London Data Table, which was created by Kate Jones (http://bombsight.org), was very popular, as was the London Riots movie by Martin Austwick.

All in all, I think we had a fairly good footfall despite the sunshine, live Jazz band and wine reception.

 

 

 

Vespucci Institute on citizen science and VGI

The Vespucci initiative has been running for over a decade, bringing together participants from wide range of academic backgrounds and experiences to explore, in a ‘slow learning’ way, various aspects of geographic information science research. The Vespucci Summer Institutes are week long summer schools, most frequently held at Fiesole, a small town overlooking Florence. This year, the focus of the first summer institute was on crowdsourced geographic information and citizen science.

101_0083

The workshop was supported by COST ENERGIC (a network that links researchers in the area of crowdsourced geographic information, funded by the EU research programme), the EU Joint Research Centre (JRC), Esri and our Extreme Citizen Science research group. The summer school included about 30 participants and facilitators, ranging from masters students who are about to start their PhD studies to established professors who came to learn and share knowledge. This is a common feature of the Vespucci Institutes, and the funding from the COST network allowed more early career researchers to participate.

Apart from the pleasant surroundings, Vespucci Institutes are characterised by relaxed yet detailed discussions that can be carried over long lunches and coffee breaks, as well as team work in small groups on a task that each group presents at the end of the week. Moreover, the programme is very flexible, so changes and adaptations in response to participants’ requests, and to the general progression of the learning, are part of the process.

This is the second time that I have participated in a Vespucci Institute as a facilitator, and in both cases it was clear that participants take the goals of the institute seriously, making the most of the opportunities to learn about the topics being explored, to examine issues in depth with the facilitators, and to work with their groups beyond the timetable.

101_0090

The topics covered in the school were designed to provide a holistic overview of geographical crowdsourcing or citizen science projects, especially in the area where these two types of activity meet. This can be when a group of citizens wants to collect and analyse data about local environmental concerns, or oceanographers want to work with divers to record water temperature, or when details emerging from social media are used to understand cultural differences in the understanding of border areas. These are all examples suggested by participants from projects that they are involved in. In addition, citizen participation in flood monitoring and water catchment management, sharing information about local food, and exploring the data quality of spatial information that can be used by wheelchair users also came up in the discussion. The crossover between the two areas provided common ground for the participants to explore issues relevant to their research interests.

2014-07-07 15.37.55

The holistic aspect mentioned above was a major goal for the school: considering the tools used to collect information, engaging and working with the participants, managing the data that participants provide, and ensuring that it is useful for other purposes. To start the process, after introducing the topics of citizen science and volunteered geographic information (VGI), the participants learned about data collection activities, including noise mapping, OpenStreetMap contribution, bird watching, and balloon and kite mapping. As can be expected, the balloon mapping raised a lot of interest and excitement, and this exercise in local mapping was linked to OpenStreetMap later in the week.

101_0061

The experience with data collection provided the context for discussions about data management and interoperability and the design aspects of citizen science applications, as well as more detailed presentations from the participants about their work and research interests. With all these details, the participants were ready to work on their group task: to suggest a research proposal in the area of VGI or citizen science. Each group of five participants explored the issues they agreed on: two groups focused on citizen science projects, another two on data management and sustainability, and a final group explored perception mapping and a more social-science-oriented project.

Some of the most interesting discussions were initiated at the request of the participants, such as the exploration of the ethical aspects of crowdsourcing and citizen science. This was possible because of the flexibility of the programme.

Now that the institute is over, it is time to build on the connections that started during the wonderful week in Fiesole, and see how the network of Vespucci alumni develop the ideas that emerged this week.

 


From Oculus Rift to Facebook: finding money and data in the crowd – Times Higher Education


Times Higher Education

From Oculus Rift to Facebook: finding money and data in the crowd
Times Higher Education
Crowdsourcing could revolutionise the way scholarly research is funded and conducted over the next few years, an academic has suggested. Andy Hudson-Smith, director and deputy chair of the Bartlett Centre for Advanced Spatial Analysis at University ...

The latest outputs from researchers, alumni and friends at UCL CASA