Hex Bug Spiders with Computer Vision

Following on from my previous post about controlling the Hex Bug spiders from a computer, I’ve added a computer vision system using a cheap web cam to allow them to be tracked. The web cam I’m using is a Logitech C270, chosen mainly because it was the cheapest one in the shop (£10).

I’ve added a red cardboard marker to the top of the spider and used the OpenCV library in Java through the JavaCV port. The reason for using Java is to allow linking to agent-based modelling software like NetLogo at a later date. You can’t see the web cam in the picture because it is suspended on the aluminium pole to the left, along with the projector and a Kinect. The picture shows the Hex Bug spider combined with Martin Austwick’s Roving Eye exhibit from the CASA Smart Cities conference in April.

The Roving Eye exhibit is a Processing sketch running on the iMac which projects ‘eyeballs’ onto the table. It uses a Kinect camera so that the eyeballs avoid any physical objects placed on the table, for example a brown paper parcel or a Hex Bug spider.

Because of time constraints I’ve used a very simple computer vision approach based on image moments to calculate the centre of red in the image (the spider) and the centre of blue in the image (the target). This is done by transforming the RGB image into HSV space and thresholding to get a red-only image and a blue-only image. The moments calculation is then used to find the centres of the red and blue markers in camera space. In the image above you can see the laptop running the spider control software, with the camera view on the screen showing the spider and target locations.
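For anyone wanting to reproduce this, here is a minimal sketch of the thresholding and moments steps. I’ve written it against the plain OpenCV Java bindings rather than the JavaCV wrappers my code actually uses, and the HSV ranges are illustrative guesses rather than my calibrated values. One wrinkle worth noting: red wraps around zero on OpenCV’s hue axis (0–180), so it needs two threshold ranges OR-ed together.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.imgproc.Moments;

public class MarkerFinder {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // load native OpenCV

    /** Centroid of the white pixels in a binary mask, via image moments. */
    static Point centroid(Mat mask) {
        Moments m = Imgproc.moments(mask);
        if (m.m00 == 0) return null;                 // nothing matched the threshold
        return new Point(m.m10 / m.m00, m.m01 / m.m00);
    }

    /** Centre of the blue target marker (illustrative HSV range). */
    static Point findBlueTarget(Mat bgrFrame) {
        Mat hsv = new Mat(), mask = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, new Scalar(100, 100, 50), new Scalar(130, 255, 255), mask);
        return centroid(mask);
    }

    /** Centre of the red spider marker. Red wraps around hue 0,
     *  so two ranges are thresholded and combined. */
    static Point findRedSpider(Mat bgrFrame) {
        Mat hsv = new Mat(), lo = new Mat(), hi = new Mat(), mask = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, new Scalar(0, 100, 50), new Scalar(10, 255, 255), lo);
        Core.inRange(hsv, new Scalar(170, 100, 50), new Scalar(180, 255, 255), hi);
        Core.bitwise_or(lo, hi, mask);
        return centroid(mask);
    }
}
```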

Once the coordinates of the spider and target are known, a simple algorithm is used to make the spider home in on the blue marker. This is complicated by the fact that the spider’s orientation can’t be determined from the image alone (the marker is just a round dot), so I retain the spider’s track as it moves and use that to determine its heading. The track heading and the direction-to-target vector are then compared to tell whether a left or right rotation command is required, but, as you can see from the videos below, the direction control is very crude.
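A sketch of that steering logic is below, again simplified from what I actually run: the command names and the 20° tolerance are hypothetical placeholders (the real commands go out over the IR link described in the previous post), but the idea of comparing the track-derived heading against the direction-to-target vector is the same.

```java
import org.opencv.core.Point;

public class SpiderSteering {

    private Point previous; // last known spider position, used to infer heading

    /** Decide the next command for the spider, given its current position
     *  and the target position, both in camera (pixel) coordinates. */
    String nextCommand(Point spider, Point target) {
        if (previous == null) {
            previous = spider;
            return "FORWARD"; // no track yet, so move to establish a heading
        }

        // Heading vector from the spider's track, and vector to the target.
        double hx = spider.x - previous.x, hy = spider.y - previous.y;
        double tx = target.x - spider.x,  ty = target.y - spider.y;
        previous = spider;

        // Signed angle between the heading and the direction to the target.
        double angle = Math.atan2(ty, tx) - Math.atan2(hy, hx);
        while (angle <= -Math.PI) angle += 2 * Math.PI; // normalise to (-pi, pi]
        while (angle >   Math.PI) angle -= 2 * Math.PI;

        // Roughly on course: keep going. Otherwise rotate towards the target.
        // (Image y runs downwards, so a positive angle is a clockwise turn.)
        if (Math.abs(angle) < Math.toRadians(20)) return "FORWARD";
        return angle > 0 ? "RIGHT" : "LEFT";
    }
}
```

One consequence of inferring heading from the track is that the estimate is only valid while the spider is actually moving, which is part of why the direction control looks so crude in the videos.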

The following videos show the system in action:

Links:

Martin Austwick’s Sociable Physics Blog: http://sociablephysics.wordpress.com/

Smart Cities Event at Leeds City Museum: http://www.geog.leeds.ac.uk/research/events/conferences/smart-cities-bridging-the-physical-digital/