
Sneak Peek at the SR2013 Robots

Every year we design a new game for the next batch of Student Robotics teams. This design process is always tricky, and we’ve learnt a lot from our past mistakes. The game has to be well-matched with what the teams can realistically achieve with the resources and time they have. It’s a challenging problem, and the rapidly changing world of cheap hardware that teams can load onto their robots makes it even more interesting.

Since our teams are made up of 16-18 year-olds, it’s pretty standard for them to leave building their robots until the last few weeks before the competition. This means that we don’t really start to see whether we’ve set the game at the right level until just before the competition.

We’re two weeks away from this year’s competition. Over the last couple of weeks we’ve seen some reassuring progress from the teams. They’re definitely meeting the challenge we’ve set them! I thought I’d share some videos from a few of the teams here.


This video has got to be the best one I’ve seen so far this year. QMC manage to align with and pick up a token, before placing it on a pedestal. I must remind you that these robots are entirely autonomous — there’s no remote control here! In this video, QMC also manage to completely avoid a common failure we see in robot testing: they don’t keep picking up their robot to reorient it towards a token. They leave it to do its own thing. It sounds like a reasonably obvious thing to do, but we see this happen so often!


Team MAI have done some similarly impressive work. The video below shows their robot locating a token, and then picking it up using a sucker mounted on top of their awesome scissor lift. Their robot then carries the token over to a target location. Really cool.


So GMR decided to be the first SR team to build and enter a hovercraft into the competition. It’s a very exciting robot and the GMR guys are certainly very motivated to get it to go. Here’s the first time they got it to hover:

A list of this year’s teams can be found on our website.

Posted at 5:26 pm on Friday 29th March 2013

Finding lens distortion parameters

I’ve recently found myself needing to remove some barrel distortion (a form of radial distortion) from some images. “fulla” is a tool that comes with hugin that will correct both radial distortion and chromatic aberration (which is where different wavelengths/colours are distorted by different amounts). fulla seems to be quite good, but in order to use it, you will need to know the coefficients to feed it for your particular set-up.

There’s a tool called PTOptimizer for working out the coefficients to use. I found its documentation somewhat cryptic, and so I thought I’d attempt to save some other people some time by describing what I did to calculate the coefficients to pass to fulla.

  1. Take a calibration photo. I drew a grid of straight lines on a piece of A3 paper and photographed it with my camera. It doesn’t matter what orientation the lines are in, just that they’re straight in the real world. You’ll want your lines to cover your image reasonably well.

  2. Trace the lines in the gimp. Various other websites will talk about using hugin, or some other tool to do the tracing. In the end I found it easiest to draw a path in gimp over each line, then export all the paths from gimp (right click on a path and click “Export path…”).

  3. Convert your trace into a “PTO” file. PTOptimizer needs to be fed these paths along with some other parameters in a file of the format described here. I wrote a short python script to convert the SVG that gimp exported into a PTO file. An important piece of information that took me forever to work out was how to encode the paths into “c” lines properly. Each c-line only takes two control points (the points from the lines you drew earlier), so there should be multiple c-lines per line from your image. These c-lines are grouped together by their type. The type is encoded in the “tN” field of the c-line, where N is an integer of 3 or above.

    For example, here are two sets of c-lines describing two calibration lines:

    # First calibration line:
    c n0 N0 x937.55 y2301.82 X936.36 Y2268.73 t4
    c n0 N0 x936.36 y2245.09 X935.55 Y2200.09 t4
    c n0 N0 x938.45 y278.27 X939.36 Y230.73 t4
    c n0 N0 x940.55 y184.55 X941.91 Y105.82 t4
    # Second calibration line:
    c n0 N0 x2020.45 y121.91 X2022.55 Y170.55 t5
    c n0 N0 x2026.18 y239.73 X2029.45 Y309.27 t5
    c n0 N0 x2100.56 y2091.75 X2102.31 Y2141.00 t5
    c n0 N0 x2104.44 y2202.69 X2105.62 Y2251.25 t5

  4. Fill out the PTO header. There are some other pieces of information that PTOptimizer needs. The docs for this are mostly sufficient. Make sure you set a, b, and c to be non-zero, otherwise you’ll find (as I did) that the optimiser doesn’t get anywhere. Here’s an example header:

    # The width, height, and field-of-view of our image
    p f0 w3000 h4759 v94.04
    # Input image information -- importantly a, b, c are non-zero
    i w3456 h2304 f0 v94.04 a0.001 b0.001 c0.001
    #We want the a, b, and c parameters to be optimised:
    v a0 b0 c0

  5. Run PTOptimizer against the PTO file. It’ll do a load of things, and then write its results onto the end of the PTO file you gave it. Dig around in there to find a line that looks like this:

    # Polynomial Coefficients: a   -0.000923 (*); b   -0.013905 (*); c   -0.004376 (*)

  6. Calculate the d coefficient. According to the panotools lens correction model, this should be 1-(a+b+c).

  7. Construct a fulla command line. Take the a, b, c, and d values you’ve got and stick them into fulla:

    fulla -g -0.000923:-0.013905:-0.004376:1.019204 myfile.jpg -o somewhere.jpg

    I’d run your calibration image through it to see how straight the lines get.
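The two fiddly calculations in the steps above — pairing traced points into c-lines (step 3) and deriving d (step 6) — are easy to get wrong by hand, so here’s a small Python sketch of both. This isn’t my original script; `points` is a hypothetical list of (x, y) co-ordinates from your gimp trace, in order along one line.

```python
def c_lines(points, t):
    """Pair consecutive traced points into PTOptimizer c-lines.

    Each c-line holds exactly two control points; all the c-lines
    sharing one tN value (N >= 3) are optimised as one straight line.
    """
    pairs = zip(points[0::2], points[1::2])
    return ["c n0 N0 x%s y%s X%s Y%s t%d" % (x, y, X, Y, t)
            for (x, y), (X, Y) in pairs]

# Step 6: the panotools model constrains d so the corrected image
# keeps its overall scale: d = 1 - (a + b + c).
a, b, c = -0.000923, -0.013905, -0.004376
d = 1 - (a + b + c)
print("fulla -g %f:%f:%f:%f myfile.jpg -o somewhere.jpg" % (a, b, c, d))
```

With the coefficients from the example above, this prints the same d value (1.019204) that appears in the fulla command line in step 7.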

For chromatic aberration correction, see this page.

Job done.

Posted at 12:32 am on Wednesday 5th December 2012

Laser Your Knitting Infrastructures

If you’re a knitting addict, like Elisabeth, then you might find yourself having to deal with yarn in big bundles called hanks. You might come across hanks if you’re making your own yarn, and some yarn vendors only sell it in this form. The hank isn’t a particularly good format for knitting from, as the wool will tangle quite quickly. You want your yarn to be in a ball.

You can wind yarn balls by hand, but you’ll quickly get very bored of this. So, you’ll want a ball-winder. You can buy these, but you can of course make your own (you can even 3D print one). Elisabeth had one she’d made that could be driven by a belt connected to a spinning wheel. Unfortunately, it was a bit rickety, which I believe meant that it could only actually be rotated by hand else the belt would fall off. I harvested the bearing from an IKEA lazy susan and reduced the ricketiness of the ball winder such that it was better suited to higher-speed operation. We also motorised it. As luck would have it, I’d already got a motor with a pulley and compatible belt from my recent temperature-controlled oven project.

Now that we’d got a high-speed ball-winder, some new issues appeared. It now took three people to perform a ball-winding operation. One person would hold and untangle the hank, one would manage the tension and position of the yarn as it entered the winder, and the other would observe the other two and manage the speed of the winder appropriately. The solution to this problem in knitting-land is to use a structure called a swift. A swift holds a hank of yarn such that it can be easily unwound. It’s essentially a variable-diameter wheel around which the hank is wrapped. When it’s loaded with a hank, one just needs to pull the end of the yarn, and the swift will rotate allowing the yarn to be unwound. This avoids all the tangling of attempting to do it by hand because the hank maintains its shape until it is completely unwound.

So, we decided to build a swift. More specifically, we decided to laser-cut a swift. Johannes and I came up with a conceptual design for the swift relatively quickly. Johannes did the CAD (he seems to enjoy the QCAD interface more than I do). We’ve uploaded the design for our swift to thingiverse so others may make it should they wish.

A nice bit of the swift’s design is the use of cable ties as hinges for the top-most joints. Although they may look like they’re a bit clunky, they’re actually surprisingly smooth!

Here’re the swift and baller in operation:

Posted at 5:16 pm on Thursday 29th December 2011

Temperature-Controlled Wedding Cake Baking

My friend Elisabeth was recently responsible for baking a layer of a wedding cake. Wedding cakes are not like normal cakes. They tend to be quite large and regarded as quite important. So Elisabeth did a couple of practice runs of baking this cake, which was good for the occupants of this house as there was much yummy cake to go round. During these practice sessions it became apparent that the temperature control of our kitchen’s cheap gas oven was not suited to mission-critical situations. Using hot water causes our on-demand boiler to come on, which lowers the gas pressure reaching the oven, resulting in it cooling down. I had a shower during the second attempt, which resulted in the pressure going so low that the oven turned itself completely off. It turns out that, like electronic components, cakes also have temperature profiles that should be stuck to when processing them!

Ideally the oven would control the amount of gas being burnt based on the difference between the temperature within the oven and the desired temperature. However, cheap ovens are cheap. They work on the assumption that the gas supply has a constant pressure, and the room temperature is constant over all time. Furthermore, again because they’re cheap, these ovens do nothing in response to differing thermal masses. It seems that our cheap oven’s “temperature” knob just adjusts the target pressure of a gas pressure regulator. The relationship between the position of the temperature knob and the gas pressure is extremely non-linear, and bears much resemblance to its close relative, the all-too-familiar and highly-sensitive shower temperature control.

I decided that what was needed was a better temperature controller for the oven. This was quite clearly a situation demanding the fusion of some amusing mechanical hacking and electronics. Pulling the knob off the oven revealed a brass rod with two flats on its sides. I decided that the easiest way for me to motorise this was to mount a pulley on it. With Jeff having recently moved to London, I’d been left particularly latheless and so scoured the internets for suitable pulleys. The online shopping experience for mechanical parts isn’t anything like the world of online shopping for electronic parts. It is a world pretty much completely devoid of parametric search, and is full of websites that feel like they’re still being served from the same BBC Micro they were designed on… (That’s not to say that online electronic component shopping is a gold standard however — it’s got an awful long way to go, and unfortunately competition is not fierce within it.) Also, one gets the disconcerting sense that there are still a lot of fax machines involved in the mechanical parts world. After pushing myself through the HPC Gears website, which still very much adheres to ye olde “the paper catalogue is the one true way” line of thinking (sigh), I found the pulley I was looking for. I was pleasantly surprised to find that I could order a single pulley, rather than 9000.

So, it was Sunday evening. The cake needed to be baked on Wednesday. Everything I ordered at this point would arrive on Tuesday (because ubiquitous 24/7 UAV-based shipping networks still belong to the future…). So I spent a couple of hours making sure I’d either got or ordered all the bits I needed. This is the final list of things that I ended up using:

Luckily, the postal service functioned well, and everything did indeed arrive by Tuesday. So on Tuesday evening, Johannes and I hacked the mechanical situation together:

Next came the electronic and software side of things. This was reasonably straightforward. The MAX6675 has a simple SPI interface, which we talked to using the FTDI adapter. Hacking this together at high-speed, I decided to just use the bit-bang mode of the adapter. This is the most conceptually simple mode, as one just sets, clears, and reads pins on the adapter. I could have got the FTDI module to perform all the clocking of the MAX6675 itself, but this additional complexity really wasn’t worth it for an unneeded efficiency/speed improvement.
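The decoding side of that SPI conversation is simple enough to sketch. This isn’t our actual script — assume `raw` is the 16-bit value clocked out of the MAX6675 by the bit-banged FTDI pins — but the conversion to °C looks like this:

```python
def max6675_celsius(raw):
    """Convert a 16-bit MAX6675 read-out into degrees Celsius.

    Bit 2 of the frame is the open-thermocouple flag; bits 14..3
    carry the temperature as a 12-bit count of 0.25 degC steps.
    """
    if raw & 0x0004:
        raise IOError("thermocouple appears to be disconnected")
    return ((raw >> 3) & 0x0FFF) * 0.25
```

The 0.25°C resolution is more than enough when you’re bang-banging a gas oven.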

Interfacing with the motor controller didn’t take too long. The documentation about the Pololu motor controller seems unnecessarily convoluted to me. All I wanted was a table of the different commands that I could send to it over its USB connection. Pololu give a long manual for the board, in which the different commands are spread out, and interlaced with screenshots of various graphical programs that I had no interest in using.

Initially, I had plans to add an AS5030 magnetic rotary encoder to provide feedback on the position of the temperature knob. It turned out the oven knob’s end-stops were robust enough for us to drive into them a bit, so we could easily get to “min” and “max” without feedback. It was about three in the morning by the time we’d got the mechanical, motor control, and thermocouple situation configured. Pressure to ship combined with sleepy haze led to us abandoning the AS5030 plan, which was OK because we could just oscillate between min and max to control the temperature. This is known as bang-bang control. Our testing convinced us that we could keep the oven within something like ±5°C, which was definitely accurate enough for what we wanted.
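Bang-bang control needs very little code. Here’s a sketch of the decision logic — the dead band width is illustrative, not the tuning we actually used on the night:

```python
def knob_position(temp_c, target_c, last, band=2.0):
    """Pick 'max' or 'min' for the oven's temperature knob.

    The dead band stops the motor chattering back and forth when
    the temperature sits right on the setpoint.
    """
    if temp_c < target_c - band:
        return "max"   # too cold: full gas
    if temp_c > target_c + band:
        return "min"   # too hot: back right off
    return last        # close enough: hold the last position
```

The control loop just reads the thermocouple every few seconds, calls something like this, and tells the motor controller to drive the knob into the appropriate end-stop.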

Whilst Elisabeth was busy mixing the cake, the control loop was set to heat the oven to 160°C and keep it there. The mixture was put in the oven, and the system was left to run for two and a half hours. By “left” I mean that Johannes had a constant eye on it throughout the baking procedure — this was an important cake! All was well, and the cake baked successfully:

Bring On The Graphs!

Of course, whilst the cake was baking, we logged all the available data to disk (you can download it too… for some reason… find it at the bottom of this post). Below is a plot of the temperature of the oven read through the thermocouple throughout the whole process. The black line is the temperature, and the red shaded areas show where the knob was set to “max”. The knob was set to “min” in the white areas.

So you should be able to see from the above graph that the cake stayed around the temperature it should have throughout the whole process. There are a couple of more interesting events in there. If we zoom into the left-hand end of that graph (at the beginning of the baking experience), we can see what happened when the oven door was opened to put the cake in. Here you can see how quickly an oven will cool down when it is opened.

The other interesting bit is when the cake was removed from the oven. The control and logging stuff was turned off for a short period after the removal, so there’s a bit of a gap in the data as well:

So, now for some analysis of the above data. Here’s a histogram of the temperatures sampled during the periods of time when the oven door wasn’t open and the cake was installed:

Some stats from that:

So, a range of 11.5°C. Not bad for a night’s hacking. Elisabeth decorated the layer of cake, and transported it to the wedding, where it was combined with two other layers of cake baked by different people. News on the grapevine is that Elisabeth will be posting some photos on her blog any day now ;-)

There are two repositories of source related to this project:

And the data is available in this CSV file. The columns are as follows: time (seconds), temperature (°C), and whether the knob was on max or not. Enjoy.

Posted at 10:29 pm on Wednesday 28th December 2011

Student Robotics Teams

A quick link to the Student Robotics team websites that have popped up over the last few weeks:

Posted at 12:30 pm on Wednesday 26th October 2011

Speeding up libkoki

The result of Chris’s internship was a library called libkoki. This library hasn’t really been released yet properly to the internets — we’re working on it, but various other things have had a higher priority, like shipping Student Robotics stuff etc. You’ll be hearing more about it soon… The quick summary is that libkoki is a library for finding visual markers in images.

The important thing for Chris during his internship was to make libkoki robust. It is more robust than ARToolkit, which is a popular choice, in a few ways. Its markers feature CRCs in their patterns, which prevent things like windows and other rectangular structures from being misidentified as markers. libkoki also uses adaptive thresholding so that it can detect markers in heterogeneous lighting conditions, such as when one side of a marker is better lit than the other.

Since much of the focus has been on getting libkoki robust (and I think rightly so) and into a working state (which it is in), not much time has yet gone into getting it to perform well. We’re now shipping it as the main part of the Student Robotics vision API, and so we’re quite interested in getting it to perform well.


So to get things to perform well, one needs to know where to focus one’s efforts. In the past I’ve used gprof, which is OK, but has various limitations — like not being able to profile shared libraries, which is what libkoki is. I’d read some things about Linux perf recently, so I decided to try it out. (I’d link to it, but their wiki seems to have been down for a while…) In conclusion, perf is pretty amazing. Just running `perf top` gives a summary of which functions are currently taking up CPU time for the whole system, including userspace and kernel time. No binary modifications required. As long as debug symbols are available, it’ll tell you function names.

We use libkoki on a BeagleBoard, which is an ARM Cortex-A8 platform. We ship a slightly old 2.6.32 kernel (it does the job, and developer hours are limited…), in which perf did not support ARM platforms. It turns out that our BeagleBoard’s kernel image has OProfile support compiled in. OProfile appears to do similar things to perf, so I used that.

So, I profiled some simple code that used libkoki and got the following profile from oprofile:

root@beagleboard:~# opreport -l pyenv/lib/libkoki.so 
Overflow stats not available
CPU: ARM V7 PMNC, speed 0 MHz (estimated)
Counted CPU_CYCLES events (Number of CPU cycles) with a unit mask of 0x00 (No unit mask) count 100000
samples  %        symbol name
1955     56.0493  koki_label_image
1018     29.1858  koki_threshold_adaptive
203       5.8200  koki_v4l_YUYV_frame_to_RGB_image

So, much time was spent in koki_label_image. This function splits up a black-and-white thresholded image into connected regions. On closer inspection of some annotated source code that oprofile gave me, it turned out that koki_label_image was spending a lot of time when the code discovered that two regions were in fact the same. After changing the way that libkoki goes about aliasing one region’s label with another, this changed to:

samples  %        symbol name
2326     40.8572  koki_threshold_adaptive
2208     38.7845  koki_label_image
427       7.5004  koki_v4l_YUYV_frame_to_RGB_image

So the 1.5 seconds of processing time on the BeagleBoard has now been reduced to 0.78 seconds. Fun times, but still some way to go!
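For the curious, that label-aliasing change is essentially the classic union-find trick. This is a sketch of the idea rather than libkoki’s actual code: instead of rewriting every pixel that carries the old label when two regions turn out to be the same, you just re-point one canonical label at the other and resolve the aliases lazily.

```python
def find(parent, label):
    # Chase alias links to the canonical label, flattening the
    # chain as we go so later lookups are near constant time.
    root = label
    while parent[root] != root:
        root = parent[root]
    while parent[label] != root:
        parent[label], label = root, parent[label]
    return root

def union(parent, a, b):
    # Two labels discovered to belong to one region: alias one
    # root to the other in O(1), rather than relabelling pixels.
    parent[find(parent, b)] = find(parent, a)
```

Merging regions this way turns the expensive case — discovering late that two big regions are one — into a single pointer update.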

Posted at 9:01 pm on Saturday 22nd October 2011

Dealing with whitespace

How to use whitespace is a contentious issue amongst programmers. If they spent their time arguing about software architecture instead, I’m sure software would be years ahead. I guess it’s because whitespace is a conceptually easy thing to argue about and be “creative” with… *sigh*.

When editing prose in things like LaTeX files, I use long lines. Each paragraph that I write is on a single line (avec emacs visual-line-mode). This is of course irritating if one just uses normal diff. The solution I used to use was dwdiff. However, `git diff` has recently developed a “--word-diff” option.

--word-diff turns an output like this:

-The cat sat on the hat.
+The cat sat on the mat.

into this:

The cat sat on the [-hat.-]{+mat.+}

(and if you have git’s color.ui config option set to true, it looks even better.)
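A throwaway repository is enough to see it in action (the file name and commit message here are just for the demo):

```shell
git init -q demo && cd demo
echo "The cat sat on the hat." > essay.txt
git add essay.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "draft"
echo "The cat sat on the mat." > essay.txt
git diff --word-diff
```

The last command shows the `[-hat.-]{+mat.+}` style of output rather than a pair of full-paragraph lines.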

Posted at 2:40 am on Friday 21st October 2011

Firefox + Gnome Keyring

I’m regularly irritated by the Firefox “enter your master password so you can continue to use the internet” prompt. Every once in a while I’d google for “firefox gnome keyring” to see if I could store my Firefox passwords in gnome keyring yet. Recently I struck gold.

I recommend this firefox extension. It makes your life less stressful.

Posted at 8:56 pm on Tuesday 13th September 2011

Playing with footprints and constraints

I’ve been using gEDA for a couple of years now. It’s a great project, but I think there are a lot of ways it could be improved. One of the biggest annoyances for me is the process for designing PCB footprints. The generally recommended way of creating footprints is to manually edit the absolute co-ordinates of a footprint’s features in a text-editor. This works, but it leaves a lot to be desired. Making a mistake in calculations at the beginning of this process can lead to having to completely redo the entire thing. I have previously spent hours on simple footprints because of this. I think there are even stronger arguments for using a different approach. Let me explain…

If you pick up a component datasheet, you’ll find that it’ll give you a diagram that looks something like this:

In the datasheet there’s also a table in which you can look up what each dimension means. For example, “E” might be 2.8mm, in which case the diagram above tells you that there are 2.8mm between the tips of the pins. It is easy for you to understand what that means.

Now imagine that you live in another dimension. In this alternative reality, the footprint diagrams in a component’s datasheet look something like this:

The arrows indicating the relative position of one feature from another have gone. Instead, the chip manufacturers in your universe decided that it would be a good idea to tell you the absolute co-ordinates of the features of the chip with respect to some origin. I’ve omitted the numbers in the above diagram, but I’m sure you can imagine that each red point could be associated with a co-ordinate that might be written in a table or written next to each point. To work out the distance between the tips of the pins, you now have to do a small sum in your head.

It’s harder to understand the size of components in this alternative reality than our own. I think datasheets specify feature sizes as they do, with arrows indicating relative distances, because it is easier for people to perceive what this means. Despite this, many PCB tools require that one positions the parts of components in an absolute fashion. If it’s gEDA PCB you’re using then you need to know the co-ordinates of each of the footprint’s features, as presented in the alternative datasheet reality. If it’s the popular closed-source EAGLE that you’re using, then you graphically position the features on a grid in an absolute fashion — still essentially requiring the non-intuitive information from our parallel universe. Creating footprints in these CAD tools requires one to perform a load of manual calculations to translate what the datasheet says in one ‘language’ into that of the CAD tool.

The Beginnings of a Solution

Armed with my frustration at hours lost to lots of manual calculations, I decided to lose a few hours trying to create something that would make designing gEDA PCB footprints considerably easier, faster, and less error-prone. This git repository contains what I came up with:

git clone https://bitbucket.org/rspanton/pcbcons.git

It’s a functional proof-of-concept. Don’t expect things like the API to remain constant, and certainly don’t expect me to maintain it in any way right now! This git repository contains two python modules which one imports:

#!/usr/bin/env python
import pcons, render_pcb

In this system there are two genres of thing:

  1. Features of footprints, such as pads and holes.
  2. Constraints that define how these are positioned relative to each other.

Let’s create a new design and put a pad and a hole in it: (oh, and for convenience let’s add this import too)

from decimal import Decimal as D

#Create the design
des = pcons.Design()

p = des.add_pad( size = ( D("0.5"), D("1") ),
                   name = "1" )
h = des.add_hole( D("0.6") )

Now we’ve got the pad and the hole, we need to explain the position of one with respect to the other. So, we state that the bottom-left of the pad is aligned with the centre of the hole in the x-axis, and there’s 2mm between them in the y-axis:

des.cons += [
    pcons.FixedDist( 0, p.bl.x, h.pos.x ),
    pcons.FixedDist( D(2), p.bl.y, h.pos.y ),
]

Since we need to provide some translation between the relative co-ordinate space and absolute space, we then fix an arbitrary point from our design to the origin:

# Set the bottom-left of the pad to be the origin
des.set_origin( p.bl )

Now all that’s left for us to do is to tell pcbcons to resolve all the constraints it’s been given, and then get it to render the output into a PCB footprint for us:
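(The original code listing for this step has gone missing; the two calls below are my guess at what it looked like — the method names are hypothetical, so check the pcbcons source for the real ones.)

```python
# Solve the constraint system, then write out a gEDA footprint.
# NOTE: resolve() and render() are guesses at the pcbcons API.
des.resolve()
render_pcb.render( des, name = "Thing" )
```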


Running the resulting program gets the following output:

Element[0x00 "Thing" "" "Thing" 0 0 0 0 0 100 0x00000000]
	Pad[ 0.25mm -0.25mm 0.25mm -0.75mm 0.50mm 0.1mm 0.1mm "1" "1" "square"]
	Pin[0mm 2mm 0.6mm 0mm 0mm 0.6mm"" "" "hole"]

This is what a gEDA footprint file looks like. Let’s pipe that into a file and have a look at it in PCB:

OMG HOLY SMOKING PONIES! It did what we asked it to. The centre of that hole is 2mm below the bottom-left corner of the pad. Now it may seem like that was a lot of work for such a simple footprint. This is true. That footprint only contains a hole and a pad. When it comes to much more complex footprints there are lots of constraints to specify either way, and that’s where this approach pays off: if one gets a number wrong early in the design stage, one just has to change that one number rather than recalculate all the others.

So, this is essentially a quick hack at the moment. It’s got quite a few limitations. For example, pads cannot be rotated and must be aligned to the axes, silkscreen is not yet supported, nor are copper and soldermask clearances etc. I think the API could be cleaner in some ways that I don’t yet know. pcbcons already contains a helper function for creating a set of pads and the constraints needed to get them into a line. By attaching constraints to just one of the pads from that set, the whole set can be moved around easily.

My ultimate footprint design dream is for a graphical CAD tool in which one just draws the constraints onto the various footprint entities. Perhaps there is a future in creating a file-format that can spec’ the constraints between entities; this could then be edited either in a text-editor or graphically. Patches and discussion welcome.

Update 2016/03/30: Gitorious is no more, so I have moved the repository over to bitbucket, and updated the git clone URL provided above.

Posted at 3:36 am on Monday 8th August 2011

Some gnome-shell bashing

So it seems this is a week for gnome-shell bashing. I’ve been using gnome-shell on my laptop since Fedora 15 came out. That’s just over two months now. Whilst I use my laptop a reasonable amount, it doesn’t represent all of my computer usage. There are three other desktops in my life — one in the lab, one at home, and another that one might call a “media centre” desktop in the lounge. The laptop and media PC got updated to F15 as soon as it arrived, and I upgraded the other desktops about a week ago. I think the only machine that gnome-shell has been close to pleasant to use on is the lounge PC. This isn’t really saying much though — it’s only really used to run MythFace, Firefox, and VLC. Certainly nothing like real desktop use, which involves much higher numbers of windows from a variety of applications, some of which will stay around for weeks as I move back and forth between the various activities that I do.

I find it terribly difficult to work out what is a bug and what is a feature in gnome-shell. Here are some things that perplex me:

I am finding it a lot slower to get things done under gnome-shell. There’s a lot of animation and stuff going on, and managing more than three windows is a challenging task. I think two months of daily use should be enough to adjust to something — especially something that claims to have been designed! Of course, gnome is a release-early release-often outfit, so hopefully it’ll improve in the future.

When I started using Linux about 8 years ago, I originally used KDE. A year and a half later, around the time I ditched the Windows partition from my PC, I switched to using Xfce. Gnome 2.x was around at that time, but it was quite slow to load, and didn’t really offer me anything extra that I needed. I could run all the same applications under Xfce as Gnome, I just didn’t have to wait as long for my laptop to boot. Then Gnome 2’s loading speeds got much faster, and so I switched over to that. I’ve used Gnome ever since then. Maybe it’s time to go back to Xfce until gnome-shell has had a significant overhaul, or has been completely replaced by something better.

Posted at 1:40 am on Friday 5th August 2011

Site by Rob Gilton. © 2008 - 2019