Jay Caines-Gooby

Back in 5 mins

Archived Posts


Rendering bitmaps from PDFs at non-native sizes

Tuesday November 17, 2009 @ 04:32 PM (GMT)

Charanga’s primary instrumental teaching resources use a synchronised score and animated instrument to indicate which notes are played during the piece.

Each of the resources begins life as a Sibelius arrangement, from which we use both the MIDI and score output. We've got close to a thousand of these interactive pieces, and with any job of this size, scriptable tools can really help speed up the production process.

Pete, our musical arranger, asked me if there was a quicker way for him to generate the PNGs needed by the interactive tool. Up to now he’d been exporting them directly from Sibelius.

Sibelius can batch export PDFs, so converting from these was definitely the way to go. The main problem with a straight ImageMagick convert command, like:

  convert score.pdf score.png

is that the native size of the PDF probably isn't the correct size for the PNG, and when you try to force the correct size with a resize:

  convert -resize 506x517 score.pdf score.png

You end up with a poor bitmapped image, because you might be scaling up from an effectively smaller size; e.g. in my score example the native size of the PDF is only 271×276 and I'm trying to go to almost twice the size (506×517). Hence the poor quality of the resulting PNG.

What's required is to up the size of the PDF before the conversion takes place; it's a vector format, after all, so there'll be no loss of quality at a larger size. A simple way to do this is to up the DPI of the PDF: ImageMagick defaults to 72DPI unless told otherwise. Crank up the density (DPI) for a bigger image:

  convert -density 600 -resize 506x517 score.pdf score.png

And the resulting PNG is much more acceptable.
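If you'd rather not guess at a density like 600, you can derive one from the PDF's native size. This is only a rough Ruby sketch, assuming ImageMagick's 72dpi default; the convert command is just built as a string here:

```ruby
DEFAULT_DPI = 72.0 # ImageMagick renders PDFs at 72dpi unless told otherwise

# Density needed so the PDF rasterises at or above the target pixel size,
# meaning -resize only ever scales down (no blurry upscaling).
def density_for(native_w, native_h, target_w, target_h)
  scale = [target_w / native_w.to_f, target_h / native_h.to_f].max
  (DEFAULT_DPI * scale).ceil
end

# My score example: native 271x276, target 506x517
dpi = density_for(271, 276, 506, 517)
cmd = "convert -density #{dpi} -resize 506x517 score.pdf score.png"
```

You could get the native size programmatically with ImageMagick's identify and feed it straight in.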

Tweeting Brighton and Hove real-time bus departures

Tweet bird courtesy of Gopal Raju, bus photo Brighton & Hove Bus and Coach Company

Brighton & Hove have real-time departure boards showing when the next bus is due at most of their city-centre bus stops. There’s also good Google Maps integration, showing bus stops and departure times.

A recent addition to the city's transport infrastructure is the ability to text a bus stop code from your mobile and receive back, by text, a list of the buses due at that stop. There's a 25p charge for the service, however, and as I often just want to quickly check what time I should leave work or home to catch the next bus, I thought I'd write a Twitter bot I could tweet for the information instead.

There's an excellent Ruby library for writing Twitter bots called twibot, which uses a Sinatra-inspired DSL to match incoming tweets against routes that should respond to them.
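Twibot's real DSL differs in its details, but the Sinatra-inspired idea can be sketched in plain Ruby: declare routes as patterns, then dispatch each incoming tweet to the first handler whose pattern matches. The class and method names here are made up for illustration:

```ruby
# Hypothetical route matcher, illustrating the pattern-dispatch idea.
class BotRoutes
  def initialize
    @routes = []
  end

  # Register a pattern and the block to run when a tweet matches it.
  def route(pattern, &handler)
    @routes << [pattern, handler]
  end

  # Run the first matching handler, passing it the regex captures.
  def dispatch(text)
    @routes.each do |pattern, handler|
      if (m = pattern.match(text.strip))
        return handler.call(*m.captures)
      end
    end
    nil
  end
end

bot = BotRoutes.new
# stop code plus optional route number, e.g. "brimdmt 5b"
bot.route(/\A(\w+)\s+(\w+)\z/) { |stop, line| "lookup #{stop}, filter to route #{line}" }
bot.route(/\A(\w+)\z/)         { |stop| "lookup #{stop}" }
```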

Data, data, data

The biggest hurdle in getting @bustweet up and running was data. I needed a list of all the bus stop SMS text codes – e.g. brimdgm – and the corresponding bus routes that service each stop. I knew the data had to be there somewhere, because the slippy maps show you each bus stop, its text code and a list of the buses that stop there. Time to dig out Firebug.

It took me a couple of hours, but I was pretty certain that I could build something.

It turns out that the text codes are known as NaPTAN codes, and although the codes themselves are assigned by the local authority, they're part of a nationwide initiative and a de facto standard. Indeed, I'm certainly not the first person to have a go at using the NaPTAN data; James Wheare built LiveBus.org from the various local authority transport data that's available.

And once you know where to look, you can get a whole pile of JSON (wrapped up as JavaScript) sent to you. JSON for services:

  "var response = {
    serviceId: \"29\",
    serviceName: \"11X\",
    serviceDescription: \"Hove Town Hall - Kings House - Thistle Hotel\",
    serviceAbbreviatedName: \"\",
    routes: [{
      routeId: \"65\",
      routeName: \"11X Hove - Brighton\"
    }, {
      routeId: \"66\",
      routeName: \"11X Brighton - Hove\"
    }]
  }"

And routes and stops:

  "var response = {
    routeid: \"65\",
    stops: [{
      stopId: \"6915\",
      stopName: \"Hove Town Hall\",
      operatorsCode1: \"06915\",
      operatorsCode2: \"06915\",
      gpsStopName: \"Hove Town Hall N\",
      naptanCode: \"brimpmg\",
      Lat: \"50.828602\",
      Lng: \"-.170563\"
    }, {
      stopId: \"6905\",
      stopName: \"Kings House\",
      operatorsCode1: \"06905\",
      operatorsCode2: \"06905\",
      gpsStopName: \"Kings House\",
      naptanCode: \"brimpjw\",
      Lat: \"50.824944\",
      Lng: \"-.168752\"
    }, {
      stopId: \"6079\",
      stopName: \"Brighton Centre\",
      operatorsCode1: \"06079\",
      operatorsCode2: \"06079\",
      gpsStopName: \"Brighton Centre\",
      naptanCode: \"briamta\",
      Lat: \"50.820858\",
      Lng: \"-.145890\"
    }, {
      stopId: \"6913\",
      stopName: \"Thistle Hotel\",
      operatorsCode1: \"06913\",
      operatorsCode2: \"06913\",
      gpsStopName: \"Thistle Hotel\",
      naptanCode: \"brimpga\",
      Lat: \"50.819883\",
      Lng: \"-.140137\"
    }]
  }"

I came up with a nice simple schema of three tables – services, routes and stops – and populated them with data via a bit of shell-scripted curling. As you can see from above, the JSON needed cleaning up too; I wanted to parse it with Ruby, not JavaScript, and even then the keynames weren't validly quoted. A quick pass through sed fixed that, and meant I now had details on the 101 bus routes and the 7947 bus stops (who knew there were so many!) that serve them.
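The same cleanup can be sketched in Ruby itself: strip the JavaScript wrapper, then quote the bare key names so the payload parses as strict JSON. The regexes below are illustrative, not the exact sed expressions used, and would need more care for values containing colons:

```ruby
require 'json'

# Hypothetical cleanup: remove the "var response = ... ;" wrapper and
# wrap bare key names in double quotes so JSON.parse accepts the result.
def clean_response(raw)
  json = raw.sub(/\Avar\s+response\s*=\s*/, '').sub(/;\s*\z/, '')
  json.gsub(/(\w+):/, '"\1":')
end

raw  = 'var response = {serviceId: "29", serviceName: "11X"};'
data = JSON.parse(clean_response(raw))
```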

The final part of the puzzle was getting the actual real-time data for a particular stop. The web-based departure boards are nice and self-explanatory via querystring parameters, so I just needed to screen-scrape the result of any tweeted enquiry.

First I convert the text name of the stop to the stopId value the departure board needs – a quick join across the database schema – then the resulting web page gets scraped with hpricot and the bus times are tweeted back to the original enquirer.
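The lookup step amounts to something like this – here with an in-memory hash standing in for the real stops table. The naptanCode and stopId values are taken from the JSON above; the structure is simplified for illustration:

```ruby
# Stand-in for the stops table: NaPTAN text code => stop details.
STOPS = {
  "brimpmg" => { :stop_id => "6915", :name => "Hove Town Hall" },
  "brimpjw" => { :stop_id => "6905", :name => "Kings House" },
}

# Resolve a tweeted text name (NaPTAN code) to the departure board's stopId.
def stop_id_for(naptan_code)
  stop = STOPS[naptan_code.downcase]
  stop && stop[:stop_id]
end
```

With the stopId in hand, it's just a matter of requesting the departure board page with that querystring parameter and scraping out the times.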

Give it a try! Tweet @bustweet with the text name of a bus stop, e.g. @bustweet brimdmt, or add an optional route number too, e.g. @bustweet brimdmt 5b

It’s still a little brittle and may well be up and down in the next few days, so please be a little patient.

If you've got any ideas for ways to enhance it, feel free to drop me a line at jay@gooby.org or @jaygooby, or leave a comment below.

If you’ve had problems with wp-super-cache not writing cached files into the cache/supercache/www.yoursite.com folder when running under nginx, then I think I’ve found the reason why…

There are actually two different problems here, but the first is the nginx-specific one and is a side-effect of your nginx configuration.

If you're using rewrites so you have friendly URLs like /nginx-wp-super-cache-not-writing-cache-files-solved rather than index.php?p=, and if, like me, you grabbed the nginx directives off the web somewhere, you've probably got some lines like this:

  if (!-e $request_filename) {
    rewrite ^.+/?(/wp-.*) $1 last;
    rewrite ^.+/?(/.*\.php)$ $1 last;
    rewrite ^(.+)$ /index.php?q=$1 last;
  }

The last rewrite directive sets a querystring parameter called q with the friendly URL as its value. Deep in the guts of wp-super-cache there's a test for querystring parameters, and if there are any, it won't super-cache the page. Doh! So what to do?

Change line 250 of wp-cache-phase2.php from:

if( ! empty($_GET) || is_feed() || ( $super_cache_enabled == true && is_dir( substr( $supercachedir, 0, -1 ) . '.disabled' ) ) ) {

to:

if( (count($_GET,1) > 1) || is_feed() || ( $super_cache_enabled == true && is_dir( substr( $supercachedir, 0, -1 ) . '.disabled' ) ) ) {

So rather than checking for an empty querystring, we check for more than 1 querystring parameter, because due to our nginx config, we’ll always have one called q with a value of the current URL.

The second problem, and this applies regardless of whether you’re using nginx, apache or any other httpd, is cookies.

We all know that if you're logged in to Wordpress, wp-super-cache won't save or serve you super-cached files (it just uses the regular wp-cache files instead). But during my digging I also found out that even if you visit an area of your site that requires you to log in (i.e. one that first redirects you to wp-login.php), and then don't log in, you'll still have been cookied with wordpress_test_cookie containing a value of WP Cookie check.

Again, due to the way wp-super-cache works, this will now be enough to stop it saving or serving you super-cached files, until you delete this cookie.

You can either remember to keep deleting this cookie, or you can just patch wp-cache-phase2.php as follows. Change lines 267-270 from:

if( $super_cache_enabled ) {
	$user_info = wp_cache_get_cookies_values();
	$do_cache = apply_filters( 'do_createsupercache', $user_info );
	if( $user_info == '' || $do_cache === true ) {

to:

if( $super_cache_enabled ) {
	$user_info = wp_cache_get_cookies_values();
	if ($user_info == 'WP Cookie check') $user_info = "";
	$do_cache = apply_filters( 'do_createsupercache', $user_info );
	if( $user_info == '' || $do_cache === true ) {

Better late than never

Thursday March 19, 2009 @ 03:06 PM (GMT)

Since 1999 this domain has had the title “Back in 5 mins”, so here, a whole decade later, is the inaugural post.

At the start of Dune, Frank Herbert says through his narrator Princess Irulan that “A beginning is the time for taking the most delicate care that the balances are correct”. If I tried to take the most delicate care with this post, let alone the site, I think another decade would pass before anything happened.

So as much as I'd like to write a manifesto, put some nice design in place and make everything as POSH as possible, I'm just going to worry about the content, stupid, and sort the rest out later.

To keep me honest, and to make sure I keep my promise with myself not to let this blog stagnate after my first flush of interest, I’m going to commit to a few things:

  1. I’ll be posting about some personal sites I’m just about to start building: BuildYourSiteRight, FamilyAddressBook (no links, because they’re just tumbleweed at the moment).
  2. As well as personal projects, I want to talk about some recent Wordpress work that uses a mother-lode of plugins and has some rather excellent Google Apps and Google Calendar integration, as well as some sick iCal aggregation
  3. A quick post about using Dreamhost (Dreamhost, I think I love you) to host private git repositories
  4. Finally, I really ought to write about my adventures in Ruby, seeing as it’s pretty much all I do nowadays

And what blog post would be complete without taking up an internet meme? So, to keep up with the Joneses (so to speak) and to ride the wave of Moleitausian excitement, allow me to introduce you to the things that I've made this past week or so…

thoth-plugins on github. Rather than take the obvious route of running this blog on Wordpress, I've opted to try a different blog engine, Thoth. I heart Wordpress, but it's an abusive relationship, because once I see that PHP I can't help myself. I wade on in – a bit of HTML there, a quick <?php> tag here – and I'm drunk on bad habits. The next most obvious route would be to write my own blogging engine; after all, the seminal Rails screencast is a blog in ten minutes. But that way madness lies – along with another ten-year wait for the first post – as I get sidetracked by all that an under-specified software project entails.

My giddy excitement at finally joining Github meant that I had a quick look around for a blogging system that used Ruby and let you author both posts and pages. I know, I know. I could have just used Mephisto or Typo, but I wanted something simple, that I could understand and customise easily. Thoth seemed to fit the bill nicely. And so that’s what’s driving Back in 5 Mins for the moment.

For recent projects I've been used to running a mongrel cluster behind an Nginx front-end, but Rob has been tempting me with Enterprise Ruby and Phusion Passenger. Thoth got a patch that lets it work with Rack, and so after a fresh install of this slice I'm now a happy user.

The first thing I made was a plugin for Thoth, which lets you set up redirects.

As well as some digital making, I’ve made some physical stuff too. I’m attending Scotland on Rails and I haven’t got around to ordering any Moo cards, so I thought I’d have a bash at making my own, what with the ubiquitous message and all.

A quick trip to WHSmith and a bit of moveable type action, and I'm sorted on the card front.

Copyright © 2011 Jay Caines-Gooby. All rights reserved.
Powered by Thoth.