Geocoding Photos (Mac)

Published on Saturday, December 13, 2014

I've recently started using OS X (again), and am really enjoying it (again). One Windows-only tool I found really useful is GeoSetter, which lets you embed geographic coordinates in photos. There don't appear to be any free Mac tools that do this to my satisfaction, so the next best thing is to geocode the way you would on Linux. Here's how.

We're going to use the command line program ExifTool (by Phil Harvey) to extract coordinates from a gpx file and embed them in a directory of images.

First, install exiftool using Homebrew. Here's the command:

brew install exiftool
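
You can confirm the install worked by printing the version:

exiftool -ver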

Copy the gpx files into your image directory and initiate the sync with the geotag flag:

exiftool -geotag=gpslog2014-12-10_212401.gpx ./

You can also specify multiple gpx files (e.g. for a multi-day trip):

exiftool -geotag=gpslog2014-12-10_212401.gpx -geotag=gpslog2014-12-07_132315.gpx -geotag=gpslog2014-12-08_181318.gpx -geotag=gpslog2014-12-10_073811.gpx ./

And finally, you can include a time offset with the geosync flag. For instance, I had an 11-hour (39600 seconds) difference due to a timezone hiccup with my new camera, so we can get rid of that:

exiftool -geotag=gpslog2014-12-10_212401.gpx -geotag=gpslog2014-12-07_132315.gpx -geotag=gpslog2014-12-08_181318.gpx -geotag=gpslog2014-12-10_073811.gpx -geosync=39600 ./

It will process the images, preserving each original with an "_original" suffix, and give you a report at the end:

1 directories scanned
193 image files updated
83 image files unchanged
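
To spot-check the result, you can read the coordinates back out of one of the updated files (the filename here is just an example):

exiftool -gpslatitude -gpslongitude IMG_0001.JPG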

If your camera is set to GMT, then put all the GPX files in the same directory as the photos to geocode, and do this:

TZ=GMT exiftool -geotag "*.gpx" *.jpg

For any additional manual geocoding, I fall back on Picasa's Places geotagging to add the coordinates.

If you have Lightroom, then try doing a search for a suitable ExifTool Lightroom plugin, as there seem to be a few.

Snap-CI Deploy to OpenShift

Published on Saturday, November 1, 2014

There are some wonderful CI/CD tools out there right now, and some of them have very usable free tiers. A few good examples include Shippable, Wercker, CloudBees, and Snap-CI. There are others, of course, but these all allow at least one private project to get you started.

I have recently moved my projects to Snap, and my hack for the day needed to be deployed to OpenShift. Although Snap has built-in integrations for some providers, no such integration exists for OpenShift (yet!). However, it takes less than 10 minutes to configure a Deploy step for OpenShift, and here's how.

Add SSH Keys

You will need to add your private SSH key (i.e. id_rsa) to Snap, and your public key (i.e. id_rsa.pub) to OpenShift.

You can create the keys on another machine with the ssh-keygen command and copy them into the corresponding places. In OpenShift, this is under Settings -> Add a new key. Once open, paste in the contents of your id_rsa.pub key.
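
If you would rather not reuse an existing key, you can generate a dedicated pair just for deployments; the file name and comment below are only examples:

ssh-keygen -t rsa -b 4096 -C "snap-to-openshift" -f ~/.ssh/snap_deploy

This produces ~/.ssh/snap_deploy (the private key, for Snap) and ~/.ssh/snap_deploy.pub (the public key, for OpenShift). The rest of this post assumes the id_rsa / id_rsa.pub names, so adjust accordingly.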

In Snap, edit your configuration, navigate to your Deploy step, and look for "Secure Files", then "Add new".

Get the content of the id_rsa key you generated earlier and paste it into the content box, with "/var/go" as the file location (and a real key, of course).

Enable Git Push from Snap

If you've used ssh much, you are probably aware that you can specify an identity file with the "-i" flag. The git command has no such flag yet, but we can create a simple bash script that emulates it (script courtesy of Alvin Abad).

Add another new file in Snap and paste in the script below:

#!/bin/bash

# The MIT License (MIT)
# Copyright (c) 2013 Alvin Abad

if [ $# -eq 0 ]; then
    echo "Git wrapper script that can specify an ssh-key file
Usage:
    git.sh -i ssh-key-file git-command
    "
    exit 1
fi

# remove temporary file on exit
trap 'rm -f /tmp/.git_ssh.$$' 0

if [ "$1" = "-i" ]; then
    SSH_KEY=$2; shift; shift
    echo "ssh -i $SSH_KEY \$@" > /tmp/.git_ssh.$$
    chmod +x /tmp/.git_ssh.$$
    export GIT_SSH=/tmp/.git_ssh.$$
fi

# in case the git command is repeated
[ "$1" = "git" ] && shift

# Run the git command
git "$@"

Give this script the name "git.sh", set the file permissions to "0755", and update the file location to "/var/go".
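
As an optional sanity check, you can list the remote refs through the wrapper before wiring up the real push (the repository URL is the same placeholder used in the next step):

/var/go/git.sh -i /var/go/id_rsa ls-remote ssh://ABCDEFGHIJK123@example.yourdomain.rhcloud.com/~/git/example.git/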

Profit

With all these parts configured correctly, you can add this single line to your Deploy script:

/var/go/git.sh -i /var/go/id_rsa push ssh://ABCDEFGHIJK123@example.yourdomain.rhcloud.com/~/git/example.git/


Re-run the build, check your logs, and it should deploy. Good luck!

Solved: slow build times from Dockerfiles with Python packages (pip)

Published on Wednesday, July 2, 2014

I have recently had the opportunity to begin exploring Docker, the currently hip way to build application containers, and I generally like it. It feels a bit like using Xen back in 2005, when you still had to download it from cl.cam.ac.uk, but there is huge momentum right now. I like the idea of breaking down each component of your application into unique services and bundling them up - it seems clean. The next year is going to be very interesting for Docker; I am especially looking forward to seeing how Google's App Engine allows Docker usage, and what's in store for the likes of Flynn, Deis, CoreOS, or Stackdock.

One element I had been frustrated with was the build time of the image hosting a Django application I'm working on. I kept hearing about crazy low rebuild times, but my container was taking ages to rebuild. I noticed that everything was cached up until the point where I re-added my code, and then pip would reinstall all my packages.

It appeared as though anything after the ADD for my code was being rebuilt, and reading online seemed to confirm this. Most of the steps were very quick, e.g. "EXPOSE 80", but then it hit "RUN pip install -r requirements.txt".

There are various documented ways around this, from using two Dockerfiles to just using packaged libraries. However, I found it easier to just use multiple ADD statements, and the good Docker folks have added caching for them. The idea is to ADD your requirements file first, then RUN pip, and only then ADD your code. This means that code changes no longer invalidate the pip cache.

For instance, I had something (abbreviated snippet) like this:

# Set the base image to Ubuntu
FROM ubuntu:14.04

# Update the sources list
RUN apt-get update
RUN apt-get upgrade -y

# Install basic applications
RUN apt-get install -y build-essential

# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip postgresql-client

# Copy the application folder inside the container
ADD . /app

# Get pip to download and install requirements:
RUN pip install -r /app/requirements.txt

# Expose ports
EXPOSE 80 8000

# Set the default directory where CMD will execute
WORKDIR /app

VOLUME ["/app"]

CMD ["sh", "/app/run.sh"]

And it re-ran pip whenever the code changed. The fix is to ADD just the requirements file and move the RUN pip line up, before the ADD for the rest of the code:

# Set the base image to Ubuntu
FROM ubuntu:14.04

# Update the sources list
RUN apt-get update
RUN apt-get upgrade -y

# Install basic applications
RUN apt-get install -y build-essential

# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip postgresql-client

# Add the requirements file on its own so the pip layer below is cached
# independently of the application code
ADD requirements.txt /app/requirements.txt

# Get pip to download and install requirements:
RUN pip install -r /app/requirements.txt

# Copy the application folder inside the container
ADD . /app

# Expose ports
EXPOSE 80 8000

# Set the default directory where CMD will execute
WORKDIR /app

VOLUME ["/app"]

CMD ["sh", "/app/run.sh"]
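
To see the caching in action, build the image, change some application code, and build it again; the tag below is just an example. On the second build, everything up to and including the RUN pip line should come from the cache:

docker build -t my-django-app .
# ...edit or touch any application file, then rebuild:
docker build -t my-django-app .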

I feel a bit awkward for having missed something that must be so obvious, so hopefully this can help somebody in a similar situation.

TLS Module In SaltStack Not Available (Fixed)

Published on Wednesday, May 7, 2014

I was trying to install HALite, the WebUI for SaltStack, using the provided instructions. However, I kept getting the following errors when trying to create the certificates using Salt:

'tls.create_ca_signed_cert' is not available.
'tls.create_ca' is not available.


Basically, the 'tls' module in Salt simply didn't appear to work. The reason for this is detailed on intothesaltmind.org:

Note: Use of the tls module within Salt requires the pyopenssl python extension.

That makes sense. We can fix this with something like:

apt-get install libffi-dev
pip install -U pyOpenSSL
/etc/init.d/salt-minion restart


Or, better yet, with Salt alone:

salt '*' cmd.run 'apt-get install libffi-dev'
salt '*' pip.install pyOpenSSL
salt '*' cmd.run "service salt-minion restart"
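
To confirm that the minions can now import pyOpenSSL (a blunt check, but it works), you can run something like:

salt '*' cmd.run 'python -c "import OpenSSL; print OpenSSL.__version__"'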


The commands to create the PKI key and certificate should work now.
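
In the HALite instructions these are roughly the following two calls; the CA name ("salt") and common name ("localhost") are just the values from that guide, so adjust them for your setup:

salt-call tls.create_ca salt
salt-call tls.create_ca_signed_cert salt localhost

The output should look something like this: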

Created Private Key: "/etc/pki/salt/salt_ca_cert.key." Created CA "salt": "/etc/pki/salt/salt_ca_cert.crt."

Beers of Myanmar

Published on Wednesday, April 30, 2014

While in Myanmar on a recent trip I did a brief taste comparison of the three main beers available in most supermarkets.


Andaman - Not to my taste, perhaps like XXXX, VB, Natural Light, or a light Steel Reserve.

Myanmar - Quite refreshing, a bit like similar beers in the region, e.g. Chang, Tiger, or Laos Beer.

ABC - An extra stout (and 8%!) in such a hot country? That's a surprise.

Error opening /dev/sda: No medium found

Published on Saturday, March 1, 2014

I have had this issue before, solved it, and had it again.

Let's say you plug a USB drive into a Linux machine and try to access it (mount it, partition it with fdisk/parted, or format it), and you get the error:

Error opening /dev/sda: No medium found


Naturally, the first thing you do is make sure the drive was detected when you plugged it in, so you run 'dmesg' and see:

sd 2:0:0:0: [sda] 125045424 512-byte logical blocks: (64.0 GB/59.6 GiB)


And it appears in /dev:

Computer:~ $ ls /dev/sd*
/dev/sda
Computer:~ $
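
You can reproduce the same error outside of mount/fdisk by trying to read a single sector directly; if the device really has no readable medium, this fails with the same "No medium found" message:

sudo dd if=/dev/sda of=/dev/null bs=512 count=1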


Now what? Here's what has bitten me twice: make sure the drive has enough power. Let's say you plugged a 2.5" USB drive into a Raspberry Pi. The Pi probably can't supply enough current to spin the drive up, but it can supply enough to make the drive recognisable. Or, if you are like me, the USB charger powering the drive is faulty, so even though the drive has power, it doesn't have enough.

The next troubleshooting step should be obvious: give the drive enough power to completely spin up.

Continuous Flow Through Worm Bin

Published on Sunday, January 5, 2014


A few months ago we decided we wanted a worm bin, as we were eating a lot of vegetables, and tossing away bits that weren't used. We were also buying soil for our plants, so it made sense to try to turn one into another.

One of our friends gave us some worms from her compost - no idea what kind - and I built an experimental CFT worm bin (sample plans). We harvested once at about two months, but I don't think it was quite ready. We'll keep experimenting.