Overland Track Lighter Pack Tips

Background

My hiking companion and I recently completed the Overland Track in Tasmania, and they posted a picture of our packs in a related Facebook group. There was an overwhelming response, ranging from “good job!” to “you’re a liar”, “you surely didn’t have a tent”, or “you must be on a tour and didn’t bring food”.

I can understand the skepticism. Upon inspecting what people brought, and never used, there is definitely a tendency for people to pack their fears. Since this track seemed to be the first multi-day backpacking trip for many people, there were a lot of things they likely would not pack again after gaining a little more experience.

The consequences were very real. Most people had knee or foot problems by the time they reached Narcissus Hut, and I was one of only a few people able to hike out (~18km) when the ferry was cancelled (I made it in about 3hr 45min and caught my transport). One of my more lasting memories from the hike was coming across a couple on the trail, one of whom was unable to climb over a fallen tree because their bag was too heavy; their partner had to push them up.

The track was likely especially scary for newcomers, given the weather we encountered was “the worst so far this season”, according to our track transport. A week before we went, the forecast was six days of glorious 5 - 10 C temperatures with only a little drizzle. The night before we flew out of Sydney, it was forecast to snow 1 - 2 mm on one of the days. The actual weather was non-stop rain or snow, temperatures ranging from -2 to 3 C, and we saw blue sky once. Once. I don’t remember ever seeing the sun from start to finish. It felt like we spent more time walking in streams or mud than on dry ground.

The Overland Track. Plan ahead and say goodbye to mobile reception.

Given we encountered just about the worst the trail could throw at us outside of winter, how do we know we brought just the right amount of stuff?

  1. We were one of only two parties from our van transport that even stayed in our tent
  2. We never shivered a single time, nor did we think we were ever in any danger
  3. The ferry was cancelled when we rolled in to Narcissus Hut, so 16 extra people had to stay overnight; about five of them didn’t have enough food, and we ended up sharing ours because we had plenty remaining (that said, we were also 1 1/2 days ahead of schedule)
  4. People routinely borrowed our lighter, as the piezo igniters on their stoves had failed
  5. We had hot meals every night, coffee twice a day, and still had fuel left over
  6. We let three people charge their phones at NH so they could sort out travel arrangements, as they didn’t have a spare power bank

Suggestions From What We Saw

Here’s a list of things we saw people bring, with suggested substitutes that would reduce overall weight without reducing safety or comfort. You might think “that’s only 50g of savings”, but it all adds up. We had by far the lightest bags, with a total pack weight of around 8kg (including 1 litre of water); the heaviest in our van was 23kg, and most were around 15kg.

Camp Shoes

Seen Brought: 2nd pair of sneakers for camp shoes

Better: Crocs/Flip-Flops or hotel slippers

Best: Plastic bags

A lot of people brought a 2nd pair of sneakers just for walking around the huts. Many were a little wiser and brought a lighter pair of Crocs (good with socks, but mine weigh 349g) or flip-flops (mine weigh 155g), but I’d argue that hotel slippers (mine weigh just 39g) serve the same purpose. Or bring two bread bags: when you get to the hut, take off your wet socks, put them immediately on or near the heater, put on your sleep socks, and pull the bread bags on over them. You can then wear your wet shoes without getting your sleep socks wet.

Fresh Fruit / Veggies

Seen Brought: Fresh Fruit/Veggies

Better: Dried fruit / dehydrated veggies

Best: None

Feel free to bring a fresh apple for lunch the first day, but fresh fruit is extremely heavy for the calories it provides. We saw people four days in giving away cucumbers and zucchini. To put this in perspective, 100g of cucumber has around 15 calories, vs. 516 calories for 100g of peanut M&Ms.

Tinned Food

Seen Brought: Cans of tuna

Better: Starkist tuna packets

Best: Jerky or biltong, or just nuts or peanut butter M&Ms

On my first multi-day hike several years ago with this hiking companion, my shopping instructions were: if it has to cook, it needs to be done in less than 3 minutes, and no cans or jars of anything. Bringing a can of tuna, which isn’t even that calorie dense to begin with, means you have to keep carrying that tin for your entire hike. If you must, bring a tuna packet instead, but do a little research, as you can save significant weight by paying attention to the food you bring. Please see Skurka’s post for some overall tips, and then head over to Greenbelly for an actual food/weight breakdown. Please see below for our food breakdown. We deviated a little in what we ended up buying in TAS (e.g. no banana chips), but it was plenty. If you have the time, and like planning, then consider doing the same. Alternatively (this is the guideline I follow when I don’t intend to do much planning), try to buy food at Woolies or Coles that is as close to 2000kJ per 100g as possible.

Initial food planning for two
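If you want to sanity-check labels against that 2000kJ per 100g guideline, the arithmetic is trivial; here’s a throwaway helper (the function name and sample figures are mine, not from any real product):

```shell
# kJ per 100 g from a label's per-serving figures: kJ / grams * 100
density() { awk -v kj="$1" -v g="$2" 'BEGIN { printf "%.0f\n", kj / g * 100 }'; }

density 1030 50   # a 50 g serving at 1030 kJ -> prints 2060, a good pick
density 300 100   # fresh-fruit territory -> prints 300, leave it at home
```

Anything that comes out well below 2000 is probably mostly water you’ll be carrying on your back.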

Tools / Kitchen Stuff

Seen Brought: Small cast iron skillet, hunting knives

Better: Not a cast iron skillet

A lot of people were cooking pretty elaborate meals, which is pretty impressive. They also brought four pans and three canisters of fuel. I hesitate to make recommendations on food, but I’d probably suggest getting some dehydrated meals from Snowys a few weeks in advance, and then you won’t need all those pots and pans. Another nice thing is you can use the package as a container, so that’s one less bowl to bring and clean. Bring one spoon with a long handle (sporks might sound nice, but you can’t easily scoop up cous cous or other small food with them, and I’d worry you might pop a hole in a dehydrated meal bag).

Duplicate Clothing / Cotton

Seen Brought: “I’m wearing four fleeces” or duplicates of every item

Best: Skurka’s Core 13

TAS Parks provides a list of minimum gear that you need to bring, and I don’t think you need more than what is on it. Even better, read Andrew Skurka’s article on the Core 13 items he suggests you bring. We saw quite a few cotton shirts and pants being worn, which was a surprise.

Big Trowel

Seen Brought: Metal garden trowels

Better: Deuce of Spades

Best: Nada

I typically carry a ‘Deuce of Spades’ on any overnight trip, but in the case of the Overland, if doing it again, I’d probably skip bringing it. There are toilets at every site, and the ground was pretty moist, so digging a cat hole wouldn’t be a problem.

Water

Seen Brought: People carrying 5L of water

Better: 2L

Best: 1L + Sawyer Squeeze (filter)

When we went there was water everywhere; it felt like most of our time was spent walking in streams. If you aren’t in a stream, you are never more than 1km from crossing one. Our van driver (an ex-guide) said she didn’t filter often, but for some reason I have a fear of bad water, so I tend to always filter unless high up in the mountains. I brought a 1L water bottle and a filter, and never once needed more than that. Most of the time I just filtered and filled up at the huts.

First Aid Kits

Seen Brought: 1kg kits from Big W

Better: Make it yourself

It seems like quite a few people thought “huh, I need a FAK, I’ll get the next one I see” and ended up with something that has a million bandaids and big gauze pads, but nothing they actually need. You can see what is in my FAK, which probably still has too many wipes, but I can deal with the most common issues: blisters and soreness. It weighs about 60g / 2oz.

Books

Seen Brought: several hardback books

Better: Kindles

Best: skip books and chat with people or Audible

One lady opened her bag and pulled out multiple books, read for 20 minutes, then chatted with people. Bring a Kindle. Or realise it is only 4 - 6 days, leave the books at home, and chat with people. I tend to load up my phone with books on Audible or podcasts.

We Wish We Brought…

You can read above that we had planned more for the experience than most people, and we have done several other multi-day hikes previously. I am immensely glad I read one of Ray Jardine’s books back in ~2002 to learn how to prepare for backpacking and stop packing my fears.

There wasn’t much we wish we had brought, except perhaps some type of mittens that would have blocked the wind. When hiking on the ridges the temperature dropped significantly, and combined with the wind, it made my hands quite cold. Moving kept us generating heat, but stopping on a ridge would have been uncomfortable.

IoT Foray with Sonoff S20 / IFTTT / Lambda / CloudMQTT

I recently purchased an Echo from Amazon, and we were contemplating how to better integrate it with our somewhat minimalistic home. I thought it would be interesting to link it to a WiFi-enabled power outlet, but unfortunately they are pretty expensive in Australia.

Then I stumbled across the Sonoff devices by Itead, and learned that they were somewhat hackable via a custom firmware. Coincidentally I received the two devices on the same day my daughter was off sick, so when she had her nap, I got hacking.

The first bottleneck was discovering that the units I received did not have any headers. A little quick soldering later, and we had headers.

No headers mom :(

Now we have headers!

A note of warning: the $2 programmer I got from AliExpress is labelled for both 3.3v and 5v, but the output is actually 5v. I’m glad I measured it with my multimeter and used a random 3.3v breadboard supply instead.

In hindsight I wish I had just purchased the FTDI programmer from Itead. It looks pretty neat.

After following the rest of the Tasmota hardware instructions, and then the PlatformIO instructions, I was able to successfully flash both units with the custom firmware.

I then created a Lambda function that sends a signal to CloudMQTT, and connected the two devices.

Voila!

Geocoding Photos (Mac)

I’ve recently started using OS X (again), and am really enjoying it (again). One Windows-only tool that I found really useful is GeoSetter, which allows you to embed geo coordinates in photos. There don’t appear to be any free Mac tools that do this to my satisfaction, so the next best thing is to geocode like you would on Linux. Here’s how.

We’re going to use the command line program ExifTool (by Phil Harvey) to extract coordinates from a gpx file and embed them in a directory of images.

Firstly, install exiftool using brew. Here’s the command:

brew install exiftool

Copy the gpx files into your image directory and initiate the sync with the geotag flag:

exiftool -geotag=gpslog2014-12-10_212401.gpx ./

It is also possible to specify multiple gpx files (e.g. for a multi-day trip):

exiftool -geotag=gpslog2014-12-10_212401.gpx -geotag=gpslog2014-12-07_132315.gpx -geotag=gpslog2014-12-08_181318.gpx -geotag=gpslog2014-12-10_073811.gpx ./

And finally, you can include a time offset with the geosync flag. For instance, I had an 11-hour (39600 second) difference due to a timezone hiccup with my new camera, so we can correct for that:

exiftool -geotag=gpslog2014-12-10_212401.gpx -geotag=gpslog2014-12-07_132315.gpx -geotag=gpslog2014-12-08_181318.gpx -geotag=gpslog2014-12-10_073811.gpx -geosync=39600 ./

It will process the images, backing up each original with an “_original” suffix, and give you a report at the end:

1 directories scanned
193 image files updated
83 image files unchanged
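As an aside, the 39600-second figure passed to -geosync is just the hour offset multiplied by 3600; if you’d rather not do that in your head, a one-liner works (the helper name is mine):

```shell
# Convert an hour offset into the seconds that -geosync expects
offset_seconds() { echo $(( $1 * 3600 )); }

offset_seconds 11   # prints 39600
```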

If your camera is set to GMT, then put all the GPX files in the same directory as the photos to geocode, and do this:

TZ=GMT exiftool -geotag "*.gpx" *.jpg

For any additional manual geocoding I fall back on Picasa’s Places GeoTag to add the coordinates.

If you have Lightroom, then try doing a search for a suitable ExifTool Lightroom plugin, as there seem to be a few.

Snap-CI Deploy to OpenShift

There are some wonderful CI / CD tools out there right now, and some of them have very usable free tiers. A few good examples include Shippable, Wercker, CloudBees, and Snap-CI. There are others, of course, but these all allow at least one private project to get started.

I have recently moved my projects to Snap, and my hack for the day needed to be deployed to OpenShift. Although Snap has built-in integrations for some providers, no such integration exists for OpenShift (yet!). However, it takes less than 10 minutes to configure a Deploy step for OpenShift, and here’s how.

Add SSH Keys
You will need to add your private SSH key (i.e. id_rsa) to Snap, and your public key (i.e. id_rsa.pub) to OpenShift.

You can create the keys on another machine with the ssh-keygen command and copy them into the corresponding places. In OpenShift, this is under Settings -> Add a new key. Once open, paste in the contents of your id_rsa.pub key.

In Snap, edit your configuration, navigate to your Deploy step, and look for “Secure Files” and “Add new”

Take the content of the id_rsa key you generated earlier and paste it in the content box. It should look like this, with “/var/go” as the file location, except with a real key:

Enable Git Push from Snap

If you’ve used ssh much, you are probably aware that you can specify an identity file with the “-i” flag. The git command has no such flag yet, but we can create a simple bash script that emulates this (script courtesy of Alvin Abad).

Add another New File in Snap and paste in the below script:

#!/bin/bash
 
# The MIT License (MIT)
# Copyright (c) 2013 Alvin Abad
 
if [ $# -eq 0 ]; then
    echo "Git wrapper script that can specify an ssh-key file
Usage:
    git.sh -i ssh-key-file git-command
    "
    exit 1
fi
 
# remove temporary file on exit
trap 'rm -f /tmp/.git_ssh.$$' 0
 
if [ "$1" = "-i" ]; then
    SSH_KEY=$2; shift; shift
    echo "ssh -i $SSH_KEY \$@" > /tmp/.git_ssh.$$
    chmod +x /tmp/.git_ssh.$$
    export GIT_SSH=/tmp/.git_ssh.$$
fi
 
# in case the git command is repeated
[ "$1" = "git" ] && shift
 
# Run the git command
git "$@"

Give this script the name “git.sh”, set the file permissions to “0755”, and update the file location to “/var/go”.
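For the curious, the trick the script relies on is git’s GIT_SSH environment variable: git runs whatever command that variable names as its ssh transport. A minimal standalone sketch of the same idea (the file and key paths here are illustrative):

```shell
# Write a one-off ssh wrapper that always supplies our deploy key
cat > /tmp/git_ssh_demo <<'EOF'
#!/bin/sh
exec ssh -i /var/go/id_rsa "$@"
EOF
chmod +x /tmp/git_ssh_demo

# Any git network command now authenticates with that key, e.g.:
# GIT_SSH=/tmp/git_ssh_demo git ls-remote ssh://user@example.com/repo.git
```

The git.sh script above does exactly this, just with a temporary wrapper that it cleans up on exit.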

Profit
With all these parts configured correctly you can add this single line to your Deploy script:

/var/go/git.sh -i /var/go/id_rsa push ssh://[email protected]/~/git/example.git/

Re-run the build, check your logs, and it should deploy. Good luck!

Solved: slow build times from Dockerfiles with Python packages (pip)

I have recently had the opportunity to begin exploring Docker, the currently hip way to build application containers, and I generally like it. It feels a bit like using Xen back in 2005, when you still had to download it from cl.cam.ac.uk, but there is huge momentum right now. I like the idea of breaking down each component of your application into unique services and bundling them up - it seems clean. The next year is going to be very interesting with Docker, as I am especially looking forward to seeing how Google’s App Engine allows Docker usage, or what’s in store for the likes of Flynn, Deis, CoreOS, or Stackdock.

One element I had been frustrated with is the build time of my image to host a Django application I’m working on. I kept hearing these crazy low rebuild times, but my container was taking ages to rebuild. I noticed that it was cached up until I re-added my code, and then pip would reinstall all my packages.

It appeared as though anything after the ADD for my code was being rebuilt, and reading online seemed to confirm this. Most of the items were very quick, e.g. “EXPOSE 80”, but then it hit “RUN pip install -r requirements.txt”.

There are various documented ways around this, from two Dockerfiles to just using packaged libraries. However, I found it easier to just use multiple ADD statements, and the good Docker folks have added caching for them. The idea is to ADD your requirements first, then RUN pip, and then ADD your code. This will mean that any code changes don’t invalidate the pip cache.

For instance, I had something (abbreviated snippet) like this:

# Set the base image to Ubuntu
FROM ubuntu:14.04

# Update the sources list
RUN apt-get update
RUN apt-get upgrade -y

# Install basic applications
RUN apt-get install -y build-essential

# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip postgresql-client

# Copy the application folder inside the container
ADD . /app

# Get pip to download and install requirements:
RUN pip install -r /app/requirements.txt

# Expose ports
EXPOSE 80 8000

# Set the default directory where CMD will execute
WORKDIR /app

VOLUME ["/app"]

CMD ["sh", "/app/run.sh"]

And it reran pip whenever the code changed. Just ADD the requirements file first and move the RUN pip line above the code ADD:

# Set the base image to Ubuntu
FROM ubuntu:14.04

# Update the sources list
RUN apt-get update
RUN apt-get upgrade -y

# Install basic applications
RUN apt-get install -y build-essential

# Install Python and Basic Python Tools
RUN apt-get install -y python python-dev python-distribute python-pip postgresql-client

ADD requirements.txt /app/requirements.txt

# Get pip to download and install requirements:
RUN pip install -r /app/requirements.txt

# Copy the application folder inside the container
ADD . /app

# Expose ports
EXPOSE 80 8000

# Set the default directory where CMD will execute
WORKDIR /app

VOLUME ["/app"]

CMD ["sh", "/app/run.sh"]

I feel a bit awkward for having missed something that must be so obvious, so hopefully this can help somebody in a similar situation.

TLS Module In SaltStack Not Available (Fixed)

I was trying to install HALite, the WebUI for SaltStack, using the provided instructions. However, I kept getting the following errors when trying to create the certificates using Salt:

'tls.create_ca_signed_cert' is not available.  
'tls.create_ca' is not available.

Basically, the ’tls’ module in Salt simply didn’t appear to work. The reason for this is detailed on intothesaltmind.org:

Note: Use of the tls module within Salt requires the pyopenssl python extension.

That makes sense. We can fix this with something like:

apt-get install libffi-dev  
pip install -U pyOpenSSL  
/etc/init.d/salt-minion restart

Or, better yet, with Salt alone:

salt '*' cmd.run 'apt-get install libffi-dev'  
salt '*' pip.install pyOpenSSL  
salt '*' cmd.run "service salt-minion restart"

The commands to create the PKI key should work now:

Created Private Key: "/etc/pki/salt/salt_ca_cert.key." Created CA "salt": "/etc/pki/salt/salt_ca_cert.crt."  

Beers of Myanmar

While in Myanmar on a recent trip I did a brief taste comparison of the three main beers available in most supermarkets.

Andaman - Not to my taste; perhaps like XXXX, VB, Natural Light, or a light Steel Reserve.

Myanmar - Quite refreshing, a bit like similar beers in the region, e.g. Chang, Tiger, or Beerlao.

ABC - An extra stout (and 8%!) in such a hot country? That’s a surprise.

Error opening /dev/sda: No medium found

I have had this issue before, solved it, and had it again.

Let’s say you plug in a USB drive into a Linux machine, and try to access it (mount it, partition it with fdisk/parted, or format it), and you get the error

Error opening /dev/sda: No medium found  

Naturally the first thing you will do is ensure that it appeared when you plugged it in, so you run ‘dmesg’ and get:

sd 2:0:0:0: [sda] 125045424 512-byte logical blocks: (64.0 GB/59.6 GiB)  

And it appears in /dev

Computer:~ $ ls /dev/sd*  
/dev/sda  
Computer:~ $  

Now what? Here’s what has bitten me twice: make sure the drive has enough power. Let’s say you plugged a 2.5" USB drive into a Raspberry Pi. The Pi probably can’t supply enough current to spin the drive up, but it can supply enough to make the drive recognisable. Or, if you are like me, the USB charger powering the drive is faulty, so even though the drive has power, it doesn’t have enough.

The next troubleshooting step should be obvious: give the drive enough power to completely spin up.

Continuous Flow Through Worm Bin

Status: ✅

A few months ago we decided we wanted a worm bin, as we were eating a lot of vegetables, and tossing away bits that weren’t used. We were also buying soil for our plants, so it made sense to try to turn one into another.

One of our friends gave us some worms from her compost (no idea what kind), and I built an experimental CFT worm bin (sample plans). We harvested once at about two months, but I don’t think it was quite ready. We’ll keep experimenting.

Free Splunk Hosting

I first used Splunk about 10 years ago after an old colleague installed it on a computer in the corner, and ever since then I have preached about it. If you have log data, of any kind, I’d recommend you give it a go.

The Splunk people have a few pretty good options for trying Splunk out: you can use either Splunk Storm or Splunk Free. The first option is hosted and has a generous storage allowance, but does not allow long-term storage of data. I send system log data to Splunk Storm.

However, what if you don’t have a lot of data, but you want to keep that data forever? After reading Ed Hunsinger’s Go Splunk Yourself entry about using it for Quantified Self data, I knew I had to do the same.

From personal experience, Splunk requires at least 1GB to even start. You can probably get it to run on less, but I haven’t had much success. This leaves two options: look at Low End Box for a VPS with enough memory (as cheap as $5/month), of use OpenShift. Red Hat generously provides three “gears” to host applications, for free, and each with 1GB of memory. I have sort of a love-hate relationship with OpenShift, maybe a bit like using OAuth. Red Hat calls OpenShift the “Open Hybrid Cloud Application Platform”, and I can attest that it is really this. They have provided a method to bundle an application stack and push it into production without needing to fuss about infrastructure, or even provisioning and management of the application. It feels like what would happen if Google App Engine and Amazon’s EC2 had a child. Heroku or dotCloud might be its closest alternatives.

Anyways, this isn’t a review of OpenShift, although it would be a positive review, but instead a guide to using OpenShift to host Splunk. I first installed Splunk in a gear using Nginx as a proxy, and it worked. However, this felt overly complex, and after one of my colleagues started working on installing Splunk in a cartridge, I eventually agreed this would be the way to go. The result was a Splunk cartridge that can be installed inside any existing gear. Here are the instructions; you need an OpenShift account, obviously. The install should take fewer than ten clicks of your mouse, and one copy/paste.

From the cartridge’s GitHub README:

  1. Create an Application based on existing web framework. If in doubt, just pick “Do-It-Yourself 0.1” or “Python 2.7”
  2. Click on “Continue to the application overview page.”
  3. On the Application page, click on “Or, see the entire list of cartridges you can add”.
  4. Under “Install your own cartridge” enter the following URL: https://raw.github.com/kelvinn/openshift-splunk-cartridge/master/metadata/manifest.yml
  5. Click Next, then Add Cartridge. Wait a few minutes for Splunk to download and install.
  6. Logon to Splunk at: https://your-app.rhcloud.com/ui

More details can be read on the cartridge’s GitHub page, and I would especially direct you to the limitations of this configuration. This will all stop working if Splunk makes the installer file unavailable, but I will deal with that when the time comes. Feel free to alert me if this happens.