Moving to Hugo

After using Metalsmith as my previous blog engine, I decided to move to Hugo.

I originally chose Metalsmith for its flexibility and the fact that I could easily write new additions to my site.

The downside is that it is time-consuming to get right. I spent a bit of time on it, then somewhat stopped: I never got the theming fully finished, or RSS support integrated.

Hugo comes with everything fully configured out of the box. I was able to drop my markdown files into the content folder and generate a new site with no trouble.
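The whole move boiled down to a handful of commands, roughly like the following (the theme and paths here are illustrative, not my actual setup):

    # create a fresh Hugo site and pull in a theme
    hugo new site blog
    cd blog
    git clone https://github.com/someone/some-theme themes/some-theme   # then set theme in config.toml

    # drop the existing markdown posts straight into the content folder
    cp ~/old-site/src/posts/*.md content/post/

    # build the site into ./public
    hugo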

So far I have been very happy with this, and can imagine a lot of improvements to the way I generate this site.

I still use GitHub to store the content for my site, rebuild with Travis and push all the content to an S3 bucket that I serve out of.
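For reference, a minimal sketch of what that kind of .travis.yml can look like; the bucket name and the Hugo install step are illustrative:

    # .travis.yml sketch: build with Hugo, deploy ./public to S3
    language: generic
    install:
      - wget -qO- "$HUGO_RELEASE_URL" | tar xz   # $HUGO_RELEASE_URL: a pinned Hugo release tarball
    script:
      - ./hugo                                   # builds the site into ./public
    deploy:
      provider: s3
      access_key_id: $AWS_ACCESS_KEY_ID
      secret_access_key: $AWS_SECRET_ACCESS_KEY
      bucket: my-blog-bucket
      local_dir: public
      skip_cleanup: true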

In the future I would love to put CloudFront with a Let’s Encrypt certificate in front of the S3 bucket. I’ll also look into using a Lambda such as this one to generate the site. Eventually I should be able to create a simple admin page that can write to the S3 bucket and regenerate the site.

Still, until then, the new theme and functionality have been a great improvement over my old site.

Desktop Config 2016

For the past two or so years, I have been running my desktop computer dual booting Windows and Arch Linux. This has been a good solution for me, being able to use Windows for media consumption and the occasional game and also having a proper Linux operating system as a development environment. It’s occasionally frustrating having to reboot between systems, especially on weekends where I might swap between working and leisure multiple times.

Over December my desktop pulled down a new Windows 10 update. I have Windows set to automatically update and wasn’t particularly worried about it applying updates.

However, this particular update decided it needed to re-partition my computer. I had a pre-existing Windows 8 recovery partition and a Windows 10 partition. Deciding that the Windows 8 recovery partition was too small to convert into a Windows 10 recovery partition, the updater decided to split the Windows 10 partition and create a new recovery partition at the end.

So far so good, this didn’t touch my Linux partitions at all.

The trouble started the next time I tried to boot into Arch.

Initially I couldn’t boot at all, as the EFI bootloader was pointing at the wrong partition. Easy fix: temporarily override the setting for a single boot, then permanently fix it once booted into the system.
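One way to do that two-step fix, assuming efibootmgr from a live environment (the entry number is illustrative):

    efibootmgr                      # list the boot entries and current BootOrder
    efibootmgr --bootnext 0003      # one-off: boot entry 0003 on the next boot only

    # once booted into the system, make it permanent
    efibootmgr --bootorder 0003,0001,0002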

After getting the system bootable again, I found the network was down. I wasn’t really sure why the network wouldn’t come up, and didn’t really have the time to debug it before I had to head off to visit family.

Not particularly happy that Arch was broken, and slightly bothered that Windows now had more partitions than it needed, I decided to start afresh.

My data was all stored on some spinning-disk HDDs, with the two operating systems on a faster SSD.

I wiped the SSD and installed Windows 10 from scratch, letting it create partitions as necessary.

At this point I stopped and thought about whether I wanted to install Arch on another partition again or if I could use a virtual machine to get the best of both worlds.

I have used VirtualBox many times in the past, and whilst it’s perfectly functional, it’s not what I had in mind here. I ended up giving Windows’ built-in Hyper-V a try. I had last used it when Windows 8 first came out and I heard that it had a built-in hypervisor. At that point I found myself unable to use it, as I needed access to a GUI for some of the things I was working with and Hyper-V had (and still has) abysmal video performance.

What I had missed at that point was that it was deeply integrated into the lifecycle of the machine and very performant.

So I created a new Virtual Machine in Hyper-V and got ready to install Linux.

I decided to install Debian instead of Arch this time, as it’s closer to the Ubuntu servers that I use at work, but still provides a systemd environment that I was comfortable with from Arch.
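Creating the VM from PowerShell looks roughly like this (names, sizes and paths are illustrative, and Secure Boot is switched off because the Linux installers of the time generally didn’t support it):

    # sketch: create a Generation 2 VM with dynamic memory for the Debian guest
    New-VM -Name "Debian" -Generation 2 -MemoryStartupBytes 1GB `
        -NewVHDPath "D:\VMs\debian.vhdx" -NewVHDSizeBytes 40GB `
        -SwitchName "External Switch"
    Set-VM -Name "Debian" -DynamicMemory -MemoryMaximumBytes 4GB
    Set-VMFirmware -VMName "Debian" -EnableSecureBoot Off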

This setup has been running for a few weeks now, and I have been very happy with it. Hyper-V is aware of the Windows lifecycle: it will start VMs when Windows starts and will cleanly shut down guest OSes when the host is shutting down.

In addition it seems to be very memory efficient, with dynamic memory allocation. I have assigned my VM 4GB of memory, but Hyper-V has currently only allocated it 768MB. Given that it’s running a server edition with very few utilities, this seems generous but sensible.

The biggest problem in this whole setup has been finding a good way to connect. From my work OS X laptop, I can connect over SSH with no problems. On Windows, the built-in Virtual Machine Connection utility provides a functional but sub-optimal way to connect to the VM. Ultimately I settled on using the Secure Shell extension for Google Chrome to access my desktop.

Unlike when I provisioned my Arch box last time, this machine is fully configured with Ansible. If I need to go through this again in another six months, it should be fairly trivial.

Headless Raspberry Pi

I have been working on getting my Raspberry Pi up and running again.

In the past I have used it as a local fileserver on my network, a networked Time Machine/Time Capsule volume and a BitTorrent Sync client.

Every time I used it as a local server, it has been as a headless server attached to my router with a network cable and a USB external hard drive.

The two main problems I have had with this in the past are:

  1. The network and USB share the same bus, leading to very poor performance
  2. Whilst running the rPi as a headless server was easy, setting it up to be so was not

Point 1 I can’t do anything about :(

As for point 2, this has been somewhat self-inflicted, as I have used the NOOBS bootstrapper for ease of use.

This time though I followed the instructions at Arch Linux ARM. This lets me bootstrap the server into a usable networked state from another Linux machine. Sadly this process doesn’t work from an OS X machine.

To then provision this raw server, I created an Ansible repository that has a bootstrap role (to create a user account and enable SSH security) and another playbook to provision services on the rPi once the bootstrap is completed.
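A heavily trimmed sketch of what the bootstrap playbook does; the user name and key path here are illustrative:

    # bootstrap.yml: run once against the freshly imaged rPi
    - hosts: rpi
      become: yes
      tasks:
        - name: Create my user account
          user:
            name: me
            groups: sudo
            shell: /bin/bash

        - name: Install my SSH public key
          authorized_key:
            user: me
            key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

        - name: Disable SSH password authentication
          lineinfile:
            dest: /etc/ssh/sshd_config
            regexp: '^#?PasswordAuthentication'
            line: 'PasswordAuthentication no'
          notify: restart sshd

      handlers:
        - name: restart sshd
          service:
            name: sshd
            state: restarted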

Things I Forget Every Time - File Permissions

Whenever I am changing file permissions, I always forget what each numeric value is equal to:

  • 1 = Execute
  • 2 = Write
  • 4 = Read
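The digit for each of owner, group and other is just the sum of those values:

    chmod 644 notes.txt    # owner 6 = 4+2 (rw-), group and other 4 (r--)
    chmod 755 script.sh    # owner 7 = 4+2+1 (rwx), group and other 5 = 4+1 (r-x)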

Things I Forget Every Time - Different Implementations of Regex

  • Python
  • JavaScript
  • VIM / Sublime Text / Atom
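A tiny example of the kind of difference that trips me up, matching one or more digits in each:

    Python:      re.search(r"\d+", s)    # + means one-or-more, as written
    JavaScript:  /\d+/.test(s)           # same syntax as Python here
    Vim:         /\d\+                   # + needs escaping, unless the pattern starts with \v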

Things I Forget Every Time - ln

When making a symbolic link, I always forget the order of the parameters.

ln -s TARGET_OF_SYMLINK WHERE_THE_SYMLINK_IS_STORED

For some reason I always get the two swapped around.
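A concrete example, linking a dotfile out of a repository (the paths are illustrative):

    ln -s ~/dotfiles/vimrc ~/.vimrc    # target first, then where the link lives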

Hosting

Having used GitHub pages in the past, I initially got this blog up and running on it pretty quickly. Over time though, I wanted to start hosting it myself, so I started the transition to AWS.

Domain + DNS

The first stage was migrating my DNS from the free Namecheap servers to Route 53. This was pretty easy, mostly creating some hosted zones and then copy-pasting some strings around. At this point I still had Route 53 directing traffic through to GitHub Pages.

The second stage involved migrating my domains from Namecheap to AWS as well. Route 53 makes this pretty easy, though you get charged again for domain registration (with domain expiry reset to the date you transferred).

Once the domains were transferred, I pointed them at the DNS records and everything kept working as per normal.

Hosting

As this blog is a simple static site, I just uploaded the contents into an S3 bucket I created. Once uploaded, I turned on Static Website Hosting, setting the index document to index.html.
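With the AWS CLI, the same steps look roughly like this (the bucket name and build directory are illustrative):

    aws s3 mb s3://my-blog-bucket
    aws s3 sync ./build s3://my-blog-bucket --acl public-read
    aws s3 website s3://my-blog-bucket --index-document index.html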

Hooking it up

The final stage was to point the Route 53 hosted zones at the S3 bucket. This is a simple A record alias that points straight at the bucket. Wait for the old DNS entries to expire and watch the site go live!
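For the record, the alias record looks something like this in Route 53’s change-batch format (the domain is illustrative; the hosted zone ID is the fixed one AWS publishes for us-east-1 S3 website endpoints):

    {
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "example.com.",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z3AQBSTGFYJSTF",
            "DNSName": "s3-website-us-east-1.amazonaws.com.",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }

It then gets applied with aws route53 change-resource-record-sets --hosted-zone-id <your zone> --change-batch file://record.json.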

Getting started with Metalsmith

Metalsmith is a simple little static site generator which acts a lot like Gulp: it pipes a series of input files through a series of transformations (provided by plugins), then outputs a directory with the resulting site.

I ended up deciding to use Metalsmith due to its simple design, ease of getting started and the ability to eventually create whatever kind of site I want. I may end up revisiting the decision, but having all my posts in markdown files should make it fairly easy to transition to any other systems I may want to play with in the future.

Create a new project:

    npm init
    npm install metalsmith --save-dev

From there on, the getting started docs get a little more hazy.

Your best bet is to have a look at the examples in the GitHub repository to get an idea of a few different ways you can configure it. I used the static site, Wintersmith and Jekyll examples to get going.

Metalsmith has two options for configuring its behaviour:

  1. As a JS file (that is somewhat reminiscent of Gulp)
  2. As a metalsmith.json file that is parsed by the CLI tool

I am using the JS file approach, and you can see the repository that contains the source code.
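Roughly, the JS file approach looks like this; the plugins here are illustrative rather than my exact setup:

    // build.js: pipe src/ through some plugins and write the result to build/
    var Metalsmith = require('metalsmith');
    var markdown   = require('metalsmith-markdown');
    var layouts    = require('metalsmith-layouts');

    Metalsmith(__dirname)
      .source('./src')          // markdown posts live here
      .destination('./build')   // the generated site ends up here
      .use(markdown())          // *.md -> *.html
      .use(layouts({ engine: 'handlebars' }))
      .build(function (err) {   // kick off the pipeline
        if (err) throw err;
      });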

So far, it’s working pretty well.

Permalink structure

So, whilst working through my Metalsmith setup, I was faced with the choice of how to structure my permalinks.

The example given in the docs shows the use of just /postName, which works, but gives me a vague concern about having post title conflicts.

In the past I had also seen and used /YYYY/MM/DD/postName.

Getting my OCD on, I decided to research the topic a little and see what the current best practices are.

A basic search turns up a ton of results for people configuring WordPress. The advice given is mostly anecdotal, aimed firstly at WordPress performance and secondly at SEO.

Generally the WordPress community seemed to settle on including the date in the URL, then several years later decided to drop it as unnecessary and possibly damaging to SEO.

Good arguments were given both ways, so I decided to ignore it all and go straight to the predominant indexer of links on the web, Google.

The best resource is the Google SEO Starter Guide which also links to a support article that has additional information.

General notes from the SEO Starter Guide:

  • Simple-to-understand URLs will convey content information easily
  • Creating descriptive categories and filenames for the documents on your website can not only help you keep your site better organized, but it could also lead to better crawling of your documents by search engines
  • If your URL contains relevant words, this provides users and search engines with more information about the page than an ID or oddly named parameter would
  • Remember that the URL to a document is displayed as part of a search result in Google, below the document’s title and snippet

SEO Starter Guide URL best practices:

  • Use words in URLs
  • Create a simple directory structure
  • Provide one version of a URL to reach a document
  • Plan out your navigation based on your homepage
  • Allow for the possibility of a part of the URL being removed
  • Create a naturally flowing hierarchy

Notes from the Webmaster Support Article:

  • A site’s URL structure should be as simple as possible
  • Consider using punctuation in your URLs
    • We recommend that you use hyphens (-) instead of underscores (_) in your URLs
  • Whenever possible, shorten URLs by trimming unnecessary parameters

So given that the shorter the URL the better, the ideal URL structure would be something along the lines of /post-name.
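In Metalsmith terms this ends up being a single plugin call; a sketch assuming the metalsmith-permalinks plugin:

    // rewrites build/my-first-post.html to build/my-first-post/index.html,
    // giving the /post-name style URL with no date component
    var Metalsmith = require('metalsmith');
    var markdown   = require('metalsmith-markdown');
    var permalinks = require('metalsmith-permalinks');

    Metalsmith(__dirname)
      .source('./src')
      .destination('./build')
      .use(markdown())
      .use(permalinks({ pattern: ':title' }))
      .build(function (err) {
        if (err) throw err;
      });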

Choosing a static site generator

So, given that I want to use GitHub Pages (for now), I have been looking into static site generators.

In the past I have used Jekyll/Octopress, but they felt a little too cumbersome to me. From memory, they were relatively slow to generate a site and (Octopress at least) involved cloning a repository to get started, which left your git history for the site full of non-content commits.

Starting over I had a quick think about the main things that would be important to me in terms of choosing a static site generator.

  • JavaScript powered
  • Fast
  • Simple to understand
  • Uses Markdown
  • Just create a file and run the build

JavaScript because it’s what I do most of my work in these days, and I find it pretty easy to read and understand.

Fast because I want to reduce the amount of friction involved in writing.

Needs to be simple to understand, so that I can control the build process and tailor it to my needs without much aggravation.

Markdown should be a usable post language, because I am used to it and it is fairly ubiquitous.

Simple to create files: I don’t really want to remember more CLI commands just to create a post. I just want to touch a file, open it in vim to edit, build it with some standard process, then commit and be done.

So I started looking around at site generators with these criteria in mind and stumbled on Staticgen, which JokeyRhyme had mentioned to me a little while ago.

Narrowing down my search to the top three JS site generators, I ended up with Harp, Hexo and Metalsmith.

Being lazy here, I discounted Harp as serving needs way more complex than mine, with a built-in web server that compiles on the fly, optionally producing a static site.

I had a quick look through the Hexo documentation and decided that it too had way more power than I needed. Plus it had a CLI tool that I would forget how to use.

Metalsmith seemed to be just about right. Very simple to use. It doesn’t provide all the functionality of the other two, but allows me to pull in plugins for that sort of functionality if/when I want to introduce that complexity.

Hello world!

Hello world!

I just re-discovered that I had this domain and already had it hooked up to GitHub pages.

Deciding to put it to somewhat better use, I am going to write up some content about projects I am working on, etc.