Beginning a new approach to blogging with Eleventy

15 October 2018

As foreshadowed in my last post, I've now stopped publishing with Ghost and moved to the static site generator Eleventy. I plan to write about some of the more technical aspects of how I did this in a future post, but for this one I want to explain broadly what I've done, and why.

The 'classic' case for, and explanation of, modern static site generators was made by Matt Biilmann in Smashing Magazine back in 2015. For Biilmann, it's mostly about performance, but for me it's more about control and a sense that publishing on the web has become far more complicated than it needs to be. As readers of my Marginalia series may have noticed, I've been reading and listening to a lot of stuff recently about minimal technology, and a 'brutalist' approach to web design. Publishing with a static site generator allows me to control to a much greater degree what's actually getting published, and removes a bunch of technology from the stack needed to get it onto your screen. This means that there are fewer points of failure and fewer things that need to be patched or upgraded.

Ironically, whilst systems like WordPress and Ghost require more configuration and maintenance than a static site, they also provide less control over the things that matter to me. Some static site generators will be more flexible than others in this regard, but part of the reason I settled on Eleventy is that it provides a very large degree of flexibility and customisation - Zach Leatherman has done a good job of ensuring his system doesn't try to do everything itself, filling a similar niche to Metalsmith but a bit less intimidating. There were four specific things I wanted total control over:

  1. everything in the HEAD section generally
  2. the referrer meta tag
  3. the meta tags for open graph and twitter images
  4. permalinks

The <head>

Most web publishing systems provide users, or at least theme creators, the ability to inject tags and scripts into the head, header and/or footer of each page. This is useful, but what's usually not possible is to remove tags that the system itself creates. This may not necessarily sound like a big deal, but it has important ramifications. Firstly, if your system automatically inserts a particular meta tag that has multiple potential values, you have to go with the value chosen by the system designers. Secondly, if they decide to insert a particular script or tag you don't want there at all, there's not much you can do. In the case of Ghost, it does both (if you share my views about best practices in web publishing).

Firstly, Ghost has baked Google AMP support into the publishing system itself: you can't not publish AMP-friendly posts using Ghost. The AMP site does a good job of hiding the fact that this is a Google project, but most of what you need to know about it is in the headings under About - Who is AMP for?. The four things listed are 'AMP for Publishers', 'AMP for Advertisers', 'AMP for Ecommerce' and 'AMP for Ad Tech Platforms'. AMP is sold as a way to improve the reader experience and speed sites up, but 'Ad Tech Platforms' (like Google) are the cause of the problem AMP is allegedly trying to solve. AMP is really about Google gaining more data and more control over publishing, and I want nothing to do with it.

Secondly, Ghost uses the referrer meta tag with the content attribute set to no-referrer-when-downgrade. This means that any link from an https site to an http site won't pass on the referer header in the http request: but if I link to an https page it still will. I want my referrer tag to be set to no-referrer, for the reasons outlined in Eric Hellman's useful post about the privacy implications of the referer header and referrer meta tag. Basically, it's nobody's business if you're reading my blog posts (more on this later).
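The tag I want is a single line of standard HTML:

```html
<!-- Ask browsers to omit the Referer header on all outbound requests -->
<meta name="referrer" content="no-referrer">
```

With a static site generator this goes straight into my head template once, and that's the end of it.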

Permalinks

I wrote a little bit about Ghost's strangely forgiving attitude to permalinks in my last post. The particular problem I had when it came to migrating my blog to a static site was that I wanted to maintain all the existing permalinks, but change the URL pattern for any new posts. In systems like WordPress and Ghost this is more or less impossible unless you start mucking around with redirects on the actual webserver. Eleventy allows me to do something pretty cool, however, and it's very simple. Each post is written in a Markdown file, and has 'front matter' at the top with basic metadata. A basic frontmatter section looks like this:

---
layout: post
title: Beginning a new approach to blogging with Eleventy
author: Hugh Rundle
tags: ['GLAM blog club', 'blogging', 'post']
---


With Eleventy you can optionally add a permalink value, which will override any generic rules you have in place regarding how page URLs are created. I wrote a script to extract all my old posts out of Ghost, and among other things it puts the permalink in the frontmatter of the extracted file. This allowed me to avoid breaking old permalinks which use the format YYYY/MM/DD but stop using a dated format for new posts (it seems like unnecessary and ugly cruft when the date is at the top of each post anyway).
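So an extracted legacy post ends up with front matter along these lines (the title and permalink below are invented examples following the old YYYY/MM/DD pattern, not a real post):

```yaml
---
layout: post
title: An older post migrated from Ghost
author: Hugh Rundle
tags: ['blogging', 'post']
# Explicit permalink overrides the default URL pattern for this file only
permalink: "2016/05/04/an-older-post/"
---
```

New posts simply omit the permalink key and fall through to the undated default.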

Reducing bloat and trackers

Given my minor tirade about AMP and commitment to no-referrer above, you may be wondering about tracking scripts. WordPress.com has an analytics system built in, and when I moved to Ghost I set up my own Matomo (formerly Piwik) instance. At the time I felt this was a good compromise between my desire to know which pages were most popular on my blog, and my desire not to feed the Google machine with your browsing habits. Even though the stats only go to me, however, having a tracking system is a signal that I think tracking reading habits is normal and reasonable - and also that it's useful. I'm quite doubtful about all three. I literally can't remember when I last checked the stats on my blog or the newCardigan website, which both used my Matomo analytics server until yesterday. Whenever I have looked at them, they give me information I already know and can't do anything useful with: my two most popular posts ever were one about a lack of investment in and understanding of core technology in librarianship (widely misinterpreted as a post protesting the use of 3D printers), and a post about migrating from WordPress to Ghost. It's vaguely interesting, but is it worth keeping a PHP application and a MySQL server running, and normalising surveillance? Probably not. I hope other people find my blog posts interesting, but I'm usually writing them for me as much as for anyone else.

The other thing about tracking scripts is that they add a bunch of useless bloat to every page. I'm still using two JavaScript libraries (see below) and was slightly worried about this. But the entire minified momentjs library is almost identical in size to my single Matomo tracking script! Removing analytics allowed me to add momentjs basically for 'free'.

The final thing I did to remove a tracking vector I'd inadvertently added to my Ghost site was to strip out all the script links from embedded tweets. When you 'embed' a tweet, you get some HTML like this:

<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Has there ever been a better reason to be divorced by your husband? A gem from the archives, December 1938 <a href=""></a></p>&mdash; Tom D C Roberts (@TomDCRoberts) <a href="">October 10, 2018</a></blockquote>
<script async src="" charset="utf-8"></script>

Do you see that down the bottom? <script async src="" charset="utf-8"></script>

That's calling a JavaScript file from Twitter's servers. I have no idea what's in it, or when it might change: and neither do you. But I can't imagine it doesn't have a tracking system built into it. This is one of several reasons I removed 'Tweet this' and 'Post to Facebook' buttons a few years ago, but I somehow forgot that Twitter embeds do this too. I added a little function to my post-extraction script to delete any of these Twitter scripts it found. Conveniently, the HTML without that script works just fine as a blockquote - just a pity they don't use a <cite> tag.
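The clean-up itself is tiny. This is a simplified sketch of the idea rather than my actual extraction code, assuming each post's HTML is available as a string (Twitter's embed script lives at platform.twitter.com):

```javascript
// Remove any <script> tag that loads Twitter's widgets script from post HTML.
// A regex is good enough here because the embed markup Twitter generates
// is predictable; what's left behind is a plain, perfectly usable blockquote.
function stripTwitterScripts(html) {
  return html.replace(
    /<script[^>]*src="[^"]*platform\.twitter\.com[^"]*"[^>]*>\s*<\/script>\s*/g,
    ''
  );
}
```

Run over every extracted post, this leaves the quoted tweet text intact while removing the call to Twitter's servers.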

Lastly, I removed the biggest source of bloat: images. This brings me back to my third requirement, which you may have noticed I haven't yet addressed: meta tags for open graph and twitter images. But first things first. Before migrating, I did a speed test of my homepage using Pingdom Tools. My webserver is in Singapore (the nearest point to Australia in the Digital Ocean empire), so there's inevitably going to be a bit of a lag loading pages from here, but my Ghost site was still pretty slow. From Sydney it was calculated to take 3.5 seconds to load, making 37 requests and pulling down a horrendous 3.3MB! The vast majority of that data was images. I've waxed and waned a bit with post images: they're annoying to source, and I'm a fairly text-based thinker, so I don't find images a particularly useful addition to most blog posts - particularly if it's just a header image. On the other hand, they definitely do help to catch my eye when I'm scrolling through social media. What I really wanted was a system that created images that only show up on social media. It turns out there's a way to do that.

I'll save the technical details for a future post, but suffice to say that you can put a reference to an image in a meta tag regardless of whether it's actually displayed on the page. That means you can do this:

<meta name="twitter:image" property="og:image" content="">

The image will appear in the little cards that Twitter and Facebook create when you post a link, but the link in content doesn't need to appear anywhere else on the page.
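In the Eleventy layout this can be made conditional, so posts without an image simply don't get the tag at all. A sketch in Nunjucks (the `image` front matter field name is my own convention):

```html
{% if image %}
<meta name="twitter:image" property="og:image" content="{{ image }}">
{% endif %}
```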

What's still in

I haven't quite shrunk my blog down to just HTML and CSS. There are two JavaScript libraries, and two additional scripts I'm still using - but this is all in the service of user ease rather than making it flashy.

Rubik web font

I've relied on the system font Tahoma for base text, but to make things slightly more interesting I'm using 'Rubik' for headers. This adds a small amount of extra page load time.
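The CSS side is just a font stack with fallbacks, so nothing breaks if the webfont fails to load (the selector here is illustrative):

```css
/* Headings use Rubik if it loads, otherwise fall back to system fonts */
h1, h2, h3 {
  font-family: 'Rubik', Tahoma, sans-serif;
}
```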


Bigfoot

Bigfoot is an amazing jQuery plugin that I use for footnotes. The reason I love it so much is that it is so considered and well thought out. It allows footnotes to function in a 'web way': instead of making you scroll to the bottom of the page to read a footnote, Bigfoot replaces the footnote number with a little ellipsis, and shows the footnote text in a pop-over when you click or tap on it. The really smart thing, however, is that if you print the page out, Bigfoot basically just switches itself off and the footnotes work the way that is useful in hardcopy.
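Wiring it up is a one-liner once jQuery and the plugin are loaded (the file paths below are placeholders):

```html
<script src="/js/jquery.min.js"></script>
<script src="/js/bigfoot.min.js"></script>
<script>
  // Activate Bigfoot's pop-over footnotes on this page
  $.bigfoot();
</script>
```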


Relative dates

I wrote a teeny little script to change all the publication dates to relative times (e.g. 'two months ago'). Initially I thought I could do this when pre-processing the page, but then I thought about it for five more seconds and realised that was possibly the dumbest idea I've ever had: it would show the time relative to whenever the page was processed, not to when you were reading it! The script doesn't have to be big because I'm using momentjs.
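In my script momentjs does the heavy lifting via moment(isoDate).fromNow(), but the underlying idea is easy to sketch without any library (this dependency-free approximation is for illustration only):

```javascript
// Sketch of relative dates (my real script uses momentjs instead).
// Coarse 30-day months are fine for display purposes like 'two months ago'.
function timeAgo(isoDate, now = new Date()) {
  const days = Math.floor((now - new Date(isoDate)) / 86400000);
  if (days < 1) return 'today';
  if (days < 30) return days === 1 ? '1 day ago' : days + ' days ago';
  const months = Math.floor(days / 30);
  if (months < 12) return months === 1 ? '1 month ago' : months + ' months ago';
  const years = Math.floor(months / 12);
  return years === 1 ? '1 year ago' : years + ' years ago';
}
```

Crucially this runs in your browser at reading time, which is exactly why it can't be done at build time.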


momentjs

momentjs is a JavaScript library that allows you to manipulate the display and creation of dates. Whilst JavaScript has a built-in Date() constructor, dates are notoriously tricky once you start to consider timezones, Daylight Saving Time and other weirdness. I also used momentjs to make sure my permalinks showed the correct date, as I described in my last post.


jQuery

I really wish I didn't have to include jQuery, but Bigfoot is just too awesome, and it relies on jQuery. At some point in the future I may attempt to recreate the functionality I like about Bigfoot without resorting to jQuery - JavaScript has come a long way in the last few years, and allows some things that simply weren't possible when jQuery was created.

The nice thing about all the JavaScript is that if you turn JavaScript off, everything still works just fine. The only difference is that post dates will appear in a slightly unwieldy (but still understandable) ISO format, and footnotes will behave like standard HTML footnotes.

So what's the upshot for you, the reader? You get:

  1. Enhanced reader privacy, with every tracker I could think of removed from all pages
  2. A much faster page load
  3. Significantly less data to download
  4. Glorious Brutalist web design

Using the Pingdom Tools test I used on the old site, the homepage now makes only 9 requests (all local i.e. requesting files from the same server), takes 1.24 seconds to load, and loads just 176kB. Nice.
