Degox – Taking (Back?) my Internet Privacy and Presence

For as long as I’ve used email, someone else has provided it to me for “free”. Detoxing in January is a staple, so why not extend that to weaning off Google and call it a degox.

This was supposed to be a short post about my experience migrating to Fastmail. However, it went sideways and turned into an essay. I’ve knitted together various experiences and topics from over the decades. I’ve come to the slow realization that this choice, to take back my Internet Privacy and Presence, isn’t a new-year fad to be dropped unceremoniously by February.


A long time ago, in the late 20th century, I signed up to an email service called Hotmail. If memory serves, this was post Microsoft acquisition, so it wasn’t the cool spelling of HoTMaiL (HTML in cool caps) but it did a decent job. Courtesy of Wikipedia, this piece on founder Sabeer Bhatia states that Hotmail’s launch date was July 4, 1996. This is no mere coincidence; the date appears to have been chosen to signify independence and freedom of email. Some online sources claim this was in particular regard to ISPs’ grip over email, but I’m not able to find credible sources on that. What I did find was the Internet Archive snapshots of Hotmail’s site from 1997, which I feel paint an interesting portrait of privacy, platforms, presence and personalization.

From the About page:

Hotmail has created technology that integrates the core functionality of text-based Email messaging with the multimedia and global access capabilities of the World Wide Web. In doing so Hotmail allows the user of its technology to work in one seamless environment both for surfing the Web and communicating via Email. This seamless environment enables disparate platforms and computer systems to communicate with each other.

Sending and receiving Email using Hotmail is as easy as browsing to the Hotmail Web site, registering, logging in and sending an Email message. By using the Web browser as a ubiquitous Email client, Hotmail brings your personal information i.e. Email to you in a globally retrievable form. Because Hotmail has no service fee, requires no software installation, and has features and functions that surpass many traditional Email packages, Hotmail believes that their Web site will be the most frequented one in cyberspace.

Hotmail is completely supported by personalized advertising. The only requirement to have a Hotmail account is the completion of a brief questionnaire. This information is Hotmail’s key to personalized advertising. Personalized advertising will make each user’s visit to Hotmail a unique experience and will allow advertisers to target specific audiences.

About page, Hotmail, 1997

So from day 1 of the Hotmail business model, and day 1 of my public online presence, personal data was being consumed in the name of advertising.

Email delivers people

In February 1999, Microsoft declared that Hotmail had grown from 0 to 30 million active members in 30 months:

Indeed, by adding 20 million members since the beginning of 1998, Hotmail tripled its size in less than one year.

MSN Hotmail: From Zero to 30 Million Members in 30 Months, Microsoft

Pay attention to that number 20 million.

When it comes to “free” services, a much quoted phrase on the Internet can be attributed to blue_beetle’s MetaFilter post in 2010:

If you are not paying for it, you’re not the customer; you’re the product being sold.

User-driven discontent, MetaFilter, 2010

However, on Quora of all places, Brent Thomson notes an earlier usage, from an interview about the 1973 film Television Delivers People. It is short; the whole thing is on YouTube if you’d like to view it yourself. To save you some time, here are some amusing screenshots:

Mass media means that a medium can deliver masses of people.

Commercial television delivers 20 million people a minute.
It is the consumer who is consumed.

You are the product of t.v.

You are delivered to the advertiser, who is the consumer.
Every dollar spent by the television industry in physical equipment needed to send a message to you is matched by forty dollars spent by you to receive it.

I find that last quote on costs particularly prescient in the context of this post. Most everything has a cost. This is not novel or revelatory. Yet, many baulk at the idea of paying for some services that have “always been free”.

Kagi markets itself as a premium ad-free search engine. It has a whole page dedicated to answering the question “Why pay for search”. They quote the appendix of Sergey Brin and Larry Page’s oft-cited paper The Anatomy of a Large-Scale Hypertextual Web Search Engine. Go read that page, since it’s good. However, I’ll borrow the last bit and change the emphasised bolded parts.

…we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

The Anatomy of a Large-Scale Hypertextual Web Search Engine, Sergey Brin and Lawrence Page, 1998

The Internet is for End Users

In 2020, the Internet Architecture Board (IAB) published RFC 8890 – The Internet is for End Users. The author, Mark Nottingham (mnot), gives an insightful look at the motivating factors and more in a blog post.

IETF (Internet Engineering Task Force) documents by their nature tend to be technical, e.g. describing how an Internet protocol works. The IAB position encourages thinking about how technology affects end users. Human users. RFC 8890 isn’t a hammer for every Internet nail. As mnot puts it:

First of all, a reality check. IETF decisions are only about documents; they don’t control the Internet. Other parties like equipment and software vendors, end users, platform and network operators and ultimately governments have a lot more say in what actually happens on the Internet from day to day.

RFC8890: The Internet is for End Users, Mark Nottingham, 2020

In my own reasoning, I like to keep RFC 8890 as a benchmark. Not to declare a pass or fail, but to understand where on the field something sits.

During the IETF 98 meeting in 2017, Niels ten Oever and David Clark discussed “Can Internet Protocols Affect Human Rights?”. The whole discussion is up on YouTube. The thing that stuck most in my mind was the “tussle” that David talks about. I admit to never reading David et al.’s 2002 paper “Tussle in Cyberspace: Defining Tomorrow’s Internet”. I’ve only realised while writing this post that RFC 8890 cites the paper – a convenient coincidence. In the context of this post, the abstract immediately sets things off with a bang:

different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. We call this process “the tussle”.

Tussle in Cyberspace: Defining Tomorrow’s Internet, David Clark, John Wroclawski, Karen Sollins, and Robert Braden, 2002

Most everyone has a stake. This is not novel or revelatory. Users, providers, consumers, advertisers, 1973, 2020. Like fashion, Greek tragedy or Madonna: constant reinvention, yet the same old same old.

The Cost of Business

Kagi’s “Why Pay for search” page does some maths about Google’s revenues:

To estimate the revenue per user, we can divide the 2023 US ad revenue by the 2023 number of users: $76 billion / 274 million = $277 revenue per user in the US or $23 USD per month, on average! That means there is someone, somewhere, a third party and a complete stranger, an advertiser, paying $23 per month for your searches.

Why Pay for Search, Kagi, 2024

I can’t vouch for these numbers, nor do I want to. But I wonder how close Google gets to Television Delivers People’s 1:40 ratio of spend versus income.
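Kagi’s arithmetic itself is easy to sanity-check from the shell (the $76 billion and 274 million figures are theirs, not mine):

```shell
# Kagi's sums: 2023 US ad revenue divided by 2023 US users,
# then divided by 12 for a monthly figure.
awk 'BEGIN { printf "%.2f\n", 76e9 / 274e6 }'       # dollars per user, per year
awk 'BEGIN { printf "%.2f\n", 76e9 / 274e6 / 12 }'  # dollars per user, per month
```

Which gives roughly $277 a year, or $23 a month, matching their quote.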

As an odd juxtaposition, in 2020 the UK communications regulator Ofcom investigated UK ISPs over charging for email services. Some people who came online in the same period I signed up for Hotmail simply used the ISP-provided address. Home Internet access is a hotbed of competition in the UK, with different providers offering attractive deals to grab market share.

Switching ISP is pretty easy, but what about the old email address? Migrating can be a scary, painful, and arduous task that few people in this segment want to do. BBC consumer show Money Box investigated complaints from the public about ISPs charging to maintain access to email services. The costs (if any) and the scope of email services (if any) vary across ISPs. BT, for example, claims that they’ll keep the email service running for free, but quietly degrades it to a web-based service only. The BBC spoke with an Iain Stuart from Cardiff (hometown FTW) on the matter:

That free, web browser-only service is also not something that appeals to Mr Stuart.

“I just want to be able to use my email account as I have done for years and years.”

Email charges: ‘They’ve got you over a barrel’, BBC, 2020

For Iain to be able to just carry on with his normal way of doing things, BT charge £7.50 a month for a “Premium email” service. I shit you not, this is how BT themselves bluntly describe the service:

The Premium email product gives you exactly the same functionality as the Standard email product you get with broadband from BT, but you have to pay for it.

What is Premium email and how do I get it?, BT, 2023

And if you look at the small print in “Who can have it?”, WTF:

Customers who live in the UK with a UK bank account. Although we provide the ability to use the service outside of the UK, you must not use it solely for this purpose. If you now live permanently overseas you must no longer use the service.

What is Premium email and how do I get it?, BT, 2023

And BT ain’t the worst of the bunch. If you were unfortunate enough to trust in Virgin Media but want to switch, you get 90 days before they completely delete the account.

As a result of their investigation, Ofcom told Money Box:

We’re concerned about any industry practices that could put people off from switching, so we’ve been gathering more information from providers about their different approaches to this issue.

We have concerns that some customers are not being treated fairly, and will be raising these with providers.

Email charges: ‘They’ve got you over a barrel’, BBC, 2020

I find this a very astute observation. There’s a cost to consumers to switch, whether they pay for it in cold hard cash, or in the time and stress of manually migrating. None of that seems fair to the end user. Ofcom provides some guidance on the matter of switching. Unfortunately it recommends “free” services while overlooking the implicit cost of becoming the product.

There’s precedent in the UK for making switching easier and fairer. Just over 10 years ago, the UK government introduced the Current Account Switch Service. It promised to make it easy to switch everything across: taking 7 days to complete, redirecting from the old account for 13 months, and guaranteeing against financial losses in the case of problems. It isn’t perfect but it’s something. To pick a point out of the 2013 launch announcement:

Low levels of switching in the UK current account market are a major barrier to competition between banks and better services for consumers.

Traditionally, people get an account and then they stick with it, as changing your account is seen as complicated, time-consuming, and risks getting you into real difficulty if it goes wrong.

75% of personal current account holders have never switched, with nearly 1 in 5 saying this was because of the hassle and potential risks.

Bank account switching service set to launch, UK Government, 2013

There are technical measures that can make email migration less painful. However, in my experience they tend to be best effort and vendor-driven. And beyond the skills or patience of the population at large. I don’t know if there are interoperable standards that could make things smoother. But I do know that the bank account switching example shows that a mandate for interop, with non-technical measures and guarantees, can go a long way.

More recently, the UK’s Competition and Markets Authority (CMA) and Ofcom have been taking a gander at competition in cloud services. It’s a different sector to email and ISPs but this all sounds quite familiar:

The features which Ofcom is most concerned about are:

  • Egress fees – charges that cloud customers must pay to move their data out of the cloud
  • Discounts – which may incentivise customers to use only one cloud provider
  • Technical barriers to switching – which may prevent customers from being able to switch between different clouds or use more than one provider

CMA launches market investigation into cloud services, UK Government, 2023

Migrating my Email

From day 1 of my email presence, my data was being consumed in the name of advertising. I didn’t stay using Hotmail (or one of the 25 different things Microsoft has called it) in the intervening years.

In the early 2000s I was fed up with Hotmail’s measly megabytes of storage, requiring me to waste time deleting stuff. In 2004, Google released gmail with a whole gigabyte (!) of storage (later evolving into infinity plus one, see below). Somehow I managed to snag an address quite early. Definitely before the whole UK trademark problem that started in October 2005 and made people register a address until 2009.

My migration from hotmail to gmail back in 2004 was just a hard cut. I don’t recall trying to do anything fancy with forwarding etc. The hotmail address was a honeypot for spam and making a clean break was a nice upside.

When I first started to engage with Internet standards and public mailing lists, I naively used my employer-provided address. When I changed jobs, I had to give that up (aside: maybe the BBC could borrow BT’s business model and charge ex-employees £7.50/mo to keep email and Redux access 🙂 ). While IETF lists are publicly archived, it was a real drag to lose email inbox search features to quickly find some relevant past dialogue. It also meant that the mistake I made when attempting to unsubscribe from the QUIC mailing list is forever recorded.

On the topic of standards, I’ve been helping out with privacy-preserving technologies where I can. There are entities out there attempting to build services or businesses on the back of privacy-first designs. For instance, in the MASQUE Working Group, we developed the standard for transporting unreliable messages by extending HTTP, allowing for UDP and QUIC tunneling, which went on to be deployed in novel ways.

Components involved in a nested tunneling setup. From left-to-right: Client, Proxy 1, Proxy 2 and Server.

As 2023 drew to a close, it seemed time to pull these disparate strands of thought and feeling together and stick some money in the pockets of the businesses that want to put privacy first.

Requirements for gmail alternative

When considering migrating to a gmail alternative, the key goals for me were:

  • Good browser and app-based access
    • I do a lot of open standards and open source work via a range of devices. The experience has to be fast and consistent across them all. Bonus points for emulating the parts of gmail design that I enjoy.
  • Address(es) at my own domain but no self-hosting
    • This needs to be a managed service that will just work when it comes to “random email interop crap” that I don’t have the mind to care about.
  • gmail import and transition
    • Because of gmail’s infinity plus one design I don’t delete any email. I also just leave them all in my inbox – I call this the spring-loaded plate warmer and dispenser approach. Furthermore, I tend to determine if I need to read an email or not from a notification, so thousands of emails will forever remain unread. All this needs to come across.
    • Allow me to continue using both old and new addresses while the transition happens. Auto synchronization from old to new ideally.
  • Privacy and security
    • Don’t snoop on my emails to sell my data. Provide table-stakes security features like multi-factor auth (MFA).

These requirements don’t strike me as that niche. I’m sure there are a few email providers that could satisfy them. I don’t need PGP-encrypted bla bla – most of the stuff is just echoing what’s publicly on the IETF archive or GitHub anyway. I was also willing to pay some dough if the process was easy and successful.

Migrating to Fastmail

Over the last few years, as part of IETF participation, I’ve had the pleasure to loosely interact with Bron Gondwana, Fastmail CEO. Email isn’t really in my direct circle of interest but there’s enough overlap of various technologies and interests that Bron and I bump into each other now and then. He’s never done a hard sell of Fastmail to me but I’ve been intrigued by the focus on privacy and commitment to open standards and wanted to see what’s what. (further disclaimer: I’ve never done any sell of any Cloudflare service in return, mostly because DDoS is handled by other folks).

Putting me off was the fear, uncertainty, and doubt of migrating an address I’ve had for about 20 years. Despite the reassurances, and despite being technical, it’s scary. After a year or so of procrastination, I finally decided to take the plunge and here are my experience notes.

Hit up; it offers a free one-month trial, which feels long enough to figure out if it will satisfy the requirements. It asks for some credentials to create an account; I enter my details using the domain of this website, click OK, and I’m straight into a UI that feels like gmail.

There’s a nice prompt dialog at the top to remind me of things I need to do to complete the setup. It states “Your email won’t work until you update settings at Cloudflare”. Whoa, automagic. This website is fronted by Cloudflare, so it’s a nice touch that Fastmail detects that. A minor nit though: it points me at the Cloudflare dashboard homepage, whereas it would be speedier to link straight to the DNS settings for my zone. The settings that Fastmail needs me to change are DNS records.

First, some MX records. I already had two MX records, which I think were dragged across when I migrated from terrible shared hosting to a VPS. I deleted those and manually added the ones Fastmail stated.

Second, some CNAME records. The CNAME records point to something to do with DKIM (DomainKeys Identified Mail), which I know nothing about. And I’m glad I don’t need to know. Just set and forget; Fastmail seems to have this covered. The guidance said “make sure to click the orange cloud icon so it turns grey”. The reason for this is to control whether the name ultimately resolves to a Cloudflare IP address (orange cloud) or a non-Cloudflare one (grey cloud). Minor nit: I think the Cloudflare UI changed and the text is stale. Now, the user controls orange or grey using a toggle button instead of the cloud directly.

Third and final, a TXT record for SPF (Sender Policy Framework). More stuff I don’t need to worry about. I again had an old record with values from the shared hosting, so I deleted it and added a new one.

After all that, there was a “Check Now” button that went off and did some checks, and everything seemed to pass. Lovely jubbly. The process is straightforward and took a few minutes, but it is a bit manual and repetitive. I did appreciate that Fastmail offered to save my progress at each stage so I could come back later. I didn’t have to worry about getting everything done in one sitting amid distractions. It would have been really nice if there were more integration or automation to set these DNS records – requiring a human to copy/paste stuff is error prone. Cloudflare offers an API to control DNS records, although that might hit some permissions issues. Alternatively, Fastmail could provide a BIND zone file download and allow the user to do a manual bulk import of the required records.
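For reference, the three sets of records could be expressed as a BIND-style zone fragment, something like the sketch below. The hostnames and values here are the ones Fastmail typically shows during setup – treat them as assumptions and copy the exact values from your own account’s settings page:

```zone
$ORIGIN  ; substitute your own domain

; MX records: route the domain's mail to Fastmail
@                 3600  IN  MX     10
@                 3600  IN  MX     20

; DKIM: three CNAMEs delegating signing keys to Fastmail
fm1._domainkey    3600  IN  CNAME
fm2._domainkey    3600  IN  CNAME
fm3._domainkey    3600  IN  CNAME

; SPF: authorise Fastmail's servers to send for the domain
@                 3600  IN  TXT    "v=spf1 ?all"
```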

Back to my inbox, and the dialog at the top of the page has crossed out the setup task. Nice! I send a test email from a different account to the new one and it appears in my new inbox. I try the reverse direction (Fastmail to gmail). I’m prompted to verify my account via an SMS message, with the explanation that this is to defeat email abuse. Fair dinkum. I send an email and then respond to it with an emoji reaction:

During the process, I noticed that Fastmail didn’t have a nice little profile pic avatar. The way to get this sorted out was good old gravatar; the Fastmail instructions were easy to find. Although maybe it would have been nice to stick this in the task list, because I’d bet a lot of people overlook it.

Now to import the gmail stuff. In the settings, there’s a “Migration” panel option, and clicking the “Gmail / Google Workspace” option takes you to the gmail import page. It says “Make sure IMAP is turned on in your Gmail account”. Oh goodness, I hated all the POP vs. IMAP stuff back in the day; and in the meantime JMAP’s been added to the mix.

Clicking the Learn more link takes you to a Google page titled “Add Gmail to another email client”. And confusingly, on that page it says:

Gmail users with a personal Google Account

In the coming weeks, the option to “Enable IMAP” or “Disable IMAP” will no longer be available. IMAP access is always enabled in Gmail, and your current connections to other email clients aren’t affected. You don’t need to take any action.

Add Gmail to another email client, Google, 2023

The Google page doesn’t have a publish date, so the phrase “in the coming weeks” is useless. I closed the tab and just smashed the “Sign in with Google” button. I went through a few prompts about giving Fastmail access to my gmail account and, once done, it declared that I could import ~151,000 emails, 5 contacts, and 2 calendars. I’m pretty sure I selected all 3 options and started the import.

Understandably, copying 151 thousand emails could take a while. The Fastmail UI states as much. Within the hour the contacts had finished, but there was no sign of progress on the email import. Reading some Fastmail help pages, it was again reiterated that it might take a while and that I should not start another import, lest I end up with duplicates. So I went off to play Powerwash Simulator for a couple of days.

After 3 days, there was no update. My guess was either some issue in the service-to-service calls, or that I’d screwed something up (a PEBKAC error). I really didn’t want to end up copying 300 thousand emails across, so I opened a support ticket. Within a couple of hours, I got a nice reply from someone named Rowena, who checked my account and gave me some tips. I tried the import process again and got a positive progress indicator. It took just under 1 hour to pull over everything, which was fab. I can now see all 151 thousand emails, and my inbox reports 27 thousand of them are unread. Sorry, not sorry, inbox zeroers. As Wired put it:

Forget everything you think you know about inbox zero: it’s completely and utterly wrong.

Everything you thought you knew about inbox zero is wrong, Wired, 2020

I also enabled “keep fetching new mail” so that things will automatically stay in sync between gmail and Fastmail while I slowly manually transition email addresses over.

My initial impression is that all requirements have been met. The setup and import were fairly straightforward. Apps are snappy and close enough to what I’m used to. Customer service’s prompt reply was awesome. Some minor teething issues and stale documents, but that’s to be expected.

Something is stressing me out, however. The Fastmail tab puts the inbox unread count as the first element in the title string, and that number is effectively meaningless to me. Since I’m not used to the Fastmail logo quite yet, trying to find the inbox tab is currently hard. In contrast, gmail uses an “Inbox (count)” pattern. It would be nice if there were a way to customise the page title string, but I couldn’t find anything.

Wrapping up

As I said at the start, this was meant to be a short post about my experience “degoxing” by migrating away from gmail. It instead turned into an essay about technical and political aspects of services beyond email, beyond Internet, and back as far as the 70s.

On one hand, gaining control of something you’ve given away is taking it back. On the other, if you never had it in the first place is it strictly getting it back? I’m not a lawyer but I seem to recall some stories about not being able to sign away fundamental rights, even if a piece of paper tries to claim otherwise.

Coupling Television Delivers People with the insipidness of ISPs charging people to maintain access to their old email… I’m reminded of the time I got to see Bryan Cranston on stage in 2018’s Network. A groundswell of distaste from end users led to Ofcom starting an investigation. And it doesn’t seem like an isolated incident. Maybe there’s something in that. I don’t have all the answers. There’s tension and tussle between paying for stuff one way or another. So I’ll close with the words of Howard Beale:

I want you to get mad! I don’t want you to protest. I don’t want you to riot – I don’t want you to write to your congressman because I wouldn’t know what to tell you to write. I don’t know what to do about the depression and the inflation and the Russians and the crime in the street. All I know is that first you’ve got to get mad. You’ve got to say, ‘I’m a HUMAN BEING, God damn it! My life has VALUE!’ So I want you to get up now. I want all of you to get up out of your chairs. I want you to get up right now and go to the window. Open it, and stick your head out, and yell, ‘I’M AS MAD AS HELL, AND I’M NOT GOING TO TAKE THIS ANYMORE!’ I want you to get up right now, sit up, go to your windows, open them and stick your head out and yell – ‘I’m as mad as hell and I’m not going to take this anymore!’ Things have got to change. But first, you’ve gotta get mad!… You’ve got to say, ‘I’m as mad as hell, and I’m not going to take this anymore!’ Then we’ll figure out what to do about the depression and the inflation and the …

… surveillance capitalism crisis.

2 Meg in the Embed, and the little tweet said “Give over”

As the embedded tweet says, I was shocked to discover that a single Twitter embed – inserted into a WordPress blog using the native embed feature – more than doubled the download size of my site, caused by dozens of Twitter JavaScript resources, all fetched using HTTP/1.1. For a tweet that contains just text from me (!), it is unpalatable that so much – well, crap – is pulled in to render it. I’ve come up with a solution now, but before that, I sent the above tweet in the hope that someone might have a magic fix. My plan sort of worked: it generated a lot of replies from people more knowledgeable about web development than me. The discussion was very interesting; I recommend you read it. But if you don’t have the time for that, my summary is:

  1. This is not a “new” problem.
  2. Lots of people assume my website is built on some kind of Server Side Rendering paradigm where I have a build step before deploying static assets.
  3. There are lots of “clever” techniques that can be applied during the build step in order to avoid the high cost of the Twitter embed code. The meaning of clever in this context seems to vary person-by-person.

Points 2) and 3) were quite eye opening for me. I like to play “dummies advocate” with technology and think about how someone on a managed blog platform might respond to such advice. Could they realistically change their entire CMS over from a nice WYSIWYG editor to a bunch of technologies with unfamiliar names that need to be installed and managed via continuous integration? While something like GitHub Pages is nice, elegant and simple for techies like me that use GitHub and Markdown on a daily basis, it doesn’t fly with the family and friends I play tech support for. Imagine how hard it is to simply explain what a repository or a pull request is to someone who has never heard of source control. Some might argue that dynamic sites are too slow in this day and age, but read the Background section for why I don’t agree.

I tried to express my point in the Twitter discussion and, to be fair, some pragmatic answers came out. For instance, one suggestion was to just take a screenshot of the tweet and add a hyperlink to Twitter. That actually seems pretty fine, except it would still gall me to encode short text in a bloated image. And it would probably be more work than I could be bothered to do. I hope Twitter might consider offering a “light” embed but I won’t hold my breath.

Towards the end of the replies I got this one from Steren:

This seemed more up my street. Perhaps I could just manually copy the text out of tweets and stick it in an HTML block in my WordPress editor? Manual work sucks though; surely someone has a script for this, right? I Googled for a bit, and started reading into the deep corners of Twitter APIs, backlash over past changes, and the annoyance of the requirements for OAuth access to do things. That last point in particular seems to affect solutions to the lighter-weight embed challenge: you have to get an API key, and I really can’t be bothered with another thing to worry about.


During my search, I found Arthur Rump’s page Fallback styling for embedded Tweets. This was awesome because it explains that:

Embedding a Tweet on your website is easy to do. Find the tweet, click on Embed Tweet, copy, paste, done.

Note how all the actual content of the Tweet is just text in a blockquote. That’s great because if the script does not load, the content you wanted to share is still there and readable. This could happen in situations where Twitter is entirely blocked (whether by a company or a nation-state), the user has JavaScript disabled, or because the script is blocked by Content Blocking in Firefox. However, this means that Tweets will be rendered as blockquotes by default.

Arthur Rump

Sure as dammit, after the </blockquote> is <script async src="">. And in my case, I explicitly want to block this thing that loads all the crap! And I want to do this so that users don’t have to.

So my solution that removes most of the manual steps is a one-liner:

$ curl -s "[url of tweet]" \
  | jq -r '.html' \
  | sed 's/class="twitter-tweet"/class="twitter-tweet" data-dnt="true"/' \
  | sed 's/<script.*<\/script>//g' \
  | tr -d '\n'

Or wrapped up in a simple bash script:

#!/bin/bash
# Fetch a tweet's oEmbed markup, turn on Do Not Track, strip the
# widgets.js <script> tag, and flatten to a single line for pasting.
curl -s "$1" \
  | jq -r '.html' \
  | sed 's/class="twitter-tweet"/class="twitter-tweet" data-dnt="true"/' \
  | sed 's/<script.*<\/script>//g' \
  | tr -d '\n'

For example, to grab the tweet that I embedded at the top of this page:

$ ./

<blockquote class="twitter-tweet" data-dnt="true"><p lang="en" dir="ltr">Due to a single embedded tweet, it seems my website at <a href=""></a> loads about 2.5 megabytes of JS across a dozen HTTP/1.1 requests to <a href=""></a>. WTF, this doubles the total size of download resources. I think I&#39;ll just delete the embed.</p>&mdash; Lucas Pardue (@SimmerVigor) <a href="">January 2, 2021</a></blockquote>

Then, in WordPress embed the tweet by adding a Custom HTML block and pasting the blockquote. As Arthur points out, by default it will look quite plain but it can be fixed using CSS. So I just borrowed Arthur’s and added it as a global custom CSS change. Here’s what it looks like in the editor:

Screenshot of WordPress WYSIWYG editor with HTML block.

Performance Results

So did my work change anything? Here’s a before/after example for the main homepage. I think this is a success even if the embedded tweet is functionally and stylistically basic. It’s better than just deleting the embed…

With crap: ~50 requests, 3.2 MB transferred, 4.4 MB resources.

Sans crap: 31 requests, 453 kB transferred, 956 kB resources.

Note that because of lazy loading, scripts and images don’t appear to have a huge effect on some of the critical events in either case. But it sure doesn’t offend me to see that, due to Cloudflare’s speed features (see Background), DOMContentLoaded and Load times are under 200ms, with Lighthouse reporting a Performance score of 96 based on First Contentful Paint 0.4s, Time to Interactive 0.4s, and Largest Contentful Paint 0.5s. I lost points on Cumulative Layout Shift, but I can fix that some other day.
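If you want a quick-and-dirty number of your own, curl can report the transfer size of the HTML document itself. This only measures the document, not the subresources that the embed pulls in, so it complements rather than replaces the Dev Tools or WebPageTest figures above. The URL is a placeholder:

```shell
# Rough page-weight check: transfer size of just the HTML document
# (subresources like scripts and images are not counted).
# "" is a placeholder -- substitute your own site.
curl -so /dev/null -w 'HTML document: %{size_download} bytes\n' ''
```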

Page load with Twitter embed code. 50 requests, 3.2 MB transferred, 4.4 MB resources.
Page load with no crap. 31 requests, 453 kB transferred, 956 kB resources.


My annual end-of-year tradition is to log in to my blog and fiddle about. I make grand promises that I’ll do more blogging each year and typically fail. But I also take the opportunity to make some technical change or improvement. This time around I intended to give Cloudflare’s Automatic Platform Optimizer (APO) a spin to see how much it could improve my WordPress-powered blog. (Disclaimer: I am a Cloudflare employee but I don’t work on this product. My clever colleagues do, and I was keen to see just how turnkey this solution was and what its impact on a typical blog, maintained by a lazy owner such as myself, would be.)

The gist of APO is that it makes your slow, dynamically rendered WordPress site super fast by caching everything at Cloudflare’s edge. It does this via a WordPress plugin that magically monitors the site and coordinates with Cloudflare to rapidly purge and cache whenever there are changes. Yevgen and Sven’s blog post goes into some great detail on the matter.
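I can’t speak for the product internals, but a quick way to sanity-check whether a page is being served from the edge cache is to look at Cloudflare’s cf-cache-status response header (the URL below is illustrative, not my real site):

```shell
# Quick check (not an official APO diagnostic): fetch only the response
# headers and look at Cloudflare's cache status. A cf-cache-status of HIT
# means the HTML itself was served from the edge cache.
curl -sI "" \
  | grep -i '^cf-cache-status' \
  || echo "no cf-cache-status header (page not served via Cloudflare?)"
```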

My speciality is the network protocols side of things. So whenever I’m investigating a website, I’ll fire up the Dev Tools Network panel and start looking at what requests are happening, where they’re going, how they’re performing, etc. For deep inspection I’ll also head over to WebPageTest, but for quick tests the local Dev Tools normally give some good indications. The network trace for my site with Twitter embed code was shocking. I wasn’t sure at first where it came from and had to hunt through the blog posts to find it. The embed is a minor decoration and the blog would not have lost anything if I had simply removed it. However, I’m happy with the solution I came to, especially because it integrates with my existing workflow. I now benefit from loading more content via APO and other speed features on Cloudflare’s edge.

Dicey Dungeons is a delightful duende

Dicey Dungeons is a roguelike where you roll dice in dungeons. The premise is a game show but it reminds me more of The Running Man than Wheel of Fortune.

Anthropomorphic die uses non-anthropomorphic die to attack charming Honey Monster in a space suit

This game has simple mechanics on the surface but steadily grows in complexity and difficulty; each successful run is met with a failed spin of the not-wheel-of-fortune wheel and the unlock of a new character or challenge. Each of these tweaks the core mechanics in a way that makes the next run unique and interesting.

Dicey Dungeons was originally a 7DRL game jam entry. I will admit that although I am a fan of Terry Cavanagh’s other games, I was not initially enamoured with that version.

Terry Cavanagh’s animation of the 7DRL entry shared on –

However, the released version of the game is just pure joy. The design is delectable – the characters are charming and the sound design is spot-on. The music track reminds me of 90s game shows such as Catchphrase, Strike it Lucky and the Krypton Factor. Cutscenes pepper the action just enough without getting in the way, the dialog is sharp and the wit is cutting.

Stereohead is the best character

Recommended? Yes

Investment required? Easy to learn, hard to master

Tips? Overconfidence has consequences. The Kraken is an arse.

2020 gaming in 200 words a week

This blog has been a little quiet in 2019. After a day’s work it can be hard to think of something worth blogging that isn’t tied to my working day, and without inspiration it becomes difficult to muster up the energy to write anything.

So I’ve decided to create a framework for blogging: a single post, once a week, that reviews a different game in a maximum of 200 words.

To kick things off I’ll start with Crusader Kings II.

Poor Princess Sofie, the end of her life was crap

Crusader Kings II is a grand strategy game where you rule over things. This could be a small county, a country, a kingdom or a continent-spanning empire.

Your character in this game is your dynasty. When your current avatar dies, control (and your lands) are passed to the heir. This generally upsets everyone. It becomes important to have an heir that rubs along well with others, so much of the game is spent planning, politicking, plotting and in promiscuity in the pursuit of perfection.

I’ve only played this as a family building the Kingdom of Wales, which grew to Ireland and Brittany but all ended in tears. The start of the downfall? The successful assassination of my wife via manure explosion. 

This game has such great depth and I sadly only ever scratched the surface: from scouting the planet to find the perfect council members and betrothed for my children, to putting up with domestic disputes.

My overarching impression was that Crusader Kings II is a mix between searching through electronics mail order catalogues trying to piece together some weird board and being in charge of the stories in Eastenders.

Recommended? Yes

Investment required? Lots

Tips? Don’t imprison your heirs

Web protocols ate my hosting

New Year, same old blog. A new style, some broken links and some fixed-broken links. Here’s to yet another WordPress-based blog! Now for some background.

An actual front page of a newspaper.

This website has been through a small number of hosts. It started off being hosted on a subdomain. I then migrated it over to a cheapo shared hosting solution, which worked pretty well for the low volume of traffic that it served. As we enter 2019 I am pleased to present the blog from a cheapo VPS, which is fronted by a free CDN.

Why change?

Coming into 2018 there were two aspects that prompted a reconsideration of my hosting:

  1. I started to take a deeper interest in running my own services over the web protocols I was spending so much time working on.
  2. The march of browsers towards treating http:// URLs as insecure, and in turn restricting powerful features (Web Platform APIs like Service Worker).

My old shared hosting was starting to feel restrictive in the face of these changes but I was in no rush to change things. Domains are cheap as chips, so I switched focus to a different project.

The domain was a fun name that was going cheap. For those not familiar with the QUIC protocol, it is a UDP-based, always-secure and multiplexed transport protocol undergoing standardisation in the IETF. It achieves multiplexing within a single QUIC connection by the use of logical streams.

Around the time I purchased the domain, the options for running a QUIC server that could speak to web browsers were pretty limited. The most straightforward way to get things working was to use Caddy server, a Go-based web server that made use of the excellent quic-go library. If you’re interested in trying it out the Wiki has some instructions that may, or may not, work for you.
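If memory serves, enabling it was a single experimental flag on the Caddy v1 binary, roughly like this (from memory; the config path is illustrative):

```shell
# Experimental (Google) QUIC support in Caddy v1 sat behind a flag.
caddy -quic -conf /etc/Caddyfile
```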

Word of warning: at the time was based on Google’s earlier QUIC specification. Google Chrome was the only browser that seemed to interop properly. Google continue to experiment and the interop gets broken pretty often. If you visit and try the connectivity test you are likely to find that QUIC fails, and there is no way for me to detect in JavaScript that this is because the browser outright clobbered it.

Simple-stupid security

Anyway, none of that matters much. What is of more interest is that QUIC’s “always secure” principle matches Caddy’s “Automatic HTTPS on by default” design philosophy. Caddy achieves this by means of Let’s Encrypt, and it does it all behind the scenes without making you waste your time on figuring anything out.

As a technical user I’ve gotten my head around the acronym soup of PKI, CSR, PEM, CRT. However, it all just becomes a PITA for something that is just supposed to be minimal effort or fun on the weekends. In contrast, Caddy and Let’s Encrypt made things so simple-stupid that I was able to do a live demo during a lecture to students at Lancaster University. This took the form of provisioning a new VPS instance (with Caddy pre-installed), creating a new DNS subdomain, and rolling a Hello World config. It took less than 10 minutes.

Tuning web security to the N-th degree

After creating a rough and ready site with Caddy that had great transport security, my attention turned to web security: CSP, CORS, HSTS, SRI, etc. I’d never really looked at this before and found it pretty tedious to get right. I appreciate the difficulties in securing the Web Platform in complex User Agents, but it sucks to have to rewrite simple inline button element script calls because of possible injections.

After much effort, scores an A+ on Mozilla’s HTTP Observatory tests. You can view the results at:

What this exercise taught me is that I benefited from having fine-grained control of the server behaviour: for example, explicitly controlling HTTP headers and using scripts on the server to generate SRI hash values.
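As a flavour of the latter, here is a sketch (not my actual script) of how an SRI value can be generated on the server with openssl; the asset content is inlined here so the example is self-contained:

```shell
#!/bin/bash
# Sketch of a server-side SRI helper: the Subresource Integrity value is
# the SHA-384 digest of the asset, base64-encoded, prefixed with the
# algorithm name. A real script would read the asset file instead of
# piping in a stand-in string.
sri_hash() {
  openssl dgst -sha384 -binary | openssl base64 -A
}

hash=$(printf 'console.log("hi");\n' | sri_hash)
echo "integrity=\"sha384-${hash}\""
```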

Shared hosting rubber gloves

After the success with one site, I took another look at this one. I wanted to secure it using free certificates from Let’s Encrypt, and I wanted to have more control over some of the lower level stuff.

The admin panel of the shared hosting felt like trying to scratch with rubber gloves on. Worse still, they wanted to nickel-and-dime me to pay for certificates.

Migrating the whole site to something else would require effort and time I didn’t have over the summer. So I took a different kind of quick, simple-stupid measure: I signed up for Cloudflare’s free CDN service.

This worked really easily and took about 10 minutes. I signed up, enabled 2FA, and followed the instructions to change my nameservers to Cloudflare’s. I got TLS 1.3 termination immediately, which was cool!

However, since my old shared hosting was insecure, I needed to enable Cloudflare’s Flexible SSL mode (see this explanation). In essence, although the connection between the User Agent and Cloudflare’s edge was secure, the connection to my shared hosting origin was insecure. There was no complete end-to-end security.

Now you might say that the site doesn’t handle much of importance but that doesn’t matter. For a long list of reasons why security is important regardless of content, check out Troy Hunt’s blog post Here’s Why Your Static Website Needs HTTPS.

Although the migration was smooth, I found some mixed-content warnings while using Flexible SSL. In my haste to fix this I got into some weird URL issue that ultimately meant Cloudflare couldn’t load any images from my origin. Rather than waste time pursuing this, I decided the long-term solution would be to migrate.

Rolling my own

So I finally found the time to take a look at rolling my own WordPress hosting. On first inspection, running a LAMP-like stack using Caddy seemed a bit daunting. And I want PHP for some other future project, so changing blogging software was out of the question.

I decided to go with a vanilla LAMP stack. And I was excited to use Apache HTTP Server because I’d heard a few things about the newish mod_md module. In this case, md means Managed Domains. The module provides the means for automatic Let’s Encrypt certificate management. The other bonus was that I’d shared a glass of wine with the author, Stefan Eissing, in the past. (Stefan also developed the mod_h2 module, which provides HTTP/2 support in Apache.)

Now, unfortunately I somehow got completely side tracked during the setup phase of all of this. Rather than using mod_md I ended up using certbot.

The reason why is that I was very excited about Let’s Encrypt wildcard certificates when they were announced in March 2018. The benefit of certificates with wildcarded subdomains is that I can reuse the same one across the various experiments I have in mind this year, without having to go through a Let’s Encrypt dance each time. This is especially helpful for other pieces of software that have no built-in automation capability.

Flicking through the Certbot documentation, I found that it states:

If you want to obtain a wildcard certificate using Let’s Encrypt’s new ACMEv2 server, you’ll also need to use one of Certbot’s DNS plugins.

Since I was a Cloudflare customer on this domain, I could use a Certbot plugin – certbot-dns-cloudflare.
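The invocation then looks roughly like this (a sketch based on the plugin’s documented flags rather than my exact command; the domain and credentials path are illustrative):

```shell
# Wildcard issuance via the Cloudflare DNS plugin (illustrative values).
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d \
  -d '*'
```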

All in all, the Certbot process wasn’t too bad. I have a certificate and private key usable in a few contexts, and Certbot is responsible for renewing them every 90 days.

I would have liked to get mod_md working. However, I was also a bit lazy and relied on my distro’s packaged Apache and my familiarity with the more conventional Apache config directives. It would be good to find out if the module supports what I ended up doing.
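For reference, my reading of the docs suggests a minimal mod_md setup looks roughly like this (a hedged sketch; the domain is illustrative and I haven’t verified this against a real config):

```apache
# Hedged sketch of a minimal mod_md configuration (Apache 2.4.30+).
# mod_md obtains and renews the certificate; mod_ssl picks it up.
MDomain
MDCertificateAgreement accepted

<VirtualHost *:443>
  ServerName
  SSLEngine on
</VirtualHost>
```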

Was it worth it?

At the most superficial level, probably not. I was able to fix broken images, but they are pretty pointless anyway.

At a more fundamental level, I’d say the migration was worth it. I have the experimental platform and fine-grained control that I wanted, while at the same time providing more robust end-to-end security. Furthermore, the VPS is more performant and has more flexibility to manage scaling. When combined with the capabilities of a CDN, I think this blog has the potential to be a lot more web-performance happy. However, WebPageTest marks me down in a few areas, so there is still work to do…

SpotifyStatusApplet v1.3 Beta 1

FYI: SpotifyStatusApplet was broken by a Spotify API change made in Q3 2018. This page is provided for archive purposes and the download has been removed. More information is available on the Project Page.

A small update to SpotifyStatusApplet has been released as a beta.

This version fixes a critical issue with newer versions of Spotify that prevent the applet from working. Thanks JeffreyO.

It also adds the (much requested) feature of playback control using the remaining soft keys: 2 – previous, 3 – play/pause toggle, 4 – next. Thanks JeffreyO.

More information is available on the Project Page.

SpotifyStatusApplet v1.2 Beta

FYI: SpotifyStatusApplet was broken by a Spotify API change made in Q3 2018. This page is provided for archive purposes and the download has been removed. More information is available on the Project Page.

A small update to SpotifyStatusApplet has been released as a beta. This version adds the ability to toggle on/off the Field titles (Track, Album and Artist) by pressing “soft key 1”, the first key underneath the LCD on most models.

More information is available on the Project Page.

Ode to IBM Rational Rhapsody

Ode to Rhapsody

to the tune of “Comme d’habitude” / “My Way”

Source: Wikipedia

And now,  the end is here
And so I face, the final codegen
My friend, I’ll say it clear
I state my Use Case, I’ll draw a Lifeline
I’ve declared a class that’s pure
I created each and every dependency
And more, much more than this, I did it in Rhapsody

Branches, I’ve merged a few
But then again, too few to mention
I did what I had to do and saw it through without testing
I planned each charted course, each careful step along the activity
And more, much more than this, I did it in Rhapsody

Yes there were times, I’m sure you’ll mention
When I lost my rag, adding functions
But through it all, there is still doubt
Should it be In, InOut or Out
I chose them all, I felt small, I did it in Rhapsody

I’ve waited, I’ve waited and cried
I’ve had my fill, my share of generating
And now, as tears subside, I find it all so amusing
To think, I did all that
And may I say, not in an efficient way
Oh no, oh no not me, I did it in Rhapsody

For what is a shared pointer, what has it got
If not newed, then it should be nought
To be the thing it truly feels, must be cast dynamically
The model grows, the model slows, I did it in Rhapsody

Yes, it was Rhapsody

P.S. The etymology of Rhapsody leads us back to Greece; rhaptein ‘to stitch’ + ōidē ‘song, ode’.

Antimatter Poster

During university I produced a poster on the topic of Antimatter as part of a Communicating Science module.

I put this up on the web in 2007 and, years on, I am seeing traffic to the site driven by a link contained in a Rutgers assignment concerning Information Design. Unfortunately, across the years the poster image had become unavailable… until now. While the poster was discoverable via a Google search (finding it hosted without permission on other sites, another story), in order to help out those eager students I will host it here permanently.