How pragmaticweb.dev Was Made

I have never hosted or authored a site on the internet myself, so writing and posting is still very new to me. But I do have some experience with the technical aspects of it and wanted to share a bit about how I made this site, why I chose the tools I did, and how I set it all up. I also plan to go into more depth about certain features in future posts, which I hopefully won’t forget to link here.

Overview

I am a big fan of static site generators because of their simplicity and performance benefits compared to a full-fledged CMS. The one I have been following for the past few years is Eleventy, mainly because of its flexibility and openness. There is a lot of pick-and-choose going on when developing with Eleventy, which makes it easy to integrate into a wide variety of projects, even existing ones. That is why I built this site with Eleventy too, and in the process I discovered many more of Eleventy’s capabilities and features, most of which positively surprised me.

The accompanying tech stack is kept very simple: PostCSS build steps for basic tasks like autoprefixing and minification, and vanilla JavaScript for the few purposes I need it for. Many of the things I would classically need JS for could be solved through Eleventy’s configuration instead. The prime example is syntax highlighting, which is generated during the build and therefore baked directly into the HTML instead of having to be initialized via JS on the client’s end.
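To illustrate, Eleventy’s official syntax highlighting plugin needs only a few lines in the Eleventy config to do its work at build time. A minimal sketch with default options (I’m not claiming this is my exact setup):

// .eleventy.js: minimal sketch using the official syntax highlighting plugin,
// which turns fenced code blocks into highlighted markup at build time (via PrismJS).
const syntaxHighlight = require("@11ty/eleventy-plugin-syntaxhighlight");

module.exports = function (eleventyConfig) {
  eleventyConfig.addPlugin(syntaxHighlight);
};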

In addition, I tried my best to implement some of the more technical features of web hosting and authoring in a relatively simple way: automatically generated responsive images, asset cache busting, an automatic deployment pipeline straight from my Bitbucket repository, a site search via Pagefind, and a few more. I want to go into more detail on each of them, primarily to have it documented for myself, but also for others who might want to implement these features on their own site without fiddling with them too much.

Build Steps

I was never really convinced by complex and elaborate build tools, if I’m being completely honest. Things like webpack or gulp, while they can definitely be used in a simple, toned-down way, always felt like overkill for most website use cases in my opinion. So I went the route of npm scripts using the CLI tools that most npm packages already come with. As a result, my “build steps” are one or two commands per asset type (HTML, CSS, JS), one more command to execute those in parallel, and a few auxiliary commands, e.g. for clean-up or cache busting. In my opinion this is a very maintainable and readable build setup that does its job perfectly well for websites, while also being easily extensible for specific scenarios. My npm scripts at the time of first publishing this site look like this:

"scripts": {
    "build": "cross-env NODE_ENV=production npm-run-all --parallel build:*",
    "build:css": "postcss ./src/css/style.css --output ./_site/assets/css/style.696c25ff17c0ed96551a9665c9531962.css --map",
    "build:js": "esbuild ./src/js/script.js --outfile=./_site/assets/js/script.f585e6edda046a4f764929dee6de084c.js --bundle --sourcemap --minify",
    "build:templates": "eleventy --quiet",
    "cache-busting": "cross-env NODE_ENV=production node ./cache-busting.js",
    "clean": "shx rm -rf ./_site",
    "dev": "cross-env NODE_ENV=development npm-run-all --parallel dev:*",
    "dev:css": "postcss ./src/css/style.css --output ./_site/assets/css/style.696c25ff17c0ed96551a9665c9531962.css --map --watch",
    "dev:js": "esbuild ./src/js/script.js --outfile=./_site/assets/js/script.f585e6edda046a4f764929dee6de084c.js --bundle --sourcemap --watch",
    "dev:lint": "onchange -i ./src/js/* -- eslint ./src/js/**",
    "dev:templates": "eleventy --incremental --serve",
    "prebuild": "npm run clean",
    "postbuild": "npx pagefind && npm run cache-busting",
    "predev": "npm run clean && npm run build:templates && npx pagefind"
  }

The configuration for tools like PostCSS or ESLint is done in their respective config files (e.g. postcss.config.js or .eslintrc.js), which keeps the scripts section clean and readable.
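As an example, a postcss.config.js covering the autoprefixing and minification mentioned earlier could look like this. It is a sketch assuming autoprefixer and cssnano, not necessarily my exact plugin list:

// postcss.config.js: a minimal sketch, assuming autoprefixer and cssnano.
module.exports = {
  plugins: [
    require("autoprefixer"),
    // Only minify in production; NODE_ENV is set via cross-env in the scripts above.
    ...(process.env.NODE_ENV === "production" ? [require("cssnano")] : []),
  ],
};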

I plan to write in detail about my build setup, the packages I used and the development and authoring experience I went for in a future post, which I will link here once it’s available.

Features

I would like to highlight the features of this site that I find most useful, as each of them streamlines a process that is repeated often when hosting or authoring a website.

Automatic Image Generation

Automatically creating images in different sizes and formats for different viewport sizes, and having the HTML markup for them generated at build time, is very convenient and a topic many others have written about already. Fortunately for me, Eleventy already offers an image generation plugin that does the hard work for you, and it’s configurable to suit almost any kind of setup. I started with the plugin and this great post by Aleksandr Hovhannisyan, essentially following the instructions with a few modifications of my own. I can now easily output responsive images with a shortcode in my Markdown files, like this one:

[Image: An abstract artwork of blue and purple effects with a neon glow.]

Shortcode used for the above image:

{% image src="186336.jpg", alt="An abstract artwork of blue and purple effects with a neon glow." %}
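For context, here is a sketch of how such a shortcode can be wired up with the plugin. The widths, formats, and paths are illustrative rather than my actual configuration, and note that Nunjucks passes the keyword arguments as a single object (Eleventy 2.0’s addShortcode accepts async functions; older versions need addNunjucksAsyncShortcode):

// .eleventy.js: a sketch of an "image" shortcode built on @11ty/eleventy-img.
// Nunjucks collects keyword arguments ({% image src="...", alt="..." %}) into one object.
const Image = require("@11ty/eleventy-img");

module.exports = function (eleventyConfig) {
  eleventyConfig.addShortcode("image", async function ({ src, alt }) {
    const metadata = await Image(`./src/images/${src}`, {
      widths: [400, 800, 1280],          // illustrative breakpoints
      formats: ["avif", "webp", "jpeg"], // modern formats plus a JPEG fallback
      outputDir: "./_site/assets/images/",
      urlPath: "/assets/images/",
    });
    // generateHTML produces the full <picture> markup, baked into the HTML at build time.
    return Image.generateHTML(metadata, {
      alt,
      sizes: "100vw",
      loading: "lazy",
      decoding: "async",
    });
  });
};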

Here too I plan to write in more detail about my specific setup and the modifications I made.

Cache Busting

When deploying and especially when updating a website, it is important to properly “mark” assets in a way that tells browsers whether they need to download a new version of an asset or can keep the cached one. (One step before that, the server has to be configured so that assets can be cached in the first place.)

Most CMSs will do this automatically by appending a query string to the asset’s file name. Build tools like webpack or frameworks like Vue.js will also do this automatically, either with a query string as well or by hashing the file name based on the file’s content, i.e. the hash changes whenever the content changes, triggering the browser to download the new file from the server.

For my SSG setup I wanted a simple way (kind of the theme of this site, huh) to integrate cache busting into my build process, so that whenever I deploy an updated version, the files that changed get something like that hash injected into their file names. Everything had to be automated, and I definitely did not want to deal with constantly changing file names in my actual templates.

I came across Eleventy’s event hooks and was convinced they were the way to go. With an eleventy.after event hook, I could piggyback onto whatever Eleventy was doing to files and do my cache-busting thing afterwards. And while that would have been the perfect solution in most cases, it didn’t work for me because of my specific build setup. Since I use Pagefind for my search, and Pagefind runs after Eleventy is done (it needs the actual content to build its search index), I have to wait until Pagefind is finished before running my cache busting steps. Otherwise, none of Pagefind’s assets and references to them would be included in the cache busting process, which might lead to issues when they are updated.
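For reference, the hook variant would have looked roughly like this. It is a sketch, not code I actually run; cacheBust is a hypothetical stand-in for the busting logic, and the destructured dir argument requires Eleventy 2.0 or newer:

// .eleventy.js: the event-hook variant I decided against (sketch).
// Eleventy 2.0 passes an argument object to event callbacks.
module.exports = function (eleventyConfig) {
  eleventyConfig.on("eleventy.after", async ({ dir }) => {
    // dir.output is the configured output folder ("_site" here), but at this
    // point Pagefind has not run yet, so its assets would be missed.
    // await cacheBust(dir.output); // hypothetical call into the busting logic
  });
};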

With that in mind, I could still implement it largely the same way I would have within an Eleventy event hook, but as an independent build step that I trigger in my npm scripts. That way it runs as the very last step, after Eleventy and Pagefind have finished.

The cache busting script itself goes through all assets (only CSS and JS for now, but I might add images in the future) and generates an md5 hash from each file’s content. That hash is injected into the file’s name in the pattern <file name>.<hash>.<extension>. Finally, all references to those assets in the content files (HTML, XML, JSON, etc.) are updated to include the appropriate hash too.
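A condensed sketch of that logic, using only Node built-ins (my actual cache-busting.js differs in the details, e.g. in how it finds the files):

// A condensed sketch of the hashing and renaming logic (not the full script).
const crypto = require("crypto");
const fs = require("fs");
const path = require("path");

// md5 over the file's content: same content, same hash, and vice versa.
function hashFile(filePath) {
  return crypto.createHash("md5").update(fs.readFileSync(filePath)).digest("hex");
}

// Rename ./style.css to ./style.<hash>.css and return both names.
function bustFile(filePath) {
  const { dir, name, ext } = path.parse(filePath);
  const hashedName = `${name}.${hashFile(filePath)}${ext}`;
  fs.renameSync(filePath, path.join(dir, hashedName));
  return [path.basename(filePath), hashedName];
}

// Rewrite references in a content file (HTML, XML, JSON, ...) to the hashed names.
function updateReferences(contentFile, renames) {
  let content = fs.readFileSync(contentFile, "utf8");
  for (const [oldName, newName] of renames) {
    content = content.split(oldName).join(newName);
  }
  fs.writeFileSync(contentFile, content);
}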

I will share the setup and script in more detail in a future post, which I will link here once it’s available.

Automatic Deployment

Whenever I write a new post, update the CSS a bit, add some JS functionality or anything of the kind, I would like to avoid having to manually upload all the files to my server for the site to update. I know that Netlify and other integrated CI/CD tools have this functionality out of the box, but I thought it shouldn’t be that difficult to implement for my own site in my own repository.

Enter: Bitbucket Pipelines.

By setting up a pipeline in my repository and dedicating a git branch to deployment, I could automate deployment whenever I push to that branch. With the site being relatively lightweight, and because I don’t see myself posting daily or even every few days, I am miles away from exhausting any free limits in terms of number of builds or build minutes.

Bitbucket (or Atlassian in this case) also offers pre-made pipes for specific tasks, among them one for using rsync. This means I could execute all my build steps as needed within the pipeline, pass the output on to the rsync pipe as a build artifact, and move it over to the correct path on my server. So far the tests have gone well, and within a minute or two after I push to my production branch, the website shows the new content.

All that’s needed for it to work is a bitbucket-pipelines.yml file in your repository with a configuration similar to this:

image: node:16

pipelines:
  branches:
    production:
      - step:
          name: 'Build Site and Search Index'
          caches:
            - node
          script:
            - npm install
            - npm run build
          artifacts:
            - _site/**
      - step:
          name: 'Push Site to Server'
          caches:
            - docker
          services:
            - docker
          script:
            - pipe: atlassian/rsync-deploy:0.12.0
              variables:
                USER: $RSYNC_USER
                SERVER: $RSYNC_SERVER
                REMOTE_PATH: $RSYNC_REMOTE_PATH
                LOCAL_PATH: $RSYNC_LOCAL_PATH
                SSH_PORT: $RSYNC_SSH_PORT

You also need to set the rsync variables like username, port, and remote path, which you can either do in the configuration file itself or, more securely, save as repository variables in your Bitbucket repository settings. In those repository settings you can also store the SSH key that is used to authenticate the rsync pipe against the server you want to copy the files to.

The artifacts option declares files from one build step that are used in subsequent steps. Since I want to deploy the site built in the first step, I set it to the output path of my Eleventy configuration. The caches option lets you define specific parts of your pipeline to be cached for future builds. In my case, I wanted to avoid redownloading the Docker images for Node and rsync on every build, so I listed them here. You can also define custom caches, but for me the pre-defined cache options were sufficient.
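For completeness, defining a custom cache happens in the same file; the name and path here are purely illustrative:

# bitbucket-pipelines.yml: a custom cache definition (illustrative name and path)
definitions:
  caches:
    generated-images: .cache/images

It can then be referenced by name in a step’s caches list, just like node or docker above.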

What’s More

I’m still working on this post and will add more topics here in the future.
