One of the main goals I have for this blog is to make it as frictionless as possible to write. In an ideal world, I would be able to open my text editor (I use nvim and Obsidian), write some content, and then publish with a single command.

I looked at solutions like Ghost and WordPress, but they are both, well, too much. Or rather, they are designed, as per Ghost’s tagline, for “the creator economy”. That is, they include analytics, membership, revenue charts, etc. etc. I want none of that.

I just want a static page with a bunch of html. Zero bells and whistles.

So the next option was to either build something with a static site generator like Pelican (which is what I’m using now!) or host the blog on a service like Bear Blog or Writefreely. The latter two are pretty cool, too! Bear Blog is the essence of simplicity (and part of the inspiration for the design of this blog), while Writefreely’s integration with the Fediverse is awesome. Unfortunately, their designs are pretty set in stone, hosting images is more difficult, and to publish you have to paste your text into a browser.

For context, the way this blog was deployed before was through Cloudflare Pages. The html and css files were generated on my computer, and then I had to manually upload a folder with all the html to the project page. Far too tedious, in my opinion.

What I’ve done instead is to keep generating the site with Pelican, but now I host it on a VM on my server with nginx. That instance sits behind another nginx instance that acts as a reverse proxy for requests coming in through Cloudflare. Here is how I did it:

Configuring NGINX

Config file

To serve the files I set up a VM with Debian 11 as the OS (in the future I will move this into a Docker container) and a single user in addition to root. I also installed nginx from the Debian repository by running sudo apt install nginx.
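For completeness, that install is just the stock Debian package, plus a couple of optional sanity checks:

sudo apt update
sudo apt install nginx
nginx -v                  # confirm which version the Debian repo pulled in
systemctl status nginx    # check that the service is up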

Then I created a configuration file for my site:

cd /etc/nginx/sites-available/
sudo nvim terminus

In the terminus file, I included a very simple config:

server {
        # HTTP only: TLS and certificates are handled by the reverse proxy in front
        listen 80;

        # Where the Pelican output lives on the VM
        root /home/{MY_USER}/terminus/;
        server_name blog.terminus.earth;

        location / {
                # Serve the exact file if it exists, then try it as a directory,
                # otherwise return a 404
                try_files $uri $uri/ =404;
        }

        location /theme {
                # Added later so the CSS and other assets under theme/ are served
                # (see the debugging story below)
        }
}

What this does is create a server that listens on port 80. That is, it is only an HTTP server (the HTTPS and certificates are all handled by my reverse proxy).

The root directive defines where the root of my website is located: in this case, /home/{MY_USER}/terminus/. The location / block tells the server to first try the exact file matching the request, then a directory with that name, and otherwise return a 404 error.
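To make that concrete, here is a small shell sketch of the lookup order try_files goes through; the request path is a made-up example, and nginx of course does this internally rather than via the shell:

URI=/posts/hello.html
ROOT=/home/{MY_USER}/terminus
if [ -f "$ROOT$URI" ]; then
        echo "serve $ROOT$URI"                    # $uri: the exact file exists
elif [ -d "$ROOT$URI" ]; then
        echo "serve the index file of $ROOT$URI"  # $uri/: a directory; nginx looks for
                                                  # index.html inside it (and refuses the
                                                  # request if there is none and listings are off)
else
        echo "return 404"                         # =404: nothing matched
fi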

When I first deployed this configuration, however, the site would not load any of my CSS files (which live at ..terminus/theme/styles.css). I was very confused, too, because all I saw in my browser was a 503 timeout error. It turns out that even with $uri/ set up as part of the location, the theme/ subfolder was inaccessible.

I realized what the issue was once I read the NGINX logs with sudo cat /var/log/nginx/error.log:

[error] 1983#1983: *19 directory index of
"/home/{MY_USER}/terminus/theme/" is forbidden, client: 192.168.0.150,
server: 192.168.0.30, request: "GET /theme/ HTTP/1.1", host: 
"blog.terminus.earth"

It turns out that while I had told NGINX to look into the directory, clients were not being allowed in (confusing, too, since this is a forbidden error, not a timeout!). The solution was to add the second location block to my configuration file to explicitly allow clients access to the endpoints under theme/.
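A handy alternative to cat-ing the log after the fact is to tail both logs live while reproducing the request; these are the stock Debian paths:

sudo tail -f /var/log/nginx/error.log /var/log/nginx/access.log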

Enable the site

The configuration file lives in /etc/nginx/sites-available/, but NGINX only serves the sites that are linked into /etc/nginx/sites-enabled/. So I ran the following commands:

sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/terminus /etc/nginx/sites-enabled/terminus

This removes the default site from NGINX and creates a symlink so that the enabled config is always the exact same file as the one in sites-available.
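Before starting the server, it is also worth checking that the configuration actually parses; nginx -t validates the syntax and points at the offending line if it does not:

sudo nginx -t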

Then, to start the server and make sure it comes back up after a reboot, I ran:

sudo systemctl start nginx
sudo systemctl enable nginx
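At this point the backend should answer plain HTTP from inside the network, before the reverse proxy or Cloudflare get involved at all. One quick way to check is to ask the VM directly ({VM_IP} is a placeholder for the VM’s LAN address):

curl -sI -H "Host: blog.terminus.earth" http://{VM_IP}/ | head -n 1
# expect something like: HTTP/1.1 200 OK

And for later config changes, sudo systemctl reload nginx picks them up without restarting the whole service.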

Accessing site data

I upload my html files to my server using rsync. This works well, except for one problem: the files themselves are owned by my user, but the user that reads them on the server is www-data. With no further changes, NGINX was unable to read my files and was therefore serving a broken-looking site in my browser.

My immediate thought was, “ah! Ok so I can just chown -R the entire directory and change ownership to www-data”. And this worked, until I tried to upload more files.
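For reference, that quick fix was just a recursive ownership change over the web root (path as above):

sudo chown -R www-data:www-data /home/{MY_USER}/terminus/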

Obviously I didn’t want to change the ownership of those files every time I wrote something. The solution I came up with was to make www-data my user’s primary group, so anything I upload is group-owned by www-data, and to make sure the existing files are readable:

sudo usermod -g www-data {MY_USER}
sudo chmod 644 terminus/*
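To confirm that the web server can actually read the files now, you can do a one-off read as www-data and double-check the group change (the index.html path is just an example from my layout):

sudo -u www-data head -n 1 /home/{MY_USER}/terminus/index.html
id {MY_USER}    # the gid should now show www-data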

Once this was in place, I was able to immediately write this post, rsync it to my server, and have it available within seconds.
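For the curious, the “single command” now amounts to something like the script below. The output directory name and the destination are specific to my setup ({MY_USER} and {VM_IP} are placeholders), so treat it as a sketch rather than a drop-in:

#!/usr/bin/env bash
# Build the site with Pelican and push the result to the VM that nginx serves from
set -euo pipefail

pelican content -o output -s pelicanconf.py
rsync -avz output/ {MY_USER}@{VM_IP}:/home/{MY_USER}/terminus/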