Moving from GitHub Pages to self-hosted

Initially, this blog was hosted with GitHub Pages.
It’s a great way to serve static content via Jekyll, but it offers no traffic analytics unless you add a third-party solution yourself, which I didn’t want to do.
So I decided it’s time to move the blog to its own little space in the cloud.
Advantage of self-hosting
The most significant benefit for me is having rudimentary traffic reports based on server logs without needing to include anything like Google Analytics. So I don’t even need a cookie notice, because the site won’t use any cookies!
Any externally hosted resources (like Google Fonts) were also removed from the blog.
Sure, I lose some information about my users and can’t use a flashy UI like Google Analytics. But I strongly believe the privacy gains for my readers are worth it.
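For example, once the server is up, a rough requests-per-day report is a one-liner over Nginx’s access log (a sketch, assuming the default combined log format and a file called access.log):
awk '{print $4}' access.log | cut -d: -f1 | sort | uniq -c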
How did I do it?
Where to host
GitHub Pages is free, and it’s hard to beat free.
But free often comes with some trade-offs.
If you’re willing to go with “almost free”, as in a cup of coffee per month, there are plenty of options for low-traffic sites:
- Hetzner Cloud (~2.50 €)
- Linode (~$5)
- DigitalOcean (~$5)
- AWS Lightsail (~$3.50)
At my company we mostly use Hetzner for our server needs, so I chose the smallest Hetzner Cloud server available at the time, the CX11.
With 1 vCPU, 2 GB RAM, 20 GB SSD, and 20 TB of traffic, it’s more than capable of hosting my small, static-content blog.
Docker
Containers can make things easier, so first, we need Docker:
apt-get remove -y docker docker-engine docker.io
apt-get install -y --no-install-recommends \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce
usermod -aG docker $(id -u -n)
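To verify the installation, two quick checks (generic Docker commands, nothing specific to this setup):
docker --version
docker run --rm hello-world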
Dedicated user
Since we don’t want to run everything as root, we need an additional user that is allowed to use docker and sudo:
adduser site
usermod -aG docker,sudo site
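The compose file and the docker-data directories in the following steps live in this user’s home directory, so switch over (or log in via SSH as site) before continuing:
su - site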
Docker Compose
To handle our two containers at the same time we use [docker-compose](https://docs.docker.com/compose/):
curl -L https://github.com/docker/compose/releases/download/1.24.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
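A quick version check confirms the binary is in place:
docker-compose --version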
Create a file docker-compose.yaml with the following content:
version: '3'
services:
  nginx:
    image: nginx:1.17.3
    container_name: nginx-site
    ports:
      - "0.0.0.0:80:80"
      - "0.0.0.0:443:443"
    volumes:
      - ~/docker-data/nginx/conf.d/:/etc/nginx/conf.d:ro
      - ~/docker-data/nginx/logs:/var/log/nginx
      - ~/docker-data/nginx/ssl:/ssl:ro
      - ~/docker-data/nginx/site:/site:ro
      - ~/docker-data/certbot/conf:/etc/letsencrypt
      - ~/docker-data/certbot/www:/var/www/certbot
    restart: unless-stopped

  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ~/docker-data/certbot/conf:/etc/letsencrypt
      - ~/docker-data/certbot/www:/var/www/certbot
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
    restart: unless-stopped
What is happening here?
We defined two services: an Nginx web server and Certbot for creating/renewing our SSL certificate.
The mapped volumes should be created before starting the containers, otherwise the Docker daemon will create them as root, which might lead to permission errors later on.
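As the site user, that boils down to a couple of mkdir calls matching the volumes above:
mkdir -p ~/docker-data/nginx/conf.d ~/docker-data/nginx/logs ~/docker-data/nginx/ssl ~/docker-data/nginx/site
mkdir -p ~/docker-data/certbot/conf ~/docker-data/certbot/www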
Nginx
Create ~/docker-data/nginx/conf.d/site.conf and replace <your domain>:
# Enabling gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;

# Redirect http to https, also certbot
server {
    listen [::]:80;
    listen 80;

    server_name <your domain> www.<your domain>;

    location ^~ /.well-known/acme-challenge/ {
        allow all;
        root /var/www/certbot;
        try_files $uri =404;
        break;
    }

    location / {
        return 301 https://<your domain>$request_uri;
    }
}
We configured port 80 to serve two locations:
- /.well-known/acme-challenge/: needed for Certbot’s HTTP challenge
- /: redirects everything else to HTTPS
The actual HTTPS server isn’t configured yet, because Nginx won’t start without the SSL certificate, but we need Nginx up and running to request it.
Start the containers:
docker-compose up -d
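To confirm both containers came up:
docker-compose ps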
SSL
To request an SSL certificate, we need to run certbot:
docker exec -it certbot certbot certonly --renew-by-default
Use the following answers:
- webroot (2)
- your domain names (without and with www)
- /var/www/certbot
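If you prefer to skip the interactive prompts, the same answers can also be passed as flags (a sketch; adapt the domain and e-mail placeholders to your setup):
docker exec -it certbot certbot certonly --webroot -w /var/www/certbot \
    -d <your domain> -d www.<your domain> \
    --email <your email> --agree-tos --no-eff-email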
Now you have an SSL certificate!
Add the following server blocks to your Nginx config file and replace <your domain>:
# Redirect https www to non-www
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name www.<your domain>;

    ssl_certificate /etc/letsencrypt/live/<your domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your domain>/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

    location / {
        return 301 https://<your domain>$request_uri;
    }
}

# actual https site
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    server_name <your domain>;

    ssl_certificate /etc/letsencrypt/live/<your domain>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/<your domain>/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

    location / {
        index index.html;
        root /site;
    }
}
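Before restarting, you can let Nginx validate the new configuration from inside the running container:
docker exec nginx-site nginx -t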
Restart the containers:
docker-compose down && docker-compose up -d
Auto-deploy on git push
One of the advantages of GitHub Pages is the auto-deployment as soon as you push something to the repository.
Let’s recreate this for our server!
Makefile
To simplify the different steps I’ve created a Makefile, which incidentally requires you to have make installed:
JEKYLL := 3.8.6
DEPLOY_DIR := ../docker-data/nginx/site

.PHONY: clean
clean:
	rm -rf _site
	rm -rf .jekyll-cache

.PHONY: build
build:
	docker run --rm \
		-v "${PWD}:/srv/jekyll" \
		-v "${PWD}/vendor/bundle:/usr/local/bundle" \
		jekyll/jekyll:${JEKYLL} \
		jekyll build

.PHONY: serve
serve: clean _serve

.PHONY: _serve
_serve:
	docker run --rm \
		-p "4000:4000" \
		-v "${PWD}:/srv/jekyll" \
		-v "${PWD}/vendor/bundle:/usr/local/bundle" \
		jekyll/jekyll:${JEKYLL} \
		jekyll serve

.PHONY: copy
copy:
	rsync -avP ./_site/* ${DEPLOY_DIR}

.PHONY: deploy
deploy: clean build copy
Make sure to use tabs for indentation, not spaces.
It’s just a wrapper around jekyll with an added rsync to the correct location.
A simple make deploy is now all we need to build and deploy the website.
Git hook
To avoid calling git pull -r and make deploy manually on the server every time something changes, we first need to allow the repository to receive pushes even though it’s not a bare repository:
git config receive.denyCurrentBranch updateInstead
This way, the repository will update the working directory on receiving a push instead of denying it.
A post-receive hook will then trigger make deploy, so create <repo path>/.git/hooks/post-receive with a small script inside:
#!/bin/sh
# rebuild and deploy the site from the updated working tree
make -C "$GIT_DIR/.." deploy
# reload Nginx so it picks up the new files
docker exec nginx-site nginx -s reload
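Git only runs hooks that are executable, so don’t forget:
chmod +x <repo path>/.git/hooks/post-receive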
Now every time you push to the repository (e.g. as an additional remote) the site will be automatically generated and deployed.
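For example, on your local machine (assuming SSH access as the site user and master as your branch):
git remote add deploy site@<your server>:<repo path>
git push deploy master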
Log rotation
Without Google Analytics, the server logs are all we have to analyze, so let’s make sure they are rotated:
apt install logrotate
Create /etc/logrotate.d/nginx-site with the following content:
/home/site/docker-data/nginx/logs/*.log {
    monthly
    missingok
    rotate 12
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        docker inspect -f '{{ .State.Pid }}' nginx-site | xargs kill -USR1
    endscript
}
logrotate scripts are run automatically by cron, and with this config our logs will be rotated monthly and 12 rotations will be kept. You could also choose weekly or daily to increase granularity.
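To check the new config without waiting a month, logrotate can do a dry run in debug mode:
logrotate -d /etc/logrotate.d/nginx-site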
That’s it… enjoy your self-hosted blog!