Deploying a static site to Linode Object Storage w/ auto-renewing SSL
Do you have a website built with a static site generator, hosted somewhere else, that you want to migrate to Linode Object Storage with a Let’s Encrypt SSL certificate? You’ve come to the right place.
Overview
- Change DNS Entries
- Create an Object Storage Access Key
- Get a Linode API Token
- Automate it
Change DNS Entries
You need to modify the DNS entry for your domain name: add a CNAME record at @ that points to blog.lexi.sh.website-us-east-1.linodeobjects.com. Replace blog.lexi.sh with your full domain name, and us-east-1 with your region if you are in another region.
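Once the record has propagated, you can confirm it resolves to the bucket's website endpoint (substitute your own domain and region; some DNS providers flatten apex CNAMEs, in which case you may see A records instead):

dig +short blog.lexi.sh CNAME
# expected answer: blog.lexi.sh.website-us-east-1.linodeobjects.com.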
Create an Object Storage Access Key
https://cloud.linode.com/object-storage/access-keys
Create an access key for your bucket. Ideally, this key should be as limited in scope as possible: it only needs read/write access to Object Storage, scoped to your bucket. Make note of your access key and secret key for later.
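If you want to confirm the key works before wiring it into automation, a quick s3cmd listing is enough; the placeholder keys and the us-east-1 region below are stand-ins for your own values:

s3cmd ls \
  --access_key='YOUR_ACCESS_KEY' \
  --secret_key='YOUR_SECRET_KEY' \
  --host=us-east-1.linodeobjects.com \
  --host-bucket='%(bucket)s.us-east-1.linodeobjects.com'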
Get a Linode API Token
https://cloud.linode.com/profile/tokens
You need to use the Linode CLI or API to actually set the SSL certificates for your Object Storage bucket. Make note of the token.
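The workflow below hands this token to the Linode CLI through the LINODE_CLI_TOKEN environment variable. You can sanity-check a token locally the same way, assuming linode-cli is installed:

export LINODE_CLI_TOKEN='your-api-token-here'
linode-cli regions list   # any successful API call confirms the token works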
Automate It
Now we’re ready to actually run some things. I will show my automation using GitHub Actions, but I will annotate it so that you can use any deployment platform you want, or even do it manually.
I have four files, located at the following locations:
- .github/.s3cfg
- .github/authenticator.sh (this has executable permissions; see the note after this list)
- .github/cleanup.sh (this has executable permissions)
- .github/workflows/deploy.yml
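If you are creating the two hook scripts from scratch, mark them executable before committing; git stores the executable bit, so the runner will pick it up:

chmod +x .github/authenticator.sh .github/cleanup.sh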
.github/.s3cfg
[default]
access_key = XXXXXXXXXXXXXXXXXXXXX
bucket_location = US
host_base = us-east-1.linodeobjects.com
host_bucket = %(bucket)s.us-east-1.linodeobjects.com
secret_key = {{REPLACE_ME}}
website_endpoint = http://%(bucket)s.website-us-east-1.linodeobjects.com
website_error = 404.html
website_index = index.html
- Replace access_key with the Object Storage access key you received earlier.
- Replace us-east-1 in host_base, host_bucket, and website_endpoint with your region if it’s different.
- Do NOT touch secret_key if you’re doing this with automation, since we will be committing this file. If you’re doing this locally, set it to your secret key from earlier (you can then test the config as shown below).
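If you're working locally with the real secret filled in, you can point s3cmd at this config file directly to confirm it can reach your bucket; blog.lexi.sh here stands in for your own bucket name:

s3cmd -c .github/.s3cfg ls s3://blog.lexi.sh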
.github/authenticator.sh
This is a shell script that Certbot calls after it issues the challenge, so that we can prove we control the domain we claim to.
#!/bin/bash
echo $CERTBOT_VALIDATION > $CERTBOT_TOKEN
s3cmd put --no-mime-magic --acl-public $CERTBOT_TOKEN s3://$CERTBOT_DOMAIN/.well-known/acme-challenge/$CERTBOT_TOKEN
No changes should be necessary here. All variables are set by Certbot.
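If you want to exercise the hook by hand (assuming your ~/.s3cfg is already in place), you can set the same variables Certbot would and run it, then clean up with the second script. The values here are made up for a dry run:

export CERTBOT_DOMAIN=blog.lexi.sh          # your bucket/domain
export CERTBOT_TOKEN=test-token
export CERTBOT_VALIDATION=test-validation
./.github/authenticator.sh
./.github/cleanup.sh                        # removes the test object again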
.github/cleanup.sh
Similarly, this script gets called at the end of Certbot's processing. It cleans up the challenge file we uploaded above.
#!/bin/bash
s3cmd rm s3://$CERTBOT_DOMAIN/.well-known/acme-challenge/$CERTBOT_TOKEN
.github/workflows/deploy.yml
The meat and bones. I will paste the full file first, and then I will walk through each step so you understand what we’re doing.
name: 'Deploy Hugo to linode object storage'
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, reopened, synchronize]
  schedule:
    - cron: "0 0 * * 0" # Runs every Sunday at midnight UTC.
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      LINODE_CLI_TOKEN: ${{ secrets.LINODE_API_KEY }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.11.3
      - name: Set vars from branches
        uses: iamtheyammer/branch-env-vars@v1.2.1
        with:
          BUCKET_NAME: |
            main:blog.lexi.sh
            !default:dev.lexi.sh
          FLAGS: |
            main:-e production
            !default:-DF -e dev
      - id: install-hugo
        run: |
          HUGO_DOWNLOAD=hugo_extended_0.112.0_Linux-64bit.tar.gz
          wget https://github.com/gohugoio/hugo/releases/download/v0.112.0/${HUGO_DOWNLOAD}
          tar xvzf ${HUGO_DOWNLOAD} hugo
          mv hugo $HOME/hugo
        shell: bash
      - id: download-themes
        run: |
          git submodule init
          git submodule update
        shell: bash
      - id: install-pip-things
        run: |
          sudo apt install -y certbot
          pip install s3cmd
          sudo pip install s3cmd
          pip3 install linode-cli
          pip3 install boto
      - id: write-s3cmd
        run: |
          sed -i 's/{{REPLACE_ME}}/${{ secrets.LINODE_SECRET }}/g' .github/.s3cfg
          cp .github/.s3cfg $HOME/.s3cfg
          sudo cp .github/.s3cfg /root/.s3cfg
      - id: build
        run: |
          $HOME/hugo --theme hugo-clarity $FLAGS
        shell: bash
      - id: deploy-to-linode
        run: |
          s3cmd mb s3://$BUCKET_NAME
          s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://$BUCKET_NAME
          s3cmd --no-mime-magic --acl-public --delete-removed --delete-after sync public/ s3://$BUCKET_NAME
        shell: bash
      - id: linode-cert
        run: |
          sudo certbot certonly --agree-tos --email myemail@address.com --manual --manual-auth-hook .github/authenticator.sh --manual-cleanup-hook .github/cleanup.sh -d $BUCKET_NAME -n -v
          linode-cli object-storage ssl-delete \
            us-east-1 $BUCKET_NAME
          linode-cli object-storage ssl-upload \
            us-east-1 $BUCKET_NAME \
            --certificate "$(sudo cat /etc/letsencrypt/live/$BUCKET_NAME/fullchain.pem)" \
            --private_key "$(sudo cat /etc/letsencrypt/live/$BUCKET_NAME/privkey.pem)"
        shell: bash
        continue-on-error: true
Phew. Okay, let’s start:
name: 'Deploy Hugo to linode object storage'
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, reopened, synchronize]
  schedule:
    - cron: "0 0 * * 0" # Runs every Sunday at midnight UTC.
For GitHub Actions, this specifies that the workflow will run every time a commit lands on main, every time a PR to main gets opened or reopened, whenever a branch with an open PR to main gets a new commit pushed to it, and also every Sunday at midnight UTC so that the certificate doesn’t go stale even if we don’t commit anything for a while.
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      LINODE_CLI_TOKEN: ${{ secrets.LINODE_API_KEY }}
This runs the job on an Ubuntu-based GitHub-hosted runner. I have added my API token as a secret in GitHub and am referencing it here, so LINODE_CLI_TOKEN will be an environment variable for every step.
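For reference, if you use the GitHub CLI, the two secrets this workflow expects (the names match the ${{ secrets.* }} references in the file) can be added from a terminal; each command prompts for the value:

gh secret set LINODE_API_KEY    # the Linode API token
gh secret set LINODE_SECRET     # the Object Storage secret key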
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.11.3
Boilerplate things: check this branch out into the working directory, and then also get Python 3.11.3 installed.
      - name: Set vars from branches
        uses: iamtheyammer/branch-env-vars@v1.2.1
        with:
          BUCKET_NAME: |
            main:blog.lexi.sh
            !default:dev.lexi.sh
          FLAGS: |
            main:-e production
            !default:-DF -e dev
This sets two variables used later: $BUCKET_NAME and $FLAGS. They are set to different values depending on the branch being built: for example, BUCKET_NAME is blog.lexi.sh on main and dev.lexi.sh otherwise (which is my dev site with my drafts).
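If you’d rather not depend on that action, or you’re on another CI platform, a plain bash step can do the same thing. This is only a sketch, assuming GitHub’s GITHUB_REF_NAME and GITHUB_ENV conventions:

# Set BUCKET_NAME and FLAGS based on the branch, without the branch-env-vars action.
if [ "$GITHUB_REF_NAME" = "main" ]; then
  echo "BUCKET_NAME=blog.lexi.sh" >> "$GITHUB_ENV"
  echo "FLAGS=-e production" >> "$GITHUB_ENV"
else
  echo "BUCKET_NAME=dev.lexi.sh" >> "$GITHUB_ENV"
  echo "FLAGS=-DF -e dev" >> "$GITHUB_ENV"
fi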
      - id: install-hugo
        run: |
          HUGO_DOWNLOAD=hugo_extended_0.112.0_Linux-64bit.tar.gz
          wget https://github.com/gohugoio/hugo/releases/download/v0.112.0/${HUGO_DOWNLOAD}
          tar xvzf ${HUGO_DOWNLOAD} hugo
          mv hugo $HOME/hugo
        shell: bash
      - id: download-themes
        run: |
          git submodule init
          git submodule update
        shell: bash
This installs Hugo, and then runs git submodule init and git submodule update to make sure we have the correct version of whatever Hugo theme we’re using.
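A couple of optional sanity checks after this step, if you want the build logs to show exactly what you’re building with:

$HOME/hugo version      # confirm the expected Hugo release was downloaded
git submodule status    # confirm the theme submodule is checked out at the pinned commit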
      - id: install-pip-things
        run: |
          sudo apt install -y certbot
          pip install s3cmd
          sudo pip install s3cmd
          pip3 install linode-cli
          pip3 install boto
This installs some things we need: certbot, s3cmd, boto, and linode-cli. It also installs s3cmd for the root user, which we’ll need for the Certbot callbacks.
      - id: write-s3cmd
        run: |
          sed -i 's/{{REPLACE_ME}}/${{ secrets.LINODE_SECRET }}/g' .github/.s3cfg
          cp .github/.s3cfg $HOME/.s3cfg
          sudo cp .github/.s3cfg /root/.s3cfg
This replaces the {{REPLACE_ME}} in the committed .s3cfg file with the Object Storage secret key we got earlier; I set mine up as a secret in GitHub Actions. It then copies the completed file to both the home user’s .s3cfg and the root user’s. Again, the root copy is for the Certbot hooks, which run under sudo.
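The local (non-CI) equivalent is a one-liner; LINODE_SECRET here is just a stand-in for however you keep the secret on your own machine:

export LINODE_SECRET='your-object-storage-secret-key'
sed "s/{{REPLACE_ME}}/$LINODE_SECRET/g" .github/.s3cfg > "$HOME/.s3cfg"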
      - id: build
        run: |
          $HOME/hugo --theme hugo-clarity $FLAGS
        shell: bash
Builds the site with Hugo!
      - id: deploy-to-linode
        run: |
          s3cmd mb s3://$BUCKET_NAME
          s3cmd ws-create --ws-index=index.html --ws-error=404.html s3://$BUCKET_NAME
          s3cmd --no-mime-magic --acl-public --delete-removed --delete-after sync public/ s3://$BUCKET_NAME
        shell: bash
This does three things with s3cmd:
- Makes the bucket if it doesn’t exist.
- Creates the website metadata for the bucket, with index.html as the entry page and 404.html as the error page.
- Pushes the statically generated site (public/ in Hugo) up to the bucket with public-read ACLs on all the files. It also does a true sync, so it will remove anything in the bucket that does not exist locally. A quick way to confirm the upload is shown below.
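After a successful run, you can confirm the objects landed and that the website endpoint serves them, substituting your own bucket name and region:

s3cmd ls s3://blog.lexi.sh/
curl -I http://blog.lexi.sh.website-us-east-1.linodeobjects.com/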
      - id: linode-cert
        run: |
          sudo certbot certonly --agree-tos --email myemail@address.com --manual --manual-auth-hook .github/authenticator.sh --manual-cleanup-hook .github/cleanup.sh -d $BUCKET_NAME -n -v
          linode-cli object-storage ssl-delete \
            us-east-1 $BUCKET_NAME
          linode-cli object-storage ssl-upload \
            us-east-1 $BUCKET_NAME \
            --certificate "$(sudo cat /etc/letsencrypt/live/$BUCKET_NAME/fullchain.pem)" \
            --private_key "$(sudo cat /etc/letsencrypt/live/$BUCKET_NAME/privkey.pem)"
        shell: bash
        continue-on-error: true
This runs Certbot. It will:
- Register your email if you’ve never used it before.
- Create a challenge for your domain.
- Call authenticator.sh with a bunch of variables set, which writes the challenge to .well-known/acme-challenge/ in the bucket in the way Certbot needs.
- Generate the certificate and private key under /etc/letsencrypt/.
- Delete the challenge from .well-known by calling cleanup.sh.
And finally, it uses the linode-cli to delete the existing cert and upload the new one. Note that this is non-transactional, so both:
- If something goes wrong in the upload, you’ll be left with no cert, and
- There will be a small amount of time where you’ll be left without a cert on your site.
Yes, this sucks. I wish Linode would give us a single “replace” option, but this is what we have to do for now.
This step also doesn’t fail your build if it errors (continue-on-error: true), because Let’s Encrypt’s rate limits stop you from requesting more than 5 duplicate certificates a week for the same domain. TODO: Add a check here to see whether renewal is actually necessary and skip it if not, so that failures from the linode-cli get bubbled up correctly.
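One possible shape for that check, as a sketch only: before running Certbot, look at the certificate the bucket is currently serving and skip the whole step if it still has plenty of time left. The 21-day threshold is arbitrary, and openssl is assumed to be available on the runner.

# Exit early if the certificate currently served for $BUCKET_NAME is still valid
# for more than 21 days; otherwise fall through to the certbot/linode-cli commands.
if echo | openssl s_client -servername "$BUCKET_NAME" -connect "$BUCKET_NAME:443" 2>/dev/null \
    | openssl x509 -noout -checkend $((21 * 24 * 3600)); then
  echo "Certificate is not close to expiry; skipping renewal."
  exit 0
fi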
And that’s it! You should be able to see your site, with SSL, up at your domain. This site sure is!