
Let's Do Encrypt; And Why Not?

A project log for Dockerize All the Things

hard drive crash! recover, revive, and re-engineer server using docker contained services this time.

ziggurat29 11/16/2020 at 19:19

Summary

Here I modify the nginx configuration to also support TLS, and redirect unencrypted requests to our encrypted version.  I also modify the base nginx image to include the Let's Encrypt certbot tool to keep the certificates renewed automatically.

Deets

In this modern world, who does plaintext HTTP anymore except in special circumstances?  Also, since I can now get free certificates from Let's Encrypt, why would I not want to upgrade to TLS?

Let's Encrypt

Let's Encrypt is a non-profit set up to encourage the use of encryption on the Internet by providing free TLS certificates.  It uses an automated issuance process, and so has some limitations (you can't get OV or EV certs), but it is useful for the vast majority of use cases.

The automated issuance is done with a tool called 'certbot'.  It is a Python application that handles generating keys, supplying a 'challenge' to prove domain ownership, generating certificate signing requests, and retrieving issued certificates.  It can also handle renewal, since Let's Encrypt certificates are valid for only 90 days.

There are several modes of proving domain ownership; the two most common are 'HTTP' and 'DNS'.  In the HTTP challenge (elsewhere called 'webroot'), proof of ownership is established by the presence of well-known resources on a server that can be accessed via the DNS name being challenged.  In the DNS challenge, special TXT records are used to prove domain ownership.
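For concreteness, the HTTP challenge serves files from the '/.well-known/acme-challenge/' path under the web root.  If your server blocks do anything fancy with rewrites, a location block along these lines (a minimal sketch, assuming the same web root used later in this log) keeps the challenge files reachable:

#ensure Let's Encrypt can fetch HTTP challenge files despite any rewrites
location /.well-known/acme-challenge/ {
    root /srv/www/vhosts/example.com;
}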

The DNS challenge gives more options, because it is the only way to get a 'wildcard' certificate issued -- they won't issue one when using the webroot challenge.  The catch with that challenge, however, is that certbot needs to understand how to manipulate the DNS records.  It does this via 'plugins' that you can install that understand various APIs; however, the DNS providers I have (GoDaddy, Google Domains, no-ip.com) do not have such APIs available to certbot, and hence I will have to use the 'webroot' mechanism.  This isn't too bad, because even though I can't get a wildcard certificate, I can get a certificate with multiple server names, such as 'example.com', 'www.example.com', 'ftp.example.com', etc.
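For reference, the DNS challenge amounts to publishing a TXT record of this shape (the token value here is made up purely for illustration; certbot tells you the actual value to publish during the challenge):

;DNS challenge record; token value is hypothetical
_acme-challenge.example.com.    300    IN    TXT    "gfj9Xq...Rg85nM"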

Nginx

Setting up TLS on nginx is not particularly difficult; you specify another 'server' block, listen on 443 with the TLS protocol, and specify various TLS options.  There is one gotcha though:  it wants the certificates to exist upon startup.  And I haven't gotten any yet!  So, chickens and eggs.  As such, I'm going to have to get the certbot working first to 'prime the pump', and then reconfigure nginx for HTTPS afterwards.

Certbot

My original thought was to get certbot running in yet another container.  I still think this is a plausible idea, but there were two things that nagged me:

  1. when renewing the certificate, nginx needs to be told to 'reload' so that it will pick up the new cert.  It is not currently clear to me how to do that easily container-to-container (I have no doubt there are some clever ways, and maybe even orthodox ones, but I don't know of them yet).
  2. it vexes me to have a container running just to support a cron job that runs once a day.  Maybe I'm being miserly with system resources, but it seems a waste, and this Pi3 has just 1 GB of RAM.

So, I decided not to run certbot in a separate container, but rather to derive a new image from the curated 'nginx' image that also includes certbot, with a cron job to test for renewal each day and cause nginx to reload if needed.  If nothing else, it's practice building a derivative docker image -- the first one of our own!

Dockerfile

Creating new docker images involves specifying what's going to be inside and what code to run when it starts up.  As mentioned before, in the docker scenario things are usually run as conventional processes rather than daemons, because the daemonic aspect is handled by the container rather than by the application in the container.  The file that specifies all this stuff uses the well-known name 'Dockerfile' (case-sensitive!); you can override this default if you want, but why bother?

In the Dockerfile, you specify the base image, perhaps add some things like copying files into the image from the host or running tools to install needed packages, and then specify what is to be automatically run.  There are myriad options; the normative reference is https://docs.docker.com/engine/reference/builder/.  I will cover only what is relevant here.  Incidentally, for the images you find on Docker Hub, you can typically also see the Dockerfile that was used to create them.  I found this very helpful, and cut-and-pasted things from other images that struck my fancy.

I created a working directory hierarchy on my datadrive to develop these Dockerfiles; in this case, the one at /mnt/datadrive/srv/docker/www/Dockerfile:

FROM nginx:alpine

RUN set -x \
#get the certbot installed
    && apk add --no-cache certbot \
    && apk add --no-cache certbot-nginx \
#forward log to docker log collector
    && mkdir -p /var/log/letsencrypt \
    && ln -sf /dev/stdout /var/log/letsencrypt/letsencrypt.log \
#get the crontab entry added
    && { crontab -l | sed '/^$/d' ; \
        printf '0       1       *       *       *       certbot renew --quiet && nginx -s reload\n\n' ; } \
        | crontab -

#these were copied from the nginx:alpine Dockerfile; I think I need to do this but not really certain
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]

The FROM directive specifies nginx:alpine as the base image, and at the bottom are the startup directives that I simply copied from the original Dockerfile used to create 'nginx:alpine', since I still want it to run nginx.

The stuff in the middle is what installs certbot and sets up the cron job to do renewals.  This is done via a RUN directive that installs certbot, sets up logging in a way compatible with docker, and then manipulates the crontab.

One thing about RUN (and also COPY) is that each invocation creates a new 'layer' in the virtual filesystem of the image.  Consequently it is a common design pattern to concatenate commands with the '&&' shell operator into one giant RUN directive.  This way there is only one new layer created.  But feel free to use multiple -- sometimes I find that handy when first developing the file, then when it is working like I want, I come back and combine them into the concatenated form.
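As a trivial illustration of the difference (hypothetical packages, purely to show the shape of it):

#three RUN directives create three new layers...
RUN apk add --no-cache curl
RUN apk add --no-cache jq
RUN rm -rf /tmp/*

#...whereas the concatenated equivalent creates just one
RUN apk add --no-cache curl jq \
    && rm -rf /tmp/*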

Another common pattern is the use of '--no-cache' when doing the 'apk add' (relevant for Alpine Linux).  This avoids taking up filesystem space with cached packages that will never be used.

The use of 'set -x' causes the various RUN commands to be echoed back out when executed, and this is handy to see when building the image, but otherwise is not required.

The crontab part was a little tricky for me to get right.  I simply needed to append a fixed line of text to the crontab, but this was complicated for several reasons:

  1. crontab is picky about line endings
  2. crontab is better edited with the crontab tool, which will operate on the correct crontab for your platform
  3. when using crontab to update the underlying file, it replaces the entire file.  There isn't an 'append' option.

So, I used 'crontab -l' to spew out the current crontab, with sed removing stray blank lines.  A shell command group (between '{' and '}') causes that first bit to run to completion, followed by a simple 'printf' that spews the new crontab entry with two newlines.  All of that output gets piped into 'crontab -', which replaces the crontab with this new content.

The creation of the log path for certbot and the symlink for the logfile will cause the certbot log lines to be collected by docker.
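Once the container is up, those lines then show up in the usual place, e.g. (using the container name 'www' from the compose file later in this log):

#certbot's log lines appear alongside nginx's
docker logs www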

Once you've got the Dockerfile done, you build it with docker:

docker image build -t nginx-certbot .

 Ideally this will complete successfully.  Then you can see it with 'docker image ls' in your local registry:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx-certbot       latest              2d7ad5eed815        3 days ago          83.1MB
php                 fpm-alpine          31b8f6ccf74b        11 days ago         71.5MB
nginx               alpine              3acb9f62dd35        2 weeks ago         20.5MB
alpine              latest              2e77e061c27f        2 weeks ago         5.32MB

So, I will eventually update our /etc/docker/compose/myservices/docker-compose.yml file to refer to this image instead of the stock nginx:alpine image.  But first I need to prime the pump with the certificates, so I don't yet stop the existing running web server for this step.

Priming the Pump

I need to run certbot at least once to get the initial certificates.  I will do this by running our new image with 'sh' as the command, which prevents nginx from starting within it.  I will also mount some more paths.

Certbot tends to like to put stuff in /etc/letsencrypt.  So I'll create another directory on the datadrive:

mkdir -p /mnt/datadrive/srv/config/certbot/etc/letsencrypt

and I'll mount that into this image (and eventually into the www image as well).

#run it interactively so I can generate the initial certificates
docker run -it --rm --name certbottest \
    --mount 'type=bind,src=/mnt/datadrive/srv/config/certbot/etc/letsencrypt,dst=/etc/letsencrypt' \
    --mount 'type=bind,src=/mnt/datadrive/srv/data/www,dst=/srv/www' \
    nginx-certbot sh

and once running inside that container, I can generate our original certificate:

#within the container, generate certificates
certbot certonly --webroot --agree-tos -n --email person@whatever.com \
    -d example.com \
    -w /srv/www/vhosts/example.com --dry-run

The email person@whatever.com should be changed -- it is used to receive notifications from Let's Encrypt about impending expiration.  Also, the domain -d should be changed to whatever is relevant in your case, and -w should similarly be changed to whatever is actually your web root directory.

This will do some crypto stuff and generate a well-known resource in your web root, and then contact Let's Encrypt's backend.  If that backend can reach the well-known challenge resource via the domain name you have provided, then a certificate will be issued.

I ran the command specifying '--dry-run', so I'm not really going to get a certificate this time.  This is for verifying that you've got everything set up correctly.  If it is correct, then run it once more, but without --dry-run, and you will get your certificate for real!  It will be placed in the directory '/etc/letsencrypt/live/example.com' (obviously with the last component changed to your actual domain).  The important files in here are:

  1. fullchain.pem
  2. privkey.pem

The first is the public data (the certificate chain), and the second is private (the private key).  I now can start working on our nginx configuration.  But before that, here are a couple of parting notes:

  1. this worked because I still had a separate web server running, servicing that domain.  If I did /not/ have a separate web server, I could have used the '--standalone' option to have certbot run a minimal web server itself.  In that case, the container would need to be started with port 80 published, so that the standalone server in the container could be reached by the Let's Encrypt backend for validation (see the sketch after this list).
  2. you can use the '-d' option multiple times to specify multiple server names, e.g. '-d example.com -d www.example.com -d ftp.example.com'.  Let's Encrypt's backend must be able to reach every one of those in order to issue the certificate, but this way you can support multiple subdomains.  That is quite handy if you want to have several services proxied through nginx -- you don't have to open a bunch of ports, since requests get routed based on the HTTP headers.  It's also handy for special services like FTP, even though those will eventually be on a different port anyway.
  3. as a counterpoint, free DNS providers like 'no-ip.com' do /not/ support subdomains on their free accounts, so that approach won't work there.  But if you have a full DNS provider like GoDaddy or Google Domains, then you can do this.
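For completeness, the standalone variant mentioned in the first note would look something like the following.  This is just a sketch -- I didn't need it here -- and it assumes nothing else is bound to port 80 at the time:

#publish port 80 so the Let's Encrypt backend can reach certbot's built-in server
docker run -it --rm --name certbottest -p 80:80 \
    --mount 'type=bind,src=/mnt/datadrive/srv/config/certbot/etc/letsencrypt,dst=/etc/letsencrypt' \
    nginx-certbot \
    certbot certonly --standalone --agree-tos -n --email person@whatever.com \
    -d example.com --dry-run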

Making Diffie-Hellman Parameters

A brief excursion:  I will be using ephemeral Diffie-Hellman key agreement, so I'm going to need some parameters.  On your desktop machine, use openssl:

openssl dhparam -out dhparam.pem 4096

This will take about an hour -- maybe more -- so I do recommend doing it on a desktop machine instead of the RPi.

Diffie-Hellman parameters are non-secret data, so if you have a set of them already from other things, feel free to re-use them.

When done, copy those into:

/mnt/datadrive/srv/config/nginx/dhparam.pem

Modding Systemd and Nginx Configs

OK, for the final steps, I modify the configs of our systemd service (actually, the docker-compose.yml for that service) to use the new image, and the nginx config to do TLS.

Stop the services:

sudo systemctl stop docker-compose@myservices

Edit the /etc/docker/compose/myservices/docker-compose.yml:

version: '3'
services:

  #PHP-FPM service (must be FPM for nginx)
  php:
    image: php:fpm-alpine
    container_name: php
    restart: unless-stopped
    tty: true
    #don't need to specify ports here, because nginx will access from services-network
    #ports:
    #  - "9000:9000"
    volumes:
      - /mnt/datadrive/srv/config/php/www.conf:/usr/local/etc/php-fpm.d/www.conf
      - /mnt/datadrive/srv/data/www:/srv/www
    networks:
      - services-network

  #nginx
  www:
    depends_on:
      - php
    image: nginx-certbot
    container_name: www
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /mnt/datadrive/srv/config/nginx/default.conf:/etc/nginx/conf.d/default.conf
      - /mnt/datadrive/srv/data/www:/srv/www
      - /mnt/datadrive/srv/config/certbot/etc/letsencrypt:/etc/letsencrypt
      - /mnt/datadrive/srv/config/nginx/dhparam.pem:/etc/ssl/certs/dhparam.pem
    networks:
      - services-network

#Docker Networks
networks:
  services-network:
    driver: bridge

The salient changes are in the 'www' service, the changing of the 'image', the addition of the 443 to 'ports', and the mounting of the '/etc/letsencrypt' directory and 'dhparam.pem' file.

Don't restart yet; I need to modify the nginx config now:

/mnt/datadrive/srv/config/nginx/default.conf

#this overrides the 'default.conf' in the nginx-certbot container

#this is for the example.com domain web serving; I have php enabled here

#this does http-to-https redirect
server {
    listen       80;
    listen  [::]:80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
#this does the https version
server {
    listen       443 ssl http2;
    listen  [::]:443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
    ssl_ecdh_curve secp384r1;
    ssl_session_timeout  10m;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    root /srv/www/vhosts/example.com;
    index  index.html index.htm index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string; 
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # pass the PHP scripts to FastCGI server listening on (docker network):9000
    #
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$; 
        fastcgi_pass php:9000; 
        fastcgi_index index.php; 
        include fastcgi_params; 
        fastcgi_param REQUEST_URI $request_uri; 
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; 
        fastcgi_param PATH_INFO $fastcgi_path_info; 
    }
}

OK! Most of the server config is now in a separate section listening on port 443 for TLS connections.  I added a bunch of TLS-related parameters, most importantly specifying the location of the cert chain and private key.  I also added a server on unencrypted transport on port 80, which simply responds with a redirect to the equivalent resource on 443.

Now I can restart the services, and with luck I should be serving up the HTTPS version of our web site, PHP-enabled, and with certbot running periodically to keep our certificates renewed.  Done with this service!
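For reference, restarting is just the inverse of the stop from earlier (assuming the same systemd template unit):

sudo systemctl start docker-compose@myservices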

Another service I need to dockerize is my source code management system.  I use several -- a legacy SVN server, and a more modern Fossil-SCM system.  Fossil appeals to me for several reasons that I will lay out, but proselytizing that is not my intent here.  Rather, this will require me to build a docker image 'from scratch' (well, sort of), so I'll be able to see how to do that.

Next

Making a docker image from scratch for fossil-scm.
