Let's Deploy! (Part 2)

If you are looking to run your own webserver, like Nginx, and you need a simple declarative deployment solution, then this post is for you. I’ve been running this solution for about a year and a half on two of my API web services, and I haven’t had to manually intervene for any SSL cert-related issues. Best of all, the LetsEncrypt renewal process runs in a separate container, which links to my Nginx container, and everything is composed within a single Docker Compose file. This keeps things modular and manageable through a single .yml file.

So, how do we set this up? Everything can be deployed from a single .yml file used by Docker Compose:

version: '2'
services:

  letsencrypt:
    image: lukeswart/letsencrypt
    entrypoint: ""
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
      - /var/lib/letsencrypt:/var/lib/letsencrypt
    expose:
      - "80"
      - "443"
    environment:
      # The LETSENCRYPT_* values are described in the next section; here they
      # are injected from the shell environment (or a .env file):
      - TERM=xterm
      - LETSENCRYPT_DOMAINS=${LETSENCRYPT_DOMAINS}
      - LETSENCRYPT_EMAIL=${LETSENCRYPT_EMAIL}
      - LETSENCRYPT_DEBUG_MODE=${LETSENCRYPT_DEBUG_MODE}

  nginx:
    image: lukeswart/nginx-letsencrypt
    volumes:
      - ./nginx-acme-challenge.conf:/etc/nginx/nginx.conf
      - ./nginx.conf:/etc/nginx/nginx-secure.conf
      # Gives nginx access to the certs generated by the letsencrypt container:
      - /etc/letsencrypt:/etc/letsencrypt
    environment:
      - MY_DOMAIN_NAME=${MY_DOMAIN_NAME}
    links:
      - my-app-container
      - letsencrypt
    ports:
      - 80:80
      - 443:443
    restart: always

  # Our cron container that restarts our letsencrypt container, and reloads our nginx container
  letsencrypt-cron:
    image: docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt-nginx-cron:/etc/cron.d/letsencrypt-nginx-cron
    command: sh -c "chmod a+x /etc/cron.d/letsencrypt-nginx-cron && touch /var/log/crond.log && crontab /etc/cron.d/letsencrypt-nginx-cron && crond -l 0 -L /var/log/crond.log && echo 'starting nginx-cron' && tail -f /var/log/crond.log"

Understanding our linked containers:

So in this example, we are starting the following three containers:

  • letsencrypt: interfaces with the LetsEncrypt CLI to handle SSL generation/renewals
  • letsencrypt-cron: a lightweight container that makes calls to the letsencrypt container using a cron job to handle SSL cert renewals
  • nginx: our Nginx webserver, which links to the location of the SSL certs (saved under /etc/letsencrypt), and also links to our app container. Our app container is not shown here because it can be anything, but it is named my-app-container. The Docker image is lukeswart/nginx-letsencrypt, which is built from the official nginx image, but with an added script to listen for SSL cert generation before launching (See the Nginx config files section below for details)
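With these three services defined, bringing everything up is a standard Compose workflow. A sketch, assuming the file above is saved as docker-compose.yml in your project root:

```shell
# Start all three containers in the background:
docker-compose up -d

# Watch the letsencrypt container complete the ACME challenge:
docker-compose logs -f letsencrypt
```

Once the challenge succeeds, the certs land under /etc/letsencrypt on the host, where both the nginx and letsencrypt containers can see them.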

Configuring our containers using environment variables:

In the compose file above, there are four environment variables that we’ll need to understand and configure:

  • LETSENCRYPT_DOMAINS: Domain names to add onto the cert (e.g. example.com, or a space-separated list like example1.com example2.com)
  • LETSENCRYPT_DEBUG_MODE: Whether you want to generate a “dummy” SSL cert using the LetsEncrypt staging server. This should always be false in production environments.
  • LETSENCRYPT_EMAIL: The email address under which your LetsEncrypt cert will be registered.
  • MY_DOMAIN_NAME: This helps Nginx find the proper SSL cert. It should be the first domain name listed in your LETSENCRYPT_DOMAINS variable (e.g. example.com or example1.com)

Note that there are several ways to manage environment variables and inject them into Docker Compose. My StackOverflow answer here provides an overview of these approaches. I also explain my setup in detail in the first part of this write-up, which should provide enough detail about configuring environment variables with Docker Compose. I’ll reference that post later in this article as well when describing how to (optionally) include environment variables in your Nginx config.
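As a concrete example, a minimal .env file covering all four variables might look like this (the domains and email address are placeholders):

```shell
LETSENCRYPT_DOMAINS=example.com www.example.com
LETSENCRYPT_EMAIL=admin@example.com
LETSENCRYPT_DEBUG_MODE=false
MY_DOMAIN_NAME=example.com
```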

Cron file for handling our SSL cert renewals

letsencrypt-nginx-cron: This is the cron file that checks for LetsEncrypt cert renewal. I like to run this check once per day, at 23:00, as follows:

00 23 * * * docker restart myproject_letsencrypt_1 && echo 'running crontab' && docker exec myproject_nginx_1 nginx -s reload

# An empty line is required at the end of this file for a valid cron file.

Note that myproject_letsencrypt_1 and myproject_nginx_1 should be the names of your containers, which you can view by running docker ps -a. At some point in the near future, I hope to configure the names of these containers using environment variables as well!
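To confirm that a renewal actually went through, you can inspect the certificate’s expiry date with openssl. Here is a self-contained sketch using a throwaway self-signed cert; on a real host you would point the second command at /etc/letsencrypt/live/<your-domain>/fullchain.pem instead:

```shell
# Generate a throwaway self-signed cert (a stand-in for the real LetsEncrypt cert):
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 30 -nodes -subj "/CN=example.com"

# Print the expiry date, exactly as you would for the real cert:
openssl x509 -in cert.pem -noout -enddate
```

If the notAfter date keeps moving forward day after day, your renewals are working.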

Nginx config files:

We have two nginx config files, which are mounted as volumes on our nginx container. Note that both of these files contain environment variables that need to be injected into the .conf files. I describe this process in the first part of this write-up. In short, I have a bash script that performs the following:

# variables defined in .env will be exported into this script's environment:
set -a
source .env

# To avoid substituting nginx variables, which also use the shell syntax,
# we'll specify only the variables that will be used in our nginx config:
NGINX_VARS='${DOMAINS} ${MY_DOMAIN_NAME}'

# Now let's populate our nginx config templates to get an actual nginx config
# (which will be loaded into our nginx container):
envsubst "$NGINX_VARS" < nginx.conf > nginx-envsubst.conf
envsubst "$NGINX_VARS" < nginx-acme-challenge.conf > nginx-acme-challenge-envsubst.conf

After doing this, be sure to mount the new files, nginx-envsubst.conf and nginx-acme-challenge-envsubst.conf, into the nginx container. (The nginx.conf and nginx-acme-challenge.conf files will still exist, but they are only templates and won’t have their environment variables substituted.)

nginx-acme-challenge.conf: This config file serves as our nginx config while the letsencrypt CLI runs the challenge that is needed to issue the SSL cert. As soon as the challenge passes and the SSL cert is generated, we load the nginx.conf file in place of this one. Here is the nginx-acme-challenge.conf:

events { worker_connections 1024; }
http {
        server {
                listen 80;
                server_name ${DOMAINS};

                location /.well-known/acme-challenge {
                        proxy_pass http://letsencrypt:80;
                        proxy_set_header Host            $host;
                        proxy_set_header X-Forwarded-For $remote_addr;
                        proxy_set_header X-Forwarded-Proto https;
                }

                location /static/ {
                        root /api;
                        try_files $uri $uri/;
                }
        }
}
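The swap from this config to nginx.conf is handled by the startup script baked into the lukeswart/nginx-letsencrypt image. I haven’t reproduced the actual script here, but a hypothetical sketch of the idea looks like this:

```shell
#!/bin/sh
# Hypothetical sketch: serve the ACME challenge config until the cert
# appears, then switch to the secure config and reload nginx.
nginx -g 'daemon off;' &

while [ ! -f "/etc/letsencrypt/live/${MY_DOMAIN_NAME}/fullchain.pem" ]; do
    sleep 2
done

cp /etc/nginx/nginx-secure.conf /etc/nginx/nginx.conf
nginx -s reload
wait
```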

nginx.conf: This is our config file that will contain our virtual hosts, and should be configured the same as any other. Here is an example file, but note that the only pieces that are relevant to LetsEncrypt are within the server block below:

worker_processes 1;
error_log stderr notice;

events {
    worker_connections 1024;
}

http {

    include /etc/nginx/mime.types;
    charset utf-8;

    proxy_set_header Host $host;

    gzip_static on;
    gzip on;
    gzip_min_length  1100;
    gzip_buffers  4 32k;
    gzip_types    text/plain application/x-javascript text/xml text/css;
    gzip_vary on;

    server {
        listen 443;
        server_name ${DOMAINS};

        ssl on;
        ssl_certificate /etc/letsencrypt/live/${MY_DOMAIN_NAME}/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/${MY_DOMAIN_NAME}/privkey.pem;
        # These are just SSL optimizations, unrelated to LetsEncrypt:
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;
        ssl_stapling on;
        ssl_stapling_verify on;

        ssl_session_cache shared:SSL:10m;
        ssl_dhparam /etc/ssl/private/dhparams.pem;

        # Used for our LetsEncrypt renewals:
        location /.well-known/acme-challenge {
            proxy_pass http://letsencrypt:443;
            proxy_set_header Host            $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
        }

        # This is an example of a configured app:
        location /static/ {
            root /api;
            try_files $uri $uri/;
        }

        location / {
            proxy_pass http://my-app-container:8010;
        }
    }
}
Conclusion

I hope this is useful to those interested in running IaaS-independent apps with their own webservers. It’s definitely a great way to save on resources and money for smaller-scale applications. I run all of these containers on a medium instance that serves thousands of requests per day, and I have not had a problem. With this declarative setup, you can have full control over your deployment without manual interventions, leaving more time to focus on your app!