I can think of two sites of mine that could really use SSL/TLS. It’s time to check out https://letsencrypt.org/.

Oh, and AWS has free certificates now, at least for AWS services.

Jan 2 notes

Following Certbot instructions (sort of)

  • Using Ubuntu on Win10
  • Tried curl -O https://dl.eff.org/certbot, but that doesn’t exist
  • Begrudgingly: curl -O https://dl.eff.org/certbot-auto, chmod 755 certbot-auto, ./certbot-auto
    • It runs apt, needs my sudo password
    • It needs these beyond what I have installed; I let it install them:
      • augeas-lenses
      • dialog
      • libaugeas0
      • libffi-dev
      • libssl-dev
      • python-virtualenv
      • zlib1g-dev
  • Certbot wants to try to do everything itself, and that’s not how I work. Will use the manual / certonly plugin/command
  • ./certbot-auto certonly
  • Getting mad at this script; it wants root for everything
  • The previous command seems to need access to a web root. Nope! We’ve just met, and I’m not that kind of web admin.
  • ./certbot-auto certonly --manual
  • The damn thing still wants sudo…why?
  • Started fuming and cursing, then realized it was showing me two options, and option #1 is what I wanted:

      Make sure your web server displays the following content at
      http://example.tld/.well-known/acme-challenge/q_POp<redacted gobbledegook>fB5M before continuing:
      q_P<redacted gobbledegook>Adc
      If you don't have HTTP server configured, you can run the following
      command on the target server (as root):
      mkdir -p /tmp/certbot/public_html/.well-known/acme-challenge
      cd /tmp/certbot/public_html
      printf "%s" q_P<redacted gobbledegook>Adc > .well-known/acme-challenge/q_POp<redacted gobbledegook>fB5M
      # run only once per server:
      $(command -v python2 || command -v python2.7 || command -v python2.6) -c \
      "import BaseHTTPServer, SimpleHTTPServer; \
      s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
      s.serve_forever()"
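  • For future reference: Python 2’s BaseHTTPServer and SimpleHTTPServer became http.server in Python 3, so the throwaway-server trick would look like this (my translation, not certbot output; made-up token names, and a high port because 80 needs root):

```shell
# Stand up a fake challenge file and serve it the same way certbot's
# suggested one-liner does, using Python 3's built-in web server.
mkdir -p /tmp/acme-demo/.well-known/acme-challenge
printf '%s' token-contents > /tmp/acme-demo/.well-known/acme-challenge/token-name
cd /tmp/acme-demo
python3 -m http.server 8080 &
SRV=$!
sleep 1
# The CA's validation server would fetch the token like this:
curl -s http://localhost:8080/.well-known/acme-challenge/token-name   # prints token-contents
kill $SRV
```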
  • Manually putting this on the IIS back end server didn’t work, likely because of the dot-name folder. Meh, I’ll do it on the front end
    • Actually, now that I see how this works, I’ll probably end up routing .well-known for all encrypted sites to a special-purpose back end that handles the cert acquisition
  • In fact, doing this now and routing to my Win10/Ubuntu box
    • nginx conf snippet for my front end / load balancer

        location /.well-known/acme-challenge/ {
                # proxy_pass to my Win10/Ubuntu box goes here (address
                # omitted; later runs used port 8121 on that host)
        }
    • Opened port 80 on my local machine

  • (Oh FFS this script requires sudo to run --help >:( ))
  • For some reason this still prompted me for the domain names (likely because I’d dropped the line continuation before --domains, so certbot never saw that flag)

      email=jim@jimnelson.us
      domains=example.tld
      ./certbot-auto \
          certonly \
          --standalone \
          --standalone-supported-challenges http-01 \
          --email $email \
          --domains $domains
  • But aside from that it worked:

       - Congratulations! Your certificate and chain have been saved at
         /etc/letsencrypt/live/example.tld/fullchain.pem. Your cert
         will expire on 2017-04-03. To obtain a new or tweaked version of
         this certificate in the future, simply run certbot-auto again. To
         non-interactively renew *all* of your certificates, run
         "certbot-auto renew"
       - If you like Certbot, please consider supporting our work by:
         Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
         Donating to EFF:                    https://eff.org/donate-le
  • It populated /etc/letsencrypt with the cert, apparent credentials and even a renewal conf file
    • I think I’ll keep a letsencrypt Docker container to run this from and then copy over just the certs; that way the certbot script has no access to the web servers, and the load balancer holds only the current certs, not the Let’s Encrypt credentials
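    • The copy step could be something as simple as this sketch (sync_certs and the paths are made-up names; cp -L matters because live/ holds symlinks into archive/):

```shell
#!/bin/sh
# Sketch: copy only the live certs out of a letsencrypt data dir,
# leaving the ACME account credentials (accounts/) behind.
sync_certs() {
    src=$1    # e.g. /etc/letsencrypt
    dst=$2    # cert directory on the load balancer
    for d in "$src"/live/*/; do
        name=$(basename "$d")
        mkdir -p "$dst/$name"
        # -L dereferences the symlinks into archive/ so the destination
        # gets real files, not dangling links
        cp -L "$d/fullchain.pem" "$d/privkey.pem" "$dst/$name/"
    done
}

# e.g. sync_certs /etc/letsencrypt /srv/lb-certs
```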
  • Success! Here is a slightly-redacted nginx site conf; the proxy_pass targets are placeholder hostnames. I have two server blocks because I’m redirecting port 80 to 443. If I want to serve on both I could just add the listen-80 lines to the second block and eliminate the first.

      server {
              listen 80;
              listen [::]:80;
              server_name example.tld;
              location /.well-known/acme-challenge/ {
                      # ACME challenges go to the certbot container
                      proxy_pass http://certbot-host:8121;
              }
              location / {
                      return       301 https://example.tld$request_uri;
              }
      }

      server {
              listen 443 ssl;
              listen [::]:443 ssl;
              server_name example.tld;
              ssl_certificate /etc/letsencrypt/live/example.tld/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.tld/privkey.pem;
              location /.well-known/acme-challenge/ {
                      # ACME challenges go to the certbot container
                      proxy_pass http://certbot-host:8121;
              }
              location / {
                      # Retain host header
                      proxy_set_header Host "example.tld";
                      # Add origin IP to headers
                      proxy_set_header X-Real-IP $remote_addr;
                      # everything else goes to the back-end web server
                      proxy_pass http://backend-host;
              }
      }
  • While creating a Dockerfile / Docker image for certbot
    • I used centos:latest over ubuntu:latest because curl is included in the former
    • But the image build kept failing because yum as called by certbot-auto was asking for input
    • Figured out that I can echo assumeyes=1 >> /etc/yum.conf instead of editing the certbot script to make yum work
    • Was trying to figure a way to run certbot to get it to install dependencies for the image, then realized duh, just look at the packages it wants and install them myself
    • Oh, while looking at the script itself I discovered the --non-interactive and --os-packages-only options, which fix the problems I worked around above
    • Now I need to figure out how to preinstall the Python virtual environment.
    • Looks like I can set XDG_DATA_HOME to pick where data is saved and then make that dir a volume
  • Dockerfile. I have a persistent data container named “letsencrypt” and run the container like `docker run -ti --rm --volumes-from letsencrypt -p 8121:80 certbot`. And I edited my nginx conf to use this host's 8121 as the proxy_pass target. And I tagged the image "certbot".

      FROM centos:latest
      MAINTAINER jim@midnightfreddie.com
      ENV XDG_DATA_HOME /opt/xdg-data-home
      RUN mkdir -p $XDG_DATA_HOME \
              && mkdir -p /etc/letsencrypt \
              && cd /root \
              && curl -O https://dl.eff.org/certbot-auto \
              && chmod a+x certbot-auto \
              && /root/certbot-auto --non-interactive --os-packages-only
      # To save the auth and cert data
      VOLUME /etc/letsencrypt
      # To share the Python virtual environment so it isn't rebuilt each run when --volumes-from is used
      VOLUME /opt/xdg-data-home
      EXPOSE 80
      ENTRYPOINT ["/root/certbot-auto"]
  • LOL, passing multiple domains to one run of the command creates one cert with all the domains in it. I didn’t intend that, although I guess it could work.
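  • An easy way to double-check which names landed in a cert is plain openssl (nothing certbot-specific; list_sans is just a throwaway helper name of mine):

```shell
# Print the Subject Alternative Names in a cert; pass the path to a PEM,
# e.g. /etc/letsencrypt/live/example.tld/fullchain.pem
list_sans() {
    openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}
```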
  • I’m getting messages about --standalone-supported-challenges being deprecated and needing to use --preferred-challenges instead. I might fix that at a later time.
  • Because I tried cramming all the domains together and then did them separately, I have for my first one an example.tld and example.tld-0001 folder for one domain. I’ll sort that out later, too.
  • Hmmm, at least one of my certs is all wrong
    • Others seem fine, and I’m not yet sure if it’s the cert or my configuration
    • Oh, this may be due to my hitting my sites from the “wrong” side of the router
      • Ah yes! I think the working sites have IPv6? Not sure on that.
      • Yes, the working sites have IPv6 addresses, so I’m not hitting the router but my IPv6 gateway instead
      • For IPv4-only websites I’m hitting the LAN side of my router on port 443 which means I’m accessing the router’s admin panel instead of the DMZ host.
      • External sites can reach the “broken” sites
      • So I either need IPv6 addresses for everything and/or move my router’s web console port
      • Changing the router’s console port didn’t help
      • The question isn’t why doesn’t it work; the question is why did port 80 work?
      • Resolved by assigning the public IP as a second IP to the load balancer and routing that IP internally to it, so now I’m not hitting the LAN port when trying to hit the public IP.
  • I’m not putting the naked domains (e.g. midnightfreddie.com) in the port 443 confs
    • This results in a cert error if I browse to it, but I’ve never used https with the naked domains, and the http redirects to https://www…
    • I could add the naked domains to the certs, but I’m intending not to use naked domains anymore because they aren’t compatible with CNAME redirection in most cases

Feb 28 Renewal Attempt

I have a little over a month until my certs expire. I don’t yet have automation to renew them.

  • Was going to try to make an Ansible playbook to manage this, but after reviewing the above I realized I just needed to run the “renew” command and then copy /etc/letsencrypt/live and maybe archive to the target server.
  • But first I removed the configs for a site I moved to AWS (and am using AWS certs for) and my original goof with all the domain names on one cert
  • docker run -ti --rm --volumes-from letsencrypt certbot renew --standalone --standalone-supported-challenges http-01 --agree-tos
  • I apparently missed deleting two renewal config files because it looked for and errored out on the two domains I tried to delete
  • The rest it checked, but it says they’re not ready for renewal. I guess I have to wait a few more days.
  • But it looks like I can run the above command every couple of weeks, then if the certs are updated copy them to the target server and restart the web server.
  • Rather than have one script manage this, I think I might make an S3 bucket, and the letsencrypt side will push certs if renewed, and the web server will check the bucket and pull any updated certs.
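  • The pull side of that could look roughly like this sketch (all names made up; the fetch command is a parameter so "aws s3 cp" drops in for real use while plain cp works for a dry run):

```shell
#!/bin/sh
# Sketch: fetch the latest cert, and bounce the web server only if the
# cert actually changed.
pull_cert() {
    fetch=$1        # e.g. "aws s3 cp" in production, "cp" for a dry run
    remote=$2       # e.g. s3://my-cert-bucket/example.tld/fullchain.pem
    local_path=$3   # e.g. /etc/nginx/certs/example.tld/fullchain.pem
    reload=$4       # e.g. "nginx -s reload"
    # Checksum before and after the fetch; reload only on a change
    before=$(md5sum "$local_path" 2>/dev/null | cut -d' ' -f1)
    $fetch "$remote" "$local_path"
    after=$(md5sum "$local_path" | cut -d' ' -f1)
    if [ "$before" != "$after" ]; then
        $reload
    fi
}

# A cron job on the web server might run:
# pull_cert "aws s3 cp" s3://my-cert-bucket/example.tld/fullchain.pem \
#     /etc/nginx/certs/example.tld/fullchain.pem "nginx -s reload"
```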