I’m moving this site to S3/CloudFront. The second time is easier.

  • I created two buckets: one for the Jekyll-generated content, and one for the images, videos, and anything else I don’t want in my git repo. That means I’ll have to add custom routes in CloudFront to point certain paths at the second bucket
  • I did not enable HTTP (static website hosting) on these buckets or change permissions; I want to see whether I need to. I just used the AWS CLI to create the buckets and upload files.
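    • Roughly what I mean, as a sketch (the -nogit bucket name and the local paths are stand-ins, not necessarily what I actually typed):

      # Create the two buckets; the second name is a guess based on its "-nogit" nickname
      aws s3 mb s3://www.midnightfreddie.com
      aws s3 mb s3://www.midnightfreddie.com-nogit

      # Upload the Jekyll output (_site/ is Jekyll's default output dir) and the other assets
      aws s3 sync _site/ s3://www.midnightfreddie.com/
      aws s3 sync assets/ s3://www.midnightfreddie.com-nogit/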
  • I have a Let’s Encrypt cert, but I went ahead and requested an Amazon cert during the distribution creation. Unlike the first site/distribution, I was able to select the custom cert radio button immediately. I had to refresh a couple of times after approving the cert, but it was much easier than the first time.
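    • For reference, requesting the cert from the CLI would be something like this (CloudFront requires the ACM cert to live in us-east-1; the approval step happens over email):

      aws acm request-certificate \
          --domain-name www.midnightfreddie.com \
          --region us-east-1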
  • While the distribution was still pending, I went back and added my domain to the Alternate Domain Names (CNAMEs) field, and it let me edit that mid-deployment
  • I added a 404 error page
  • Creating a second origin - uh, maybe this isn’t what I thought it was. I want to route certain paths to the other bucket, and this doesn’t seem to be that.
  • Ah, OK, I think I have to create the origin and then go alter the behaviors. The existing behavior should keep using my first origin for everything until I change it.
  • Created the second origin, and the world didn’t end
  • Creating a behavior - very much like creating the original distribution
  • I need to investigate why compression is off by default. Does S3 compress? For now I’m enabling it because I can’t think of a downside.
  • Creating more behaviors for a total of four root-directory routes, turning on the HTTP-to-HTTPS redirect and automatic compression options on each (sketch below)
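    • For reference, each of these behaviors boils down to something like this fragment of the distribution config, trimmed to the relevant fields (the path pattern and origin ID are examples):

      {
          "PathPattern": "pics/*",
          "TargetOriginId": "S3-www.midnightfreddie.com-nogit",
          "ViewerProtocolPolicy": "redirect-to-https",
          "Compress": true
      }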
  • It seems that the custom error page path wanted a leading slash but the behavior path patterns don’t?
    • Later found in the docs that the leading slash is optional
  • Still in progress, but I’m getting an access denied message, which I think means I need to change permissions in S3
  • Did “Make Public” on just index.html; the root URL still gave access denied, but /index.html gave me the page (without the style sheet or favicon)
    • Oh, that was probably caching; I went back to /, hit refresh, and got the page (sans support files)
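    • If refreshing hadn’t been enough, an invalidation would have forced CloudFront to refetch from the bucket (the distribution ID here is a placeholder):

      aws cloudfront create-invalidation \
          --distribution-id EDFDVBD6EXAMPLE \
          --paths "/*"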
  • I could make all the existing objects public, but I want future newly-uploaded ones to be public too. I adapted this policy from an earlier post and applied it to the bucket. I don’t seem to need to change the existing objects’ permissions now; the links are working.

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "PublicReadGetObject",
                  "Effect": "Allow",
                  "Principal": "*",
                  "Action": "s3:GetObject",
                  "Resource": "arn:aws:s3:::www.midnightfreddie.com/*"
              }
          ]
      }
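
    • Applying it from the CLI (rather than the console’s policy editor) looks something like this, with the JSON above saved to policy.json:

      aws s3api put-bucket-policy \
          --bucket www.midnightfreddie.com \
          --policy file://policy.json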
    
  • Tested a link from the “-nogit” bucket, and it’s access denied because I haven’t changed that bucket’s policy yet. Applying it now.
  • Hmm, applying the policy alone doesn’t seem to have been enough. Tried “Make Public” on one of the top-level prefixes.
  • That didn’t help, even for files nested deep under that prefix
  • Explicitly made some pics/objects public. Still getting access denied. Either I’m not being patient or I’m doing something wrong.
  • I might not be patient enough. These are a second origin and additional behaviors, and the distribution is still deploying. I’ll wait until it’s fully deployed. I recall that on the other distribution I was getting access denied messages when I expected 404s, so it may not even be hitting the right bucket yet.
  • Oh! I’m an idiot. After deployment it still didn’t work, but I realized I had entered the top-level folder name with no *. I need the *. Fixing.
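    • If I’m reading my own note right, this was the behavior’s path pattern: a pattern like pics matches only that literal path, so covering everything under the folder takes the wildcard (the folder name is an example):

      "PathPattern": "pics/*"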
  • Now it’s working! Time to go CNAME my domain name to CloudFront.

The next day…

  • CNAME took effect, and pages are working
  • Still getting 403 AccessDenied responses instead of 404
    • Learned this is because the public doesn’t have the s3:ListBucket permission; without it, S3 returns 403 AccessDenied instead of 404
    • Tried to amend the bucket policy and failed miserably (I suspect the bucket policy is actually a fine place for it and I just got the statement wrong; see the sketch below)
    • Simply used the web GUI to add List for Everyone; now I get a “NoSuchKey” error on my first CloudFront site and a proper 404 page on this site, because I already set up the custom error page here
    • Alternatively, I might have been able to map a 403 response to a 404 response in CloudFront with my 404 page.
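    • For the record, the statement would be something like this; the catch is that s3:ListBucket applies to the bucket ARN itself, with no trailing /*:

      {
          "Sid": "PublicListBucket",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::www.midnightfreddie.com"
      }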
  • Still not getting referer logs
    • There is a column for it in the log file, but it’s always “-”
    • Apparently, unless otherwise configured, CloudFront strips the header because it would otherwise affect caching (each distinct referer value would mean a separately cached response)
    • I don’t see a way to log the referer and avoid this behavior
    • I suppose if I really want referer logging I could add a behavior matching *.html that forwards the Referer header, and grab it there (rough sketch after this list)
      • Perhaps I could even make it a temporary behavior and move it behind the default (*) behavior when not needed
      • In theory the extra cache variants and extra transfer would cost a little more money, but I’m not sure it’s a factor for my tiny site
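    • A rough sketch of what that *.html behavior might look like in the distribution config; the origin ID is an example, and everything other than the Headers list is ordinary behavior boilerplate:

      {
          "PathPattern": "*.html",
          "TargetOriginId": "S3-www.midnightfreddie.com",
          "ViewerProtocolPolicy": "redirect-to-https",
          "ForwardedValues": {
              "QueryString": false,
              "Cookies": { "Forward": "none" },
              "Headers": { "Quantity": 1, "Items": ["Referer"] }
          }
      }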