How to Automate the Deployment of a Password Protected Static Website on S3 with Jenkins, Docker and Kubernetes
Author: TrackIt
If you need to host a website that only contains static content, S3 can be an excellent choice for its simplicity: you upload your files to your bucket and don't need to worry about anything else.
However, today's companies are looking for automation, and automation usually means a CI tool; we are going to use Jenkins in this example.
In the first part of this article, we will discuss how to automate the build of a Ruby website (the approach works for any language) and upload it to S3 with Docker and Jenkins. In the second part, we will discuss how to password-protect it with Nginx and Kubernetes.
Automated build with Jenkins and Docker
Create Dockerfile
In this part, I'm assuming that your project is hosted in a git repository and that this repository is accessible by Jenkins (if not, look into the Jenkins Git plugin).
In order to keep your Jenkins installation clean and avoid having to install a lot of packages/dependencies, we are going to use a temporary Docker container to build the website.
Our website is written in Ruby, so we are going to use the following Dockerfile. Feel free to adapt it if you're using a different language.
```dockerfile
FROM ruby:2.2.7
RUN mkdir -p /app
ADD . /app
WORKDIR /app
RUN apt-get update
RUN gem install --no-ri --no-rdoc bundler
RUN bundle install
```
/app is where our project will be built.
This Dockerfile needs to be placed at the root of your project dir.
Note that we are not building it yet; the build command will be executed directly by Jenkins.
Let's try to build it locally:
```shell
docker build -t my-static-website .
mkdir website && docker run -v `pwd`/website:/app/build my-static-website bundle exec middleman build --clean
```
If everything went correctly, your build should be in the website/ directory on your machine. If so, we can proceed to the next step.
Creating S3 bucket
We need to create our S3 bucket, give it the proper permissions and create an IAM user so Jenkins is allowed to upload to it.
Let's call it static-website-test and put it in the region you prefer. Leave everything as default and confirm creation.
In your bucket settings, under the Properties tab, enable static website hosting. Under Permissions, paste the following policy so the website will be publicly accessible:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::static-website-test/*"
    }
  ]
}
```
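If you prefer to script this step, the same bucket setup can be done with the AWS CLI. This is a sketch under the assumption that your CLI is configured with sufficient permissions; the bucket name and region match the example above.

```shell
# Write the public-read policy to a file (same policy as above).
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::static-website-test/*"
    }
  ]
}
EOF

# Then, against a real AWS account:
# aws s3 mb s3://static-website-test --region us-east-1
# aws s3 website s3://static-website-test --index-document index.html
# aws s3api put-bucket-policy --bucket static-website-test --policy file://bucket-policy.json
```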
Creating IAM user for Jenkins
Access the IAM section of your AWS console, create a new user, and give it the following policy (so the user will only be allowed to access the static website bucket):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::static-website-test"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::static-website-test/*"
      ]
    }
  ]
}
```
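As with the bucket, this step can be scripted with the AWS CLI. A sketch; the user name `jenkins-s3-uploader` and the policy name are placeholders of my own, not part of the original setup:

```shell
# Save the Jenkins policy to a file (same policy as above).
cat > jenkins-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::static-website-test"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::static-website-test/*"]
    }
  ]
}
EOF

# Then, against a real AWS account:
# aws iam create-user --user-name jenkins-s3-uploader
# aws iam put-user-policy --user-name jenkins-s3-uploader \
#     --policy-name static-website-test-access \
#     --policy-document file://jenkins-s3-policy.json
# aws iam create-access-key --user-name jenkins-s3-uploader   # keys for Jenkins credentials
```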
Create Jenkinsfile
Now that everything is ready on AWS side, we need to write a Jenkinsfile. A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.
For this part, I assume that Docker is configured with Jenkins and that the AWS plugins are installed.
Our pipeline is going to have two steps: building the website, and uploading it to S3.
```groovy
#!/usr/bin/env groovy

def appName = "static-website-test"

node {
    stage('Build website') {
        docker.withServer(env.DOCKER_BUILD_HOST) {
            deleteDir()
            checkout scm
            sh "git rev-parse --short HEAD > .git/commit-id"
            commitId = readFile('.git/commit-id').trim()
            imageId = "${appName}:${commitId}".trim()
            appEnv = docker.image(imageId)
            sh "docker build --tag ${imageId} --label BUILD_URL=${env.BUILD_URL} ."
            sh "mkdir website && docker run -v `pwd`/website:/app/build ${imageId} bundle exec middleman build --clean"
        }
    }
    stage('Upload site to S3') {
        withAWS(credentials: 'AWS_STATIC_S3_SITES') {
            s3Upload(file: 'website', bucket: 'static-website-test', path: '')
        }
    }
}
```
Commit to repository
Push your Dockerfile and Jenkinsfile to your repository. Depending on your Jenkins configuration, the build may or may not start automatically. If not, go to your project and click Build Now.
Make sure that all steps finished successfully, then visit your website URL (shown in your S3 bucket's static website hosting settings); you should see your website. Congratulations!
Protect the website with authentication using Kubernetes and Nginx
Now let's imagine that you need to password-protect your website (to host internal documentation, for example). To do that, we need a reverse proxy in front of the website, for the simple reason that S3 doesn't allow that kind of customization (it's static page hosting at its simplest).
Rather than detailing the installation of a standalone nginx reverse proxy, let's imagine that you're hosting your own Kubernetes cluster in your AWS VPC. Kubernetes is a powerful tool that can manage AWS resources for you. A quick example: if you deploy multiple websites on your cluster, Kubernetes manages ports and Elastic Load Balancers for you. So with a single command, you can have a running website with an ELB created and associated.
Sound amazing? Here's how simple it is:
Prepare nginx configuration
Connect to your Kubernetes cluster.
Create your password db:
```shell
echo -n 'user:' >> htpasswd
openssl passwd -apr1 >> htpasswd
```
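The openssl command above prompts for the password interactively. If you need a non-interactive variant for scripting, something like this works; the user name and password here are placeholders only:

```shell
# Generate a htpasswd entry without a prompt (example credentials only;
# avoid putting real passwords on the command line in shared environments).
printf 'user:%s\n' "$(openssl passwd -apr1 'S3cretPass')" > htpasswd
cat htpasswd   # user:$apr1$...
```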
Here is the configuration that will be used by the nginx container:
my-static-website.conf
```nginx
upstream awses {
    server static-website-test.s3-website-us-east-1.amazonaws.com fail_timeout=0;
}

server {
    listen 80;
    server_name _;

    keepalive_timeout 120;
    access_log /var/log/nginx/static-website.access.log;

    location / {
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
        proxy_pass http://awses;
        proxy_set_header Host static-website-test.s3-website-us-east-1.amazonaws.com;
        proxy_set_header Authorization "";
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
        proxy_hide_header x-amz-meta-server-side-encryption;
        proxy_hide_header x-amz-server-side-encryption;
        proxy_hide_header Set-Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_intercept_errors on;
        add_header Cache-Control max-age=31536000;
    }
}
```

Note that the Host header and the upstream must both point at the bucket's website endpoint, and that the .htpasswd path matches the directory where the secret will be mounted (/etc/nginx/conf.d).
In order to pass these files to Kubernetes, we are going to encode them in base64 and store them as a Secret.
```shell
cat htpasswd | base64
cat my-static-website.conf | base64
```
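One caveat: GNU base64 wraps its output at 76 columns, and a wrapped value pasted into the Secret's YAML is invalid. On Linux, `-w 0` forces single-line output (BSD base64 on macOS doesn't wrap by default):

```shell
# -w 0 disables line wrapping so the value can be pasted as one line:
encoded=$(printf 'user:example-hash' | base64 -w 0)
echo "$encoded"
# Decoding restores the original content:
printf '%s' "$encoded" | base64 -d; echo
```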
Copy and paste the outputs into your secret file (nginx-auth-secret.yml):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: nginx-auth-config
  namespace: default
type: Opaque
data:
  .htpasswd: BASE64_OUTPUT
  my-static-website.conf: BASE64_OUTPUT
```
Now let's write our deployment file (nginx-auth-deployment.yml):
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-auth
  labels:
    name: nginx-auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-auth
    spec:
      imagePullSecrets:
        - name: docker-registry
      volumes:
        - name: "configs"
          secret:
            secretName: nginx-auth-config
      containers:
        - name: nginx-auth
          image: nginx:latest
          imagePullPolicy: Always
          command:
            - "/bin/sh"
            - "-c"
            - "sleep 5 && nginx -g \"daemon off;\""
          volumeMounts:
            - name: "configs"
              mountPath: "/etc/nginx/conf.d"
              readOnly: true
          ports:
            - containerPort: 80
              name: http
```
Same for the service (nginx-auth-svc.yml)
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-proxy-svc
  name: nginx-proxy-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
  selector:
    name: nginx-auth
  type: LoadBalancer
```
This last file is the one that interests us most. While the others dealt with nginx and deployment configuration, this one links your deployment to AWS. We can see how easy it is to create an Elastic Load Balancer. This file is highly customizable, and you can describe all kinds of AWS resources you could possibly need.
Now create your deployment and service:
```shell
kubectl apply -f nginx-auth-secret.yml
kubectl create -f nginx-auth-deployment.yml
kubectl create -f nginx-auth-svc.yml
```
You can follow the progress with the two following commands:
```shell
kubectl describe deployment nginx-auth
kubectl describe service nginx-proxy-svc
```
Once everything is completed, the description of your service should give you the URL of your ELB. Visit this URL and congratulations: your website is available and password protected!
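You can also verify the protection from the command line. A sketch; the hostname below is a placeholder for the real ELB address reported by `kubectl describe service nginx-proxy-svc`:

```shell
ELB_URL="example-123456.us-east-1.elb.amazonaws.com"   # placeholder hostname

# Without credentials, nginx should answer 401 Unauthorized:
#   curl -s -o /dev/null -w '%{http_code}\n' "http://$ELB_URL/"
# With credentials, the page should load:
#   curl -u user:yourpassword "http://$ELB_URL/"
echo "curl -u user:PASSWORD http://$ELB_URL/"
```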
About TrackIt
https://www.youtube.com/watch?v=QBiJ156cA2I
TrackIt is an international AWS cloud consulting, systems integration, and software development firm headquartered in Marina del Rey, CA.
We have built our reputation on helping media companies architect and implement cost-effective, reliable, and scalable Media & Entertainment workflows in the cloud. These include streaming and on-demand video solutions, media asset management, and archiving, incorporating the latest AI technology to build bespoke media solutions tailored to customer requirements.
Cloud-native software development is at the foundation of what we do. We specialize in Application Modernization, Containerization, Infrastructure as Code and event-driven serverless architectures by leveraging the latest AWS services. Along with our Managed Services offerings which provide 24/7 cloud infrastructure maintenance and support, we are able to provide complete solutions for the media industry.
