Bootstrapping an auto scaling web application within AWS via Kubernetes⌗
Let’s create a state-of-the-art deployment pipeline for cloud-native applications. In this guide, I’ll be using Kubernetes on AWS to bootstrap a load-balanced, static-files-only web application. This is serious overkill for such an application, but it showcases several necessities when designing such a system for more sophisticated applications. This guide assumes you are using OSX and that you are familiar with both Homebrew and AWS.
At the end of this guide, we will have a Kubernetes cluster onto which our application is automatically deployed on every check-in. The application will be load balanced (running in 2 containers) and health-checked. Additionally, different branches will get different endpoints and will not affect each other.

About the tools⌗
- Kubernetes: a Google-developed container cluster scheduler
- Terraform: a HashiCorp-developed infrastructure-as-code tool
- Wercker: an online CI service, specifically for containers
Getting to know Terraform⌗
To bootstrap Kubernetes, I will be using Kops, which can use Terraform under the hood to bootstrap a Kubernetes cluster. First, I made sure Terraform was up to date.
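Since this guide assumes Homebrew, a minimal sketch of that step (assuming Terraform is managed through Homebrew in the first place):

```sh
# install Terraform if missing, otherwise pull the latest version
brew install terraform || brew upgrade terraform
```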
To make sure my AWS credentials (saved in $HOME/.aws/credentials) were picked up by Terraform, I created an initial, bare-bones Terraform config (pretty much taken verbatim from the Terraform Getting Started Guide).
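A sketch of what such a config looks like; the AMI ID and resource name below are placeholders, not necessarily the values used in the original post:

```hcl
provider "aws" {
  # credentials are picked up from $HOME/.aws/credentials
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0d729a60"  # placeholder AMI available in us-east-1
  instance_type = "t2.micro"
}
```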
I then planned and applied it:
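That is, the standard two-step cycle:

```sh
terraform plan    # show what Terraform is about to create
terraform apply   # actually create the t2.micro instance
```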
That looks promising, and a quick glance at the AWS console confirmed that Terraform had indeed bootstrapped a t2.micro instance in us-east-1. I destroyed it quickly afterwards to incur little to no cost.
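The teardown is likewise a single command:

```sh
terraform destroy   # remove everything Terraform created for this test
```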
Alright, Terraform looks good, let’s get to work⌗
Now that I have a basic understanding of Terraform, let’s get to using it. As mentioned initially, we are going to use Kops to bootstrap our cluster, so let’s get it installed via the instructions found in the project’s GitHub repo.
This timed out for me, several times. Running go get with -u allowed me to rerun the same command again and again. This happened while my ISP was having some trouble, so your mileage may vary.
Afterwards, I built the binary
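At the time, the kops instructions boiled down to fetching and building from source; a sketch (the exact package path and make targets may have changed since):

```sh
# fetch the sources into the Go workspace; -u allows re-running after a timeout
go get -u -d k8s.io/kops

# build the kops binary from the checked-out sources
cd $GOPATH/src/k8s.io/kops
make
```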
Also, I made sure to already have a hosted zone set up via the AWS console (mine was already set up, since I’ve used Route 53 as my domain registrar).
After the compilation was done, I instructed Kops to output Terraform files for the cluster.
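A sketch of such an invocation; the cluster name, state bucket, and availability zone below are placeholders:

```sh
# --target=terraform makes kops emit Terraform files into --out
# instead of talking to the AWS APIs directly
kops create cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state \
  --zones=us-east-1a \
  --target=terraform \
  --out=out/terraform
```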
This will create the Terraform files in out/terraform, set up the Kubernetes config in ~/.kube/config, and store the Kops state inside an S3 bucket. This has the benefit that a) other team members can (potentially) modify the cluster and b) the infrastructure itself can be safely stored within a repository.
Let’s spawn the cluster
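Since kops only generated plain Terraform files, this is the familiar plan/apply cycle again (directory name taken from the --out flag above):

```sh
cd out/terraform
terraform plan    # review the VPC, instances, and other resources kops wants
terraform apply   # spin up the cluster
```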
And that is pretty much all there is to it; I was now able to connect to Kubernetes via kubectl.
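A quick smoke test, assuming kubectl picks up the config kops wrote to ~/.kube/config:

```sh
kubectl cluster-info
kubectl get nodes   # should list the master and the worker nodes once they are ready
```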
Now onto creating the application:
Creating our application⌗
For our demo application, we are going to use a simple (static) web page. Let’s bundle this into a Docker container. First, our site itself:
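Any minimal page will do; the actual markup is irrelevant to the pipeline, so something along these lines suffices:

```html
<!DOCTYPE html>
<html>
  <head><title>Hello from Kubernetes</title></head>
  <body>
    <h1>Hello from Kubernetes</h1>
  </body>
</html>
```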
Not very sophisticated, but it gets the job done. Let’s use Go as our HTTP server (again, this is just for demonstration purposes; if you are really thinking about doing something THAT complicated just to serve a static web page, have a look at this blog post instead. Still complex, but far less convoluted.)
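A minimal sketch of such a server, serving the static files from the working directory on port 8080 (matching the localhost:8080 URL used further down):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// serve everything in the current directory, including index.html
	http.Handle("/", http.FileServer(http.Dir(".")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```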
And our build instructions, courtesy of Wercker
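A rough sketch of what such a wercker.yml could look like; the structure and step names follow the Wercker documentation of the time, not necessarily the exact original file:

```yaml
box: golang

# "wercker dev" runs this pipeline locally and restarts the server on file changes
dev:
  steps:
    - internal/watch:
        code: go run main.go
        reload: true

# "wercker build" runs this pipeline, locally or on the Wercker servers
build:
  steps:
    - script:
        name: go build
        code: go build -o app .
```

Run locally, wercker dev starts the dev pipeline and keeps watching for changes, while wercker build runs the build pipeline once.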
This wercker file + command will automatically reload our local dev environment when we change things, so it will come in quite handy once we start developing new features. I can now access the page running on localhost:8080
Also, a wercker build will trigger a complete build step, including linting and testing (which we do not have yet).
Now, building locally is nice, however we’d like to create a complete pipeline so that our CI server can also do the builds. Thankfully, with our wercker.yml file we have already done that. All that is needed now is to add our repository to our Wercker account, and builds should trigger automatically after every git push.
Let’s have a look via the REST API (the most important part being the result, which passed).
Building our deployment pipeline⌗
Now that we’ve built our application, we still need a place to store the artifacts. For this, we are going to use the Docker Registry run by Docker (Docker Hub). I’ve added the deploy step to the wercker.yml and set the two environment variables, USERNAME and PASSWORD, via the Wercker GUI.
However, at first I was using the internal/docker-push step, which resulted in a whopping 256 MB container. After reading up on minimal containers, I changed it to docker-scratch-push instead, which produced a roughly 1 MB image. Also, I at first forgot to actually include the static files, which I remedied afterwards.
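A sketch of the resulting push step in the deploy pipeline; the repository name is a placeholder and the parameter names follow the Wercker step documentation of the time:

```yaml
deploy:
  steps:
    - internal/docker-scratch-push:
        username: $USERNAME                 # set in the Wercker GUI
        password: $PASSWORD                 # set in the Wercker GUI
        repository: someuser/static-site    # placeholder Docker Hub repository
        tag: $WERCKER_GIT_COMMIT            # tag the image with the commit hash
        cmd: ./app                          # the statically linked Go binary
```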
Now all that’s left is to publish this to our Kubernetes cluster.
Putting everything together⌗
For the last step, we are going to add the deployment to our Kubernetes cluster to the wercker.yml. This again needs several environment variables, which are set via the Wercker GUI.
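One way to sketch this is a plain script step that talks to the cluster via kubectl; the original post may well use a dedicated Wercker Kubernetes step instead, and the variable names below are placeholders for whatever is configured in the GUI:

```yaml
deploy:
  steps:
    # ... the docker-scratch-push step from above ...
    - script:
        name: deploy to kubernetes
        code: |
          # assumes kubectl is available inside the build container
          kubectl --server=$KUBERNETES_MASTER \
                  --username=$KUBE_USERNAME \
                  --password=$KUBE_PASSWORD \
                  apply -f kube.gen.yml   # the rendered kube.yml, see the templating step below
```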
Additionally, I’ve added the kube.yml file, which contains the service and deployment definitions for Kubernetes.
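A simplified sketch of such a file, using API versions current at the time; the image name, labels, and ports are placeholders, the {{BRANCH}}/{{COMMIT}} markers are filled in by the templating step described below, and the commit label whose removal is discussed further down is left out here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: "static-site-{{BRANCH}}"
spec:
  type: LoadBalancer        # gives each branch its own ELB and DNS name
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: static-site
    branch: "{{BRANCH}}"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "static-site-{{BRANCH}}"
spec:
  replicas: 2               # two containers behind the load balancer
  template:
    metadata:
      labels:
        app: static-site
        branch: "{{BRANCH}}"
    spec:
      containers:
        - name: static-site
          image: "someuser/static-site:{{COMMIT}}"
          ports:
            - containerPort: 8080
```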
Now, unfortunately, Kubernetes does not support parameterization inside its template files yet. This could be remedied by building the template files via a small script inside the wercker.yml.
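For example, a sed-based sketch using the Wercker-provided branch and commit environment variables:

```yaml
    # inside the deploy pipeline, before the kubectl apply step
    - script:
        name: render kube.yml
        code: |
          # substitute the placeholders with the current branch and commit
          sed -e "s|{{BRANCH}}|$WERCKER_GIT_BRANCH|g" \
              -e "s|{{COMMIT}}|$WERCKER_GIT_COMMIT|g" \
              kube.yml > kube.gen.yml
```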
This definition will result in all commits on all branches being deployed automatically. Different branches, however, will get different load balancers and therefore different DNS addresses.
And just to make sure, let’s check the actual deployed application:
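One way to do that is to ask Kubernetes for the service’s external endpoint and hit it directly:

```sh
kubectl get svc               # the ELB hostname shows up under EXTERNAL-IP
curl http://<elb-hostname>/   # should return our index.html
```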
Testing and health checks⌗
Up until now, we have only been hoping that our infrastructure and applications are working. Let’s make sure of that. However, instead of focusing on (classic) infrastructure tests, let’s first make sure that what actually matters is working: the application itself. For this, we can already use our pipeline. Let’s start working on our new feature:
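Presumably on a fresh branch (the branch name here is just a placeholder):

```sh
git checkout -b healthz-endpoint
```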
Now we are changing our application so that it responds to a /healthz endpoint (this is taken with slight adaptations from here):
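A sketch of the adapted server; the 10-second cutoff matches the behaviour described below:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	started := time.Now()

	// serve the static site as before
	http.Handle("/", http.FileServer(http.Dir(".")))

	// /healthz returns 200 OK for the first 10 seconds after startup
	// and a 500 afterwards -- the deliberately introduced "bug"
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		uptime := time.Since(started)
		if uptime > 10*time.Second {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprintf(w, "error: up for %v", uptime)
			return
		}
		w.Write([]byte("ok"))
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```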
This application now serves (as before) our index.html from / and additionally exposes a healthz endpoint that responds with 200 OK for 10 seconds and a 500 error after that. Basically, we’ve introduced a bug into our endpoint that does not even surface to a user. Remember that time when your backend silently swallowed every 100th request? Good times…
Now we also need to consume the healthz endpoint, which is done in our deployment spec.
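In the deployment’s container spec, that looks roughly like this (the probe timings are illustrative, not necessarily the original values):

```yaml
# inside spec.template.spec.containers[0] of the deployment
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # give the container a moment to start
  periodSeconds: 3         # probe every few seconds
```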
With those changes, we can push our new branch to GitHub and check the (new!) endpoint that Kubernetes created.
For a user everything looks fine; however, when we check the actual pod definitions, we can see that they die after a short time.
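The restarts show up when inspecting the pods:

```sh
kubectl get pods              # the RESTARTS column keeps climbing as the liveness probe fails
kubectl describe pod <pod>    # the events list the failed /healthz probes
```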
Let’s fix that:
Uh-oh, this is not related to our build file but to our infrastructure. This seems to be caused by https://github.com/kubernetes/kubernetes/issues/26202, which suggests that changing selectors (which the load balancer uses to know which containers to route to) is not a good idea; instead, new load balancers should be created. For our use case, let’s simply remove the commit label, since it is not needed anyway (the commit is already referenced in the image tag itself).
After that is fixed, let’s recheck our deployment
Much better. Let’s finish our work with a merge to master and recheck our deployment one last time.
Cleanup⌗
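Tearing everything down starts with the Terraform files that kops generated:

```sh
cd out/terraform
terraform destroy   # remove the cluster resources Terraform knows about
```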
Oh well, it looks like Terraform (or rather, AWS) did not update its state soon enough. No issue, though: you can simply rerun the command.
Voila. However, Kubernetes recommends also using Kops to delete the cluster, to make sure that any ELBs or volumes created during the usage of Kubernetes are cleaned up as well.
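Roughly, with the placeholder names from above:

```sh
kops delete cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state \
  --yes   # without --yes, kops only shows what it would delete
```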
Links⌗
ToDos⌗
Now, granted, this is not a comprehensive guide:
- It is still missing any sort of notification in case something goes wrong
- There is no automatic cleanup of deployments
- There is no automatic rollback in case of errors
- And, above all: this is extremely complicated just to host a simple web page. Again, for only static files, you are much better off using something like GitHub Pages or even S3.
Closing remarks⌗
Would I recommend using Kubernetes? ABSOLUTELY.
Not only is Kubernetes extremely sophisticated, it is also advancing at an incredible speed. For reference, I tried it out around a year ago with v0.18, and it did not yet have Deployments, Pets, Batch Jobs, or ConfigMaps, all of which are incredibly helpful.
Having said that, I am not sure I would necessarily recommend Wercker. Granted, it works nicely when it works. But I ran into several panics when trying to run the wercker CLI locally, got NO output whatsoever on the web GUI when the working directory did not exist, and found the documentation severely outdated. Yes, it is still in beta, but if this is an indication of things to come, I am not sure I would want to bet on it for something as critical as a CI server.
TL;DR⌗
To bootstrap a Kubernetes cluster:
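Condensed into the placeholder-named commands used above:

```sh
brew install terraform
go get -u -d k8s.io/kops && cd $GOPATH/src/k8s.io/kops && make
kops create cluster --name=k8s.example.com --state=s3://example-kops-state \
  --zones=us-east-1a --target=terraform --out=out/terraform
cd out/terraform && terraform apply
```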
To push a new version of our code or infrastructure:
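Again as a sketch: code changes ride the Wercker pipeline, infrastructure changes go through kops and Terraform:

```sh
# code: commit and push -- Wercker builds, pushes the image, and deploys to the cluster
git push origin <branch>

# infrastructure: regenerate the Terraform files and apply them
kops edit cluster --name=k8s.example.com --state=s3://example-kops-state
kops update cluster --name=k8s.example.com --state=s3://example-kops-state \
  --target=terraform --out=out/terraform
cd out/terraform && terraform apply
```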