
Sep. 26, 2017

Managing your containers with Terraform, Consul and Nomad on AWS EC2

TAGS : ami, autoscaling, aws, consul, ec2, hashicorp, high availability, json, nomad, packer, terraform, tutorial

TL;DR: Pull ssh://git@github.com/trackit/ec2-hashicorp-tutorial.git; it contains Packer files for the Consul and Nomad nodes and a Terraform spec, so you can get up and running with a minimal setup as quickly as possible.

In this article series I will show you how you can use Hashicorp’s Packer, Terraform, Consul and Nomad on AWS EC2 as a platform to run your product. I use Packer to build the EC2 AMIs (from a Debian AMI), Terraform to set up the VPC and its resources, Consul to provide service discovery between the instances, and Nomad to manage the Docker tasks. For the sake of brevity, this article describes a minimal setup for this scheme; you can, however, find the source code of our current setup on GitHub at https://github.com/trackit/ec2-hashicorp-tutorial. I will assume you are comfortable with AWS EC2; if you are not, a quick search should lead you to many good tutorials and documentation.
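
If you want to follow along with the complete example, start by cloning the repository; everything referenced below lives in there:

git clone https://github.com/trackit/ec2-hashicorp-tutorial.git
cd ec2-hashicorp-tutorial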

Configuring Nomad and Consul

In this section I will show you a minimal configuration for both slave and master instances for Nomad and Consul. You can find them at ssh://git@github.com/trackit/ec2-hashicorp-tutorial.git along with all other resources linked to this tutorial. In a later section we’ll see how to build these configurations into AMIs using Hashicorp Packer. The infrastructure we want to achieve needs two kinds of instances: master instances will host Consul servers and Nomad servers, while slave instances will host Consul and Nomad clients and run tasks. These configurations are built for EC2 instances only and may need tweaking to run with some other setup.

Master instances

Master instances are the brain of the infrastructure: the Consul servers there will hold the service discovery data and key/value store while the Nomad servers will direct the Nomad nodes and place the tasks you want to run in the most efficient manner they can. For both Nomad and Consul, Hashicorp recommends that three or five servers be used to provide redundancy and to prevent ties during leader elections (read more on that at https://www.consul.io/docs/guides/leader-election.html and https://www.nomadproject.io/docs/internals/consensus.html). So in this article I will assume three master instances will be running.

Consul servers

First we need our Consul servers to discover each other. Thankfully Consul has a built-in discovery mechanism for EC2 deployments that finds other servers by looking up instance tags through the EC2 API. I will configure the servers as follows.

{
  "server": true,
  "datacenter": "my-little-datacenter",
  "retry_join_ec2": {
    "tag_key": "ConsulAgent",
    "tag_value": "Server"
  },
  "bootstrap_expect": 3
}

The retry_join_ec2 configuration instructs Consul to discover its peers through the EC2 API: it will look for instances tagged ConsulAgent=Server, so you will need to set this tag on your master instances (and give them permission to describe EC2 instances, which the Terraform setup later provides through an instance profile). Because of the bootstrap_expect setting, the servers will only start operating once at least three of them have joined and can elect a leader. With this, the master instances should discover each other and form a Consul cluster shortly after being brought up. This is immediately useful to Nomad, as the Nomad agent can use a Consul agent on the same node (Consul servers are also Consul clients) to announce itself and discover other Nomad servers. The datacenter setting will become useful when your infrastructure grows to span multiple datacenters, as Consul is built to be region and datacenter aware. For example, when doing service discovery it can be instructed to only return services present in the current datacenter to limit latency.
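
To give a concrete idea of how this is used, here is how the server agent could be launched by hand with this configuration. The paths are assumptions made for the example (the repository drives the agents through service unit files instead), and the data directory is passed on the command line since the JSON above does not set one:

# Illustrative paths, not taken from the repository.
sudo mkdir -p /etc/consul.d /var/lib/consul
sudo cp server.json /etc/consul.d/        # the JSON configuration shown above
consul agent -config-dir=/etc/consul.d -data-dir=/var/lib/consul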

Nomad servers

datacenter = "my-little-datacenter"
region = "my-region"

data_dir = "/var/lib/nomad/data/"

server {
  enabled = true
  bootstrap_expect = 3
}

client {
  enabled = false
}

consul {
  auto_advertise = true
  server_auto_join = true
}

Most options for Nomad are either self-explanatory or similar to the Consul configuration we saw earlier. datacenter and region inform the server of its location, so that we can later specify jobs that should only run in certain places. A Nomad agent can be a server, a client, or both (there is also a dev mode, which you obviously should not use in production). The data_dir option must be present and must be an absolute path to the directory Nomad will store its data in. It should not be a temporary directory, as Nomad uses this data to re-join and recover a cluster it has lost contact with. The consul block is especially interesting: it instructs Nomad to use the local Consul client to announce itself and discover other servers (the Consul and Nomad configuration defaults coincide, so the two agents can talk to each other out of the box). Just like we did for Consul, we use bootstrap_expect to specify that the Nomad cluster should only start operating once three servers participate.
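
Again as a sketch with assumed paths (the repository runs the agent as a service), the Nomad server agent can be started by pointing it at this configuration file:

# Illustrative paths; adjust to your own layout.
sudo mkdir -p /etc/nomad.d /var/lib/nomad/data
sudo cp server.hcl /etc/nomad.d/          # the configuration shown above
sudo nomad agent -config /etc/nomad.d/server.hcl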

Slave instances

The slave instances’ task is to run the Docker containers as directed by the master instances: they are the workforce of your cluster. They will discover the master instances and register with them.

Consul clients

Consul clients, as opposed to servers, do not hold data. They are, however, used to reach the masters: instead of performing HTTP requests against the Consul servers directly, it is preferable to run a local Consul client and perform requests through it. Its configuration is even simpler than the masters’ and uses the same tag-based discovery mechanism.

{
  "server": false,
  "datacenter": "my-little-datacenter",
  "retry_join_ec2": {
    "tag_key": "ConsulAgent",
    "tag_value": "Server"
  }
}
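
To illustrate why this is convenient: once the local client is running, the instance can answer service discovery questions itself, either over the HTTP API on port 8500 or over Consul’s DNS interface on port 8600. (The nomad service name used below is what Nomad servers register under by default when auto_advertise is enabled.)

curl http://127.0.0.1:8500/v1/catalog/services     # list every service known to the cluster
dig @127.0.0.1 -p 8600 nomad.service.consul        # resolve the Nomad servers through Consul DNS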

Nomad nodes

In Nomad terminology, the instances running the tasks are called the “nodes”. It is generally recommended that the servers and the nodes be different instances. Once again the configuration is very similar to the masters’:

datacenter = "my-little-datacenter"
region = "my-region"

server {
	enabled = false
}

client {
	enabled = true
}

consul {
	auto_advertise = true
	client_auto_join = true
}

data_dir = "/var/lib/nomad"

Most importantly, the server and client sections are changed to reflect that the agent must act as a client and not as a server, and server_auto_join becomes client_auto_join.
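
One more thing worth noting: the Nomad docker driver only works if a Docker daemon is present on the node, so the slave images must ship with Docker installed. Once a client agent is up, a quick way to inspect it (purely illustrative, run on the node itself):

nomad node-status -self              # status of the node you are logged into
nomad node-status -self -verbose     # also prints the node's fingerprinted attributes, where
                                     # you can check that the docker driver was detected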

Running Consul and Nomad

Consul and Nomad both come as statically linked binaries, which means they have no dependencies. Installing them is as simple as moving them to a directory in your PATH and making sure they are marked as executable. In our example setup we run them on Debian 9 “Stretch”, which uses systemd, so we naturally use unit files to have systemd manage the two services. You will find the unit files in the example repository.
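
For reference, a manual installation boils down to a few commands. The version number is only an example placeholder; in this setup the binaries end up baked into the AMIs during the Packer build described in the next section:

# Example only: pick the Consul release you actually want.
CONSUL_VERSION=0.9.3
curl -LO "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip"
unzip "consul_${CONSUL_VERSION}_linux_amd64.zip"
sudo install -m 0755 consul /usr/local/bin/consul   # copy into the PATH and mark executable
# The procedure for the nomad binary is identical.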

Building the AMIs

In order to build AMIs we can deploy on EC2 we will use Packer, another tool by Hashicorp whose aim is to facilitate building any sort of image or package: it can produce EC2 AMIs, Docker images, VirtualBox images, and many more. Packer uses a JSON file to describe the build process for the image. This JSON file contains two important sections: builders is a list of maps, each describing a platform for your build process, while provisioners lists the steps (file uploads, shell scripts, and so on) used to configure the machine before it is turned into an image. By providing several builders you can reproduce identical provisioning processes with various types of output: you can build EC2 AMIs for production and Vagrant-wrapped VirtualBox images for your developers! As per usual with Hashicorp, most options are self-explanatory.

{
  "description": "Master",
  "builders": [
    {
      "type": "amazon-ebs",
      "ssh_username": "admin",
      "region": "us-east-1",
      "secret_key": "{{user `aws_secret_key`}}",
      "access_key": "{{user `aws_access_key`}}",
      "source_ami": "ami-ac5e55d7",
      "instance_type": "m4.large",
      "ami_name": "master-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "files",
      "destination": "/tmp/"
    },
    {
      "type": "shell",
      "scripts": [
        "scripts/packages.sh",
        "scripts/download-files.sh",
        "scripts/consul.sh",
        "scripts/nomad.sh"
      ],
      "execute_command": "{{.Vars}} sudo --preserve-env bash -x '{{.Path}}'"
    }
  ]
}

Two such configurations are provided in the accompanying repository at /packer/master/packer.json and /packer/slave/packer.json. You can build your AMIs by simply going into the directory containing the JSON document and running packer build packer.json. Packer will look for AWS credentials in the standard places (~/.aws/, environment variables, or it can ask for them interactively) and spin up a new instance. This instance will be provisioned as per each entry in provisioners, before being snapshotted and turned into an AMI you can deploy!

TIP: I like to save a copy of the build log by running packer build packer.json | tee build.log, as the log contains useful information such as the ID of the generated AMI, and because the logs can get quite long.
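
Putting this together, building both images from the root of the repository can be as simple as the following; AWS credentials are picked up from the standard places mentioned above:

( cd packer/master && packer build packer.json | tee build.log )
( cd packer/slave  && packer build packer.json | tee build.log )
# Each build.log ends with the ID of the AMI that was produced; keep those for the Terraform step.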

Deploying with Terraform

Terraform is a very powerful tool which lets you describe your desired infrastructure as code and automate the deployment and update of that infrastructure. Each time you tell Terraform to deploy, it builds a diff between what your infrastructure is and what it ought to be, and performs all the operations necessary to arrive at what you described. If you have experience with AWS CloudFormation you will certainly feel at home. One advantage of Terraform over CloudFormation, though, is that it supports multiple providers and data sources. The repository for this tutorial contains a Terraform configuration you can use to deploy a minimal setup with the AMIs I showed you how to build. First, replace the contents of /terraform/ssh.pub with your SSH public key so that you can connect to your instances. Then place your AMI IDs in /terraform/amis.tf (a short sketch of this preparation follows the list below). With that done, you are ready for Terraform to deploy the following and their dependencies:

  • A VPC with a single subnet with public IPs.
  • An autoscaling group of three t2.nano master instances.
  • An autoscaling group of two m4.large slave instances.
  • Security groups which allow SSH access to your instances, and all egress.
  • An instance profile that allows Consul to query the EC2 API for instance tags (needed for the tag-based discovery).
The example Terraform specification spawns a VPC with a subnet and two autoscaling groups.
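
Concretely, the preparation described above might look like this; the location of your public key and the use of $EDITOR are just assumptions for the sketch:

cp ~/.ssh/id_rsa.pub terraform/ssh.pub   # your own SSH public key
$EDITOR terraform/amis.tf                # fill in the AMI IDs produced by the Packer builds
cd terraform                             # run the commands below from this directory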

Now, you can deploy with these three steps:

  1. Run terraform plan -out tf.plan.
  2. Read what Terraform intends to do to get you the infrastructure you want.
  3. If all is in order, run terraform apply tf.plan.

TIP: While you could deploy by simply running terraform apply, I strongly recommend you do it in two steps: terraform plan -out tf.plan and then terraform apply tf.plan. This lets you inspect Terraform’s intended course of action before it affects anything in your infrastructure. Most of the time it will be exactly what you intended, but there have been times I wished I had read the plan carefully before applying it.

Once you have deployed your infrastructure you can SSH into your instances: running consul members lets you confirm that the EC2 tag-based discovery worked for Consul, nomad server-members gives you the list of Nomad servers and the cluster’s leader, and nomad node-status gives you some information about your slave instances.

NOTE: If you used the sample code which builds Debian 9 instances, you can connect via SSH using the admin user.
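
For example, logged into one of the master instances (the IP is a placeholder):

ssh admin@203.0.113.10      # use one of your master instances' public IPs
# Then, on the instance:
consul members              # every master and slave agent should show up as "alive"
nomad server-members        # lists the Nomad servers and the elected leader
nomad node-status           # lists the slave instances registered as Nomad nodes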

In order to destroy the infrastructure you created, simply instruct Terraform to plan for the destruction using terraform plan -destroy -out tf.plan, then apply it with the usual terraform apply tf.plan. In the next part of this tutorial I will show you how to use Nomad to run tasks on your instances efficiently while automatically recovering from node failures.
