With the ever-increasing demand for agile and efficient application deployment, Kubernetes has emerged as the de facto standard for container orchestration, revolutionizing the way companies deploy and manage their applications. By streamlining the management of complex containerized environments, Kubernetes has gained widespread adoption among enterprises of all sizes.

Amazon Web Services (AWS) has been at the forefront of providing powerful tools and services that help organizations harness the full potential of modern technology and meet the challenges of today’s dynamic business landscape. Among these offerings is Amazon Elastic Kubernetes Service (EKS), a fully managed Kubernetes service that empowers companies to leverage the benefits of containerization. 

By streamlining the process of deploying, managing, and scaling containerized applications, EKS provides a robust infrastructure foundation for organizations seeking to embrace the agility and scalability of Kubernetes. Amazon EKS facilitates a smoother integration of Kubernetes into the AWS ecosystem, enabling businesses to focus on their core applications and deliver value to their customers efficiently.

EKS Blueprints: Streamlining EKS Adoption and Enhancing Deployment Consistency

In an ongoing effort to simplify and expedite the adoption of Amazon EKS, AWS has introduced an open-source project known as EKS Blueprints. This innovative collection of Infrastructure as Code (IaC) modules is specifically designed to empower users in seamlessly deploying consistent EKS clusters across multiple accounts and regions, bolstering operational efficiency and reducing complexity.

With EKS Blueprints, users gain access to a comprehensive suite of pre-configured templates that enable effortless bootstrapping of EKS clusters. These templates encompass a wide array of functionalities, including the integration of Amazon EKS add-ons and various popular open-source add-ons such as Prometheus, Karpenter, Nginx, Traefik, AWS Load Balancer Controller, Fluent Bit, Keda, Argo CD, and more. Leveraging these Blueprints allows users to expedite their cluster deployments while ensuring compatibility with a diverse range of tools and add-ons.

EKS Blueprints also empower users to implement robust security controls tailored to their specific requirements. This capability is particularly valuable in environments where multiple teams operate workloads within the same cluster. By providing the means to enforce relevant security measures, EKS Blueprints ensure the integrity and isolation of workloads, fostering a secure and efficient operational environment.

The following sections outline the advantages of EKS Blueprints, including how they streamline the adoption process and enhance deployment consistency for EKS clusters. A practical example is also provided to illustrate the deployment of ArgoCD using EKS Blueprints.

Benefits of EKS Blueprints

EKS Blueprints offer a host of benefits that significantly streamline the deployment of Kubernetes applications.

  1. Simplified Setup: EKS Blueprints provide customizable templates for popular Kubernetes applications, enabling users to spin up and configure complex application stacks in minutes rather than hours. By eliminating the need to manually create and fine-tune each component, EKS Blueprints save valuable time and minimize the likelihood of misconfigurations.
  2. Consistent Best Practices: By adhering to AWS best practices for Kubernetes application deployment, EKS Blueprints help ensure that the resulting clusters are secure, resilient, and well-optimized. Leveraging these blueprints allows developers to focus on core application logic rather than on infrastructure setup.
  3. Reproducibility: EKS Blueprints facilitate a consistent deployment experience across multiple environments, making it effortless to replicate clusters for purposes such as development, staging, and production. This promotes reproducibility and eliminates the inconsistencies that can arise when deploying to different environments.
  4. Teaming and Customization: EKS Blueprints offer a valuable teaming aspect by providing a shared service for a customer’s development teams. Different teams can collaborate effectively while ensuring that each team’s custom requirements are met. With EKS Blueprints, development teams have the flexibility to tailor their Kubernetes environments to specific needs, all while adhering to organizational standards and security policies.
  5. Auditability and Transparency with GitOps: EKS Blueprints integrate GitOps principles into the deployment and management of Kubernetes clusters. This approach exposes the state of the environment, optimizes deployment processes, and supports version-controlled infrastructure. Customers benefit from increased transparency and auditability in their Kubernetes workflows. The ability to track changes, review historical deployments, and ensure compliance with security policies enhances governance and enables effective monitoring of the cluster’s configuration and updates.

Example: Deploying ArgoCD with EKS Blueprints

In the following example, we will demonstrate the deployment of ArgoCD, a widely used GitOps tool, utilizing the capabilities of EKS Blueprints. 

provider "aws" {
  region = local.region
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}

provider "bcrypt" {}

data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name
}

data "aws_availability_zones" "available" {}

locals {
  name   = basename(path.cwd)
  region = "us-west-2"

  cluster_version = "1.24"

  vpc_cidr = "10.0.0.0/16"
  azs      = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}

Providers

In the example provided, several providers are utilized to establish the necessary connections for the deployment of ArgoCD with EKS Blueprints. The “aws” provider is configured to interact with AWS services in the desired region. The “kubernetes” provider establishes a connection with the EKS cluster, utilizing the cluster endpoint, certificate authority data, and an authentication token. The “helm” provider connects to the same cluster, enabling the deployment and management of Helm charts. Finally, the “bcrypt” provider is used to generate the bcrypt hash that ArgoCD expects for its admin password.

  • aws: Specifies the AWS provider and the desired region.
  • kubernetes: Specifies the Kubernetes provider and connects to the EKS cluster.
  • helm: Specifies the Helm provider and connects to the EKS cluster.
  • bcrypt: Generates the bcrypt hash of the ArgoCD admin password (see the provider requirements sketch below).
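Because “bcrypt” is a community provider rather than an official HashiCorp one, the configuration also needs a required_providers declaration so Terraform knows where to download it. The following is a minimal sketch of such a declaration; the provider sources and version constraints (in particular the viktorradnai/bcrypt source) are assumptions that should be verified against the Terraform Registry and the EKS Blueprints examples.

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.47"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.17"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.8"
    }
    random = {
      source  = "hashicorp/random"
      version = ">= 3.4"
    }
    # Community provider used to bcrypt-hash the ArgoCD admin password
    # (source and version are assumed; verify on the Terraform Registry).
    bcrypt = {
      source  = "viktorradnai/bcrypt"
      version = ">= 0.1.2"
    }
  }
}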

Data Sources

To gather essential information for the deployment, data sources are leveraged in the example. The “aws_eks_cluster_auth” data source retrieves the authentication token required to establish a connection with the EKS cluster; the token is obtained by referencing the cluster name from the “eks” module. The “aws_availability_zones” data source fetches the availability zones available in the chosen region, which are used later in the configuration.

  • aws_eks_cluster_auth: Retrieves the authentication token required to connect to the EKS cluster.
  • aws_availability_zones: Fetches the availability zones in the chosen region.

Locals

The “locals” block in the provided code snippet defines the local variables used throughout the deployment process. The “name” variable is derived from the basename of the current working directory and serves as an identifier for the EKS cluster. The “region” variable specifies the desired region, in this case “us-west-2.” The “cluster_version” variable determines the desired Kubernetes version of the EKS cluster. The “vpc_cidr” variable defines the CIDR block for the Virtual Private Cloud (VPC), and the “azs” variable captures the first three availability zones available in the region. Finally, the “tags” variable contains custom tags associated with the resources created during the deployment, including the name of the blueprint and the GitHub repository used.

  • name and region: Defines the name and region of the EKS cluster.
  • cluster_version: Specifies the desired version of the EKS cluster.
  • vpc_cidr and azs: Defines the CIDR block for the VPC and fetches the availability zones.
  • tags: Contains custom tags for resources.

################################################################################
# Supporting Resources
################################################################################

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 4.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}

Supporting Resources

The “vpc” module in the provided code sets up a VPC (Virtual Private Cloud) using the “terraform-aws-modules/vpc/aws” module. It creates public and private subnets across three availability zones (AZs): the private subnets are carved from the VPC CIDR as /20 blocks, while the public subnets are smaller /24 blocks. The module enables a single NAT gateway for outbound internet access from the private subnets and applies Kubernetes role tags to the subnets so that external and internal load balancers can be placed correctly. This VPC configuration ensures a secure and scalable network infrastructure for the EKS cluster and associated resources.

module “vpc”:

  • Creates a VPC (Virtual Private Cloud) using the terraform-aws-modules/vpc/aws module.
  • Sets up public and private subnets across availability zones.
  • Enables NAT gateway for outbound internet access.

################################################################################
# Cluster
################################################################################

#tfsec:ignore:aws-eks-enable-control-plane-logging
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.12"

  cluster_name                   = local.name
  cluster_version                = local.cluster_version
  cluster_endpoint_public_access = true

  # EKS Addons
  cluster_addons = {
    coredns    = {}
    kube-proxy = {}
    vpc-cni    = {}
  }

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    initial = {
      instance_types = ["m5.large"]

      min_size     = 3
      max_size     = 10
      desired_size = 5
    }
  }

  tags = local.tags
}

################################################################################
# Kubernetes Addons
################################################################################

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.29.0"

  eks_cluster_id       = module.eks.cluster_name
  eks_cluster_endpoint = module.eks.cluster_endpoint
  eks_oidc_provider    = module.eks.oidc_provider
  eks_cluster_version  = module.eks.cluster_version

  enable_argocd = true
  # This example shows how to set default ArgoCD Admin Password using SecretsManager with Helm Chart set_sensitive values.
  argocd_helm_config = {
    set_sensitive = [
      {
        name  = "configs.secret.argocdServerAdminPassword"
        value = bcrypt_hash.argo.id
      }
    ]
  }

  keda_helm_config = {
    values = [
      {
        name  = "serviceAccount.create"
        value = "false"
      }
    ]
  }

  argocd_manage_add_ons = true # Indicates that ArgoCD is responsible for managing/deploying add-ons
  argocd_applications = {
    addons = {
      path               = "chart"
      repo_url           = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
      add_on_application = true
    }
    workloads = {
      path               = "envs/dev"
      repo_url           = "https://github.com/aws-samples/eks-blueprints-workloads.git"
      add_on_application = false
    }
  }

  # Add-ons
  enable_amazon_eks_aws_ebs_csi_driver = true
  enable_aws_for_fluentbit             = true
  # Let fluentbit create the cw log group
  aws_for_fluentbit_create_cw_log_group = false
  enable_cert_manager                   = true
  enable_cluster_autoscaler             = true
  enable_karpenter                      = true
  enable_keda                           = true
  enable_metrics_server                 = true
  enable_prometheus                     = true
  enable_traefik                        = true
  enable_vpa                            = true
  enable_yunikorn                       = true
  enable_argo_rollouts                  = true

  tags = local.tags
}
#---------------------------------------------------------------
# ArgoCD Admin Password credentials with Secrets Manager
# Login to AWS Secrets Manager with the same role as Terraform to extract the ArgoCD admin password with the secret name as "argocd"
#---------------------------------------------------------------
resource "random_password" "argocd" {
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
}

# Argo requires the password to be bcrypt, we use custom provider of bcrypt,
# as the default bcrypt function generates diff for each terraform plan
resource "bcrypt_hash" "argo" {
  cleartext = random_password.argocd.result
}

#tfsec:ignore:aws-ssm-secret-use-customer-key
resource "aws_secretsmanager_secret" "argocd" {
  name                    = "argocd"
  recovery_window_in_days = 0 # Set to zero for this example to force delete during Terraform destroy
}

resource "aws_secretsmanager_secret_version" "argocd" {
  secret_id     = aws_secretsmanager_secret.argocd.id
  secret_string = random_password.argocd.result
}


The provided Terraform code block demonstrates the setup of an AWS EKS (Elastic Kubernetes Service) cluster and the deployment of various Kubernetes addons using Helm charts. The following is an overview of the key components and their functionalities:

EKS Cluster

The “eks” module is used to define the EKS cluster’s configuration parameters. It specifies the cluster name, version, and accessibility settings. Additionally, it includes the declaration of EKS add-ons such as CoreDNS, kube-proxy, and VPC-CNI. The module also incorporates managed node groups to define the desired instance types and scaling settings.

  • module “eks”: Creates the EKS cluster using the terraform-aws-modules/eks/aws module.
  • cluster_name and cluster_version: Specifies the name and version of the EKS cluster.
  • vpc_id and subnet_ids: Provides the VPC ID and subnet IDs for the EKS cluster.
  • eks_managed_node_groups: Configures the managed node group, including the instance type (m5.large) and scaling settings (minimum 3, maximum 10, desired 5 nodes).

Kubernetes Addons

The “eks_blueprints_kubernetes_addons” module is employed to deploy various Kubernetes addons and configure them for the EKS cluster. 

  • module “eks_blueprints_kubernetes_addons”: Deploys various Kubernetes addons using Helm charts.

These add-ons include:

  • ArgoCD: A GitOps tool that enables continuous deployment of applications and infrastructure. In this example, ArgoCD is enabled and configured with sensitive values for the admin password using SecretsManager and Helm chart configurations.
  • KEDA: A Kubernetes-based event-driven autoscaler that enables dynamic scaling of workloads. In this example, it is configured not to create its own service account.
  • Additional Addons: Various other addons are enabled, such as Amazon EBS CSI driver, AWS for Fluent Bit, Cert Manager, Cluster Autoscaler, Karpenter, KEDA, Metrics Server, Prometheus, Traefik, VPA (Vertical Pod Autoscaler), Yunikorn, and Argo Rollouts.

Secrets Management

To manage the ArgoCD admin password securely, the code generates a random password using the “random_password” resource and hashes it with the “bcrypt_hash” resource from the community bcrypt provider, since ArgoCD expects a bcrypt-hashed password. Additionally, AWS Secrets Manager is used to store the plaintext password in a secret named “argocd” so that it can be retrieved after deployment.
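As a convenience, the secret name can also be surfaced as a Terraform output so that the ArgoCD admin password is easy to locate in Secrets Manager once the deployment completes. The following output block is illustrative and not part of the original example:

# Illustrative output: points to the Secrets Manager secret that holds the ArgoCD admin password.
output "argocd_admin_password_secret_name" {
  description = "Secrets Manager secret storing the ArgoCD admin password"
  value       = aws_secretsmanager_secret.argocd.name
}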

Note: The explanations in this article provide a high-level overview of the code. Users may need to modify the code and configurations based on their specific requirements.
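For instance, a helper output similar to the one below (a pattern commonly seen in EKS Blueprints examples; the output name is illustrative) can be appended so that the command for updating the local kubeconfig is printed once terraform apply completes:

# Illustrative output: prints the AWS CLI command used to configure kubectl for the new cluster.
output "configure_kubectl" {
  description = "Command to update the local kubeconfig for the newly created EKS cluster"
  value       = "aws eks update-kubeconfig --region ${local.region} --name ${module.eks.cluster_name}"
}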

Conclusion

AWS EKS Blueprints serve as a powerful tool to simplify and expedite Kubernetes deployments on Amazon Elastic Kubernetes Service. By leveraging pre-configured templates, users can effortlessly set up and configure popular Kubernetes applications like ArgoCD or monitoring stacks. 

Leveraging the streamlined setup, adherence to best practices, and reproducibility offered by EKS Blueprints can greatly enhance Kubernetes deployments. However, it is essential to assess customization needs and consider the dependency on AWS EKS when determining the suitability of EKS Blueprints for specific use cases.

About TrackIt

TrackIt is an Amazon Web Services Advanced Tier Services Partner based in Marina del Rey, CA, specializing in cloud management, consulting, and software development solutions.

TrackIt specializes in Modern Software Development, DevOps, Infrastructure-As-Code, Serverless, CI/CD, and Containerization with specialized expertise in Media & Entertainment workflows, High-Performance Computing environments, and data storage.

In addition to providing cloud management, consulting, and modern software development services, TrackIt also provides an open-source AWS cost management tool that allows users to optimize their costs and resources on AWS.