Table of contents
- Introduction
- Starting point
- Target platform
- Automation Tools
- Terraform to provision infrastructure
- Create VPC and networking layer
- Create EC2 Instances
- Static vs. dynamic IP address vs. internal DNS
- Resource tagging
- A load balancer for Kubernetes API
- Security
- Self-signed Certificates
- Known simplifications and limitations
- Next steps
One of the most efficient projects a client ever tasked me with was building a complete CI/CD pipeline: Terraform to provision the infrastructure and Ansible to automate the deployment of Kubernetes. Let's talk about that today.
This is part one of a three-part series.
Introduction
This article is the first in a series of three tutorials demonstrating how to provision Kubernetes on AWS from scratch, using Terraform and Ansible.
The purpose of the series is to present a simple but realistic example of how to provision a Kubernetes cluster on AWS with Terraform and Ansible. It is an educational tool, not a production-ready solution nor a simple way to quickly deploy Kubernetes.
I will walk through the sample project, describing the steps to automate the provisioning of a simple Kubernetes setup on AWS, trying to make clear most of the simplifications and corners cut. To follow it, you need some basic understanding of AWS, Terraform, Ansible and Kubernetes.
The complete working project is available here: https://github.com/apotitech/k8s-terraform-ansible-project. Please read the documentation included in the repository, describing the requirements and the step-by-step process to execute it.
Starting point
Our Kubernetes setup is inspired by Kubernetes The Hard Way, by Kelsey Hightower from Google. The original tutorial targets Google Cloud. My goal is to demonstrate how to automate the process on AWS, so I “translated” it to AWS and transformed the manual steps into Terraform and Ansible code.
[update] Kelsey Hightower recently updated his tutorial to support AWS. I’m happy to see his AWS implementation is not very different from mine.
I’ve updated the source code and this post, adding a few elements to match the new version of his tutorial.
Target platform
Our target Kubernetes cluster includes:
- 3 EC2 instances for the Kubernetes Control Plane: Controller Manager, Scheduler, API Server.
- 3 EC2 instances for the HA etcd cluster.
- 3 EC2 instances as Kubernetes Workers (aka Minions or Nodes, in Kubernetes jargon).
- Container networking using the Kubenet plugin (relying on CNI).
- HTTPS communication between all components.
- All Kubernetes and etcd components run as services directly on the VMs (not in containers).
Automation Tools
I used Terraform and Ansible together for several reasons:
- Terraform's declarative approach works very well for describing and provisioning infrastructure resources, but it is very limited when you have to install and configure software;
- Ansible's procedural approach is very handy for installing and configuring software on heterogeneous machines, but becomes hacky when you use it to provision infrastructure;
- Terraform + Ansible is the stack we often use for production projects at Apoti Tech.
Terraform and Ansible overlap. I will use Terraform to provision the infrastructure resources, then pass the baton to Ansible to install and configure the software components.
Terraform to provision infrastructure
Terraform allows us to describe the target infrastructure; it then takes care of creating, modifying or destroying any required resource to match our blueprint. Despite its declarative nature, Terraform allows some programming patterns. In this project, resources are grouped in files, constants are externalized as variables, and we will make use of templating. We are not going to use Terraform Modules.
The code snippets have been simplified. Please refer to the code repository for the complete version.
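For example, the network CIDR, region and availability zone can be externalized as variables. A minimal sketch, with illustrative names that do not necessarily match those used in the repository:

variable "vpc_cidr" {
  description = "CIDR block for the VPC and its single subnet"
  default     = "10.43.0.0/16"
}

variable "region" {
  description = "AWS region hosting all resources"
  default     = "eu-west-1"
}

variable "zone" {
  description = "Availability zone for the single subnet"
  default     = "eu-west-1a"
}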
Create VPC and networking layer
After specifying the AWS provider and the region (omitted here), the first step is defining the VPC, the single subnet and an Internet Gateway.
resource "aws_vpc" "kubernetes" {
cidr_block = "10.43.0.0/16"
enable_dns_hostnames = true
}
resource "aws_subnet" "kubernetes" {
vpc_id = "${aws_vpc.kubernetes.id}"
cidr_block = "10.43.0.0/16"
availability_zone = "eu-west-1a"
}
To make the subnet public, let’s add an Internet Gateway and route all outbound traffic through it:
resource "aws_internet_gateway" "gw" {
vpc_id = "${aws_vpc.kubernetes.id}"
}
resource "aws_route_table" "kubernetes" {
vpc_id = "${aws_vpc.kubernetes.id}"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.gw.id}"
}
}
resource "aws_route_table_association" "kubernetes" {
subnet_id = "${aws_subnet.kubernetes.id}"
route_table_id = "${aws_route_table.kubernetes.id}"
}
We also have to import the key-pair that will be used for all instances, so that we can SSH into them. The public key must correspond to the identity file loaded into the SSH agent (please see the README for more details).
resource "aws_key_pair" "default_keypair" {
key_name = "my-keypair"
public_key = "ssh-rsa AA....zzz"
}
Create EC2 Instances
I’m using an official AMI for Ubuntu 16.04 to keep it simple. Here is, for example, the definition of the etcd instances.
resource "aws_instance" "etcd" {
count = 3
ami = "ami-1967056a" // Unbuntu 16.04 LTS HVM, EBS-SSD
instance_type = "t2.micro"
subnet_id = "${aws_subnet.kubernetes.id}"
private_ip = "${cidrhost("10.43.0.0/16", 10 + count.index)}"
associate_public_ip_address = true
availability_zone = "eu-west-1a"
vpc_security_group_ids = ["${aws_security_group.kubernetes.id}"]
key_name = "my-keypair"
}
Other instances, controller and worker, are no different, except for one important detail: workers have source_dest_check = false, to allow sending packets from IPs not matching the address assigned to the machine by AWS (needed for inter-container communication).
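A sketch of the worker definition, assuming the same simplified attributes used for etcd above (the address offset and the tags are covered later in this article):

resource "aws_instance" "worker" {
  count                       = 3
  ami                         = "ami-1967056a" // Ubuntu 16.04 LTS HVM, EBS-SSD
  instance_type               = "t2.micro"
  subnet_id                   = "${aws_subnet.kubernetes.id}"
  private_ip                  = "${cidrhost("10.43.0.0/16", 30 + count.index)}"
  associate_public_ip_address = true
  availability_zone           = "eu-west-1a"
  vpc_security_group_ids      = ["${aws_security_group.kubernetes.id}"]
  key_name                    = "my-keypair"

  # Disable the source/destination check, so the instance may forward
  # packets with Pod IPs that do not match its own address
  source_dest_check = false
}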
We also define an IAM Role, Role Policy and Instance Profile for the Controller instances. They are not strictly needed here, but become necessary if you extend the example with a proper AWS integration.
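A minimal sketch of those IAM resources; the policy here is deliberately broad and purely illustrative, so tighten it before any real use:

resource "aws_iam_role" "kubernetes" {
  name = "kubernetes"

  # Allow EC2 instances to assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "kubernetes" {
  name = "kubernetes"
  role = "${aws_iam_role.kubernetes.id}"

  # Broad permissions, only as a placeholder for a proper AWS integration
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*", "elasticloadbalancing:*", "ecr:*", "route53:*"],
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "kubernetes" {
  name = "kubernetes"
  role = "${aws_iam_role.kubernetes.name}"
}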
Static vs. dynamic IP address vs. internal DNS
Instances have a static private IP address. I’m using a simple address pattern to make them human-friendly: 10.43.0.1x are etcd instances, 10.43.0.2x Controllers and 10.43.0.3x Workers (aka Kubernetes Nodes or Minions), but this is not a requirement.
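The pattern is produced with Terraform's cidrhost() interpolation function, offsetting each group by ten. A sketch of the relevant attribute, as it appears inside the respective aws_instance resources (the etcd line was already shown above):

# etcd:        10.43.0.10, 10.43.0.11, 10.43.0.12
private_ip = "${cidrhost("10.43.0.0/16", 10 + count.index)}"

# controllers: 10.43.0.20, 10.43.0.21, 10.43.0.22
private_ip = "${cidrhost("10.43.0.0/16", 20 + count.index)}"

# workers:     10.43.0.30, 10.43.0.31, 10.43.0.32
private_ip = "${cidrhost("10.43.0.0/16", 30 + count.index)}"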
A static address is required to have a fixed “handle” for each instance. The downside is that you have to create and maintain a “map” of assigned IPs and be careful to avoid clashes; it sounds easy, but it can become messy in a big project. On the flip side, dynamic IP addresses change if (when) a VM restarts for any uncontrollable event (hardware failure, the provider moving it to different physical hardware, etc.), so the corresponding DNS entry must be managed by the VM itself, not by Terraform… but that is a different story.
Real-world projects use internal DNS names as stable handles, not static IPs. But to keep this project simple, I will use static IP addresses assigned by Terraform, and no DNS.
Installing Python 2.x?
Ansible requires Python 2.5+ on managed machines. Ubuntu 16.04 comes with Python 3, which is not compatible with Ansible, so we have to install Python 2 before running any playbook.
Terraform has a remote-exec provisioner, so we might execute apt-get install python... on the newly provisioned instances. But the provisioner is not very smart. So the option I adopted is making Ansible “pull itself over the fence by its bootstraps”, installing Python with a raw module. We’ll see it in the next article.
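For reference, the discarded remote-exec alternative would look roughly like the sketch below, relying on the key already loaded into the SSH agent (provisioners run only at resource creation time):

resource "aws_instance" "etcd" {
  # ... same attributes as shown above ...

  # Bootstrap Python at creation time (discarded in favour of Ansible's raw module)
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y python-minimal",
    ]

    connection {
      type = "ssh"
      user = "ubuntu"
      host = "${self.public_ip}"
    }
  }
}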
Resource tagging
Every resource has multiple tags assigned (omitted in the snippets above):
- ansibleFilter: fixed for all instances (“Kubernetes01” by default); used by the Ansible Dynamic Inventory to filter the instances belonging to this project (we will see Dynamic Inventory in the next article).
- ansibleNodeType: defines the type (group) of the instance, e.g. “worker”. Used by Ansible to group different hosts of the same type.
- ansibleNodeName: readable, unique identifier of an instance (e.g. “worker2”). Used by the Ansible Dynamic Inventory as a replacement for the hostname.
- Name: identifies the single resource (e.g. “worker-2”). No functional use, but useful in the AWS console.
- Owner: your name or anything identifying you, if you are sharing the same AWS account with other teammates. No functional use, but handy for filtering your resources in the AWS console.
resource "aws_instance" "worker" {
count = 3
...
tags {
Owner = "apotitech"
Name = "worker-${count.index}"
ansibleFilter = "Kubernetes01"
ansibleNodeType = "worker"
ansibleNodeName = "worker${count.index}"
}
}
}
A load balancer for Kubernetes API
For high availability, we run multiple instances of the Kubernetes API server and expose the control API through an external Elastic Load Balancer.
resource "aws_elb" "kubernetes_api" {
name = "kube-api"
instances = ["${aws_instance.controller.*.id}"]
subnets = ["${aws_subnet.kubernetes.id}"]
cross_zone_load_balancing = false
security_groups = ["${aws_security_group.kubernetes_api.id}"]
listener {
lb_port = 6443
instance_port = 6443
lb_protocol = "TCP"
instance_protocol = "TCP"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 15
target = "HTTP:8080/healthz"
interval = 30
}
}
The ELB works at the TCP level (layer 4), forwarding connections to the destination. The HTTPS connection is terminated by the service, not by the ELB, so the ELB needs no certificate.
Security
Security is very much simplified in this project. We have two Security Groups: one for all the instances and one for the Kubernetes API Load Balancer (some rules are omitted here).
All instances are directly accessible from outside the VPC, which is not acceptable for any production environment. Still, security is not entirely lax: inbound traffic is allowed only from one IP address, the public address you are connecting from, configured through the Terraform variable control_cidr. Please read the project documentation for further explanations.
No matter how lax, this configuration is still tighter than the default security set up by the Kubernetes cluster creation script.
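A sketch of the two Security Groups; the rules shown here are illustrative and incomplete, so refer to the repository for the real definitions:

variable "control_cidr" {
  description = "CIDR of the single address allowed in from outside (e.g. 123.45.67.89/32)"
}

resource "aws_security_group" "kubernetes" {
  vpc_id = "${aws_vpc.kubernetes.id}"
  name   = "kubernetes"

  # Allow all traffic between instances inside the VPC
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["10.43.0.0/16"]
  }

  # Allow anything (SSH included) from the control IP only
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["${var.control_cidr}"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "kubernetes_api" {
  vpc_id = "${aws_vpc.kubernetes.id}"
  name   = "kubernetes-api"

  # Kubernetes API, reachable only from the control IP
  ingress {
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = ["${var.control_cidr}"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}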
Self-signed Certificates
Communication between Kubernetes components and with the control API all happens over HTTPS, so we need a server certificate. It is self-signed with our own private CA: Terraform generates a CA certificate plus a server key and certificate, and signs the latter with the CA. The process uses CFSSL.
The interesting point here is the template-based generation of certificates. I’m using Data Sources, a feature introduced by Terraform 0.7.
All components use the same certificate, so it has to include all the addresses (IPs and/or DNS names). In a real-world project, we would use stable DNS names, and the certificate would include only those.
The CFSSL command-line utility generates .pem files for the CA certificate, the server key and the server certificate. In the following articles, we will see how they are uploaded to all machines and used by the Kubernetes CLI to connect to the API.
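A sketch of how the template-based generation may be wired together: a Data Source renders the CSR with all the required addresses, then local CFSSL runs produce the CA and the signed server certificate. Resource names, file paths and variables here are illustrative; the repository contains the actual code:

# Render the CSR for the single server certificate, injecting every address
# it must be valid for (here only the API load balancer DNS name is shown)
data "template_file" "kubernetes_csr" {
  template = "${file("template/kubernetes-csr.json")}"

  vars {
    kubernetes_api_elb_dns_name = "${aws_elb.kubernetes_api.dns_name}"
  }
}

resource "null_resource" "certificates" {
  # Re-run whenever the rendered CSR changes
  triggers {
    template_rendered = "${data.template_file.kubernetes_csr.rendered}"
  }

  # Write the rendered CSR to disk
  provisioner "local-exec" {
    command = "echo '${data.template_file.kubernetes_csr.rendered}' > cert/kubernetes-csr.json"
  }

  # Generate the CA, then the server key + certificate signed by the CA
  provisioner "local-exec" {
    command = "cd cert && cfssl gencert -initca ca-csr.json | cfssljson -bare ca && cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes"
  }
}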
Known simplifications and limitations
Let’s sum up the most significant simplifications introduced in this part of the project:
- All instances are in a single, public subnet.
- All instances are directly accessible from outside the VPC. No VPN, no bastion (though traffic is allowed only from a single, configurable IP).
- Instances have static internal IP addresses. Any real-world environment should use DNS names and, possibly, dynamic IPs.
- A single server certificate is used for all components and nodes.
Next steps
The infrastructure is now complete. There is still a lot of work to do to install all the components required by Kubernetes. We will see how to do that, using Ansible, in the next article.
Please stay tuned and subscribe for more articles and study materials on DevOps, Agile, DevSecOps and App Development. Click here for the next article.
#YouAreAwesome #StayAwesome
If this was helpful, please click here to buy me a coffee.