Terraform and Kubernetes on Azure — AKS

Using Terraform to deploy our applications on Azure AKS

This is part 2 of 4 of the Creating Kubernetes clusters with Terraform series.

  • Provisioning Kubernetes clusters on AWS with Terraform

  • Getting started with Terraform and Kubernetes on Azure AKS

  • Provisioning Kubernetes clusters on GCP with Terraform and GKE

  • Provisioning Kubernetes clusters on Linode with Terraform

Tonight we will be looking at Provisioning Kubernetes clusters on Azure with AKS and Terraform.

TL;DR

In this article, you will learn how to create Kubernetes clusters on Azure Kubernetes Service (AKS) with the Azure CLI and Terraform. By the end of the tutorial, you will automate creating two clusters (dev and prod), complete with an Ingress controller, in a single command.

Introduction to Azure AKS

Azure offers a managed Kubernetes service where you can request a cluster, connect to it, and use it to deploy applications.

Azure Kubernetes Service (AKS) is a managed Kubernetes service, which means that the Azure platform is fully responsible for managing the cluster control plane.

In particular, AKS:

  • Manages Kubernetes API servers and the etcd database.

  • Runs the Kubernetes control plane in a single or across multiple availability zones.

  • Scales the control plane as you add more nodes to your cluster.

  • Provides a mechanism to upgrade your control plane and nodes to a newer version.

  • Rotates certificates and keys.

In other words, when you use AKS, you outsource managing the control plane to Azure at no cost.

Yes, that is correct.

On Azure, running AKS incurs no cost for the control plane: you only pay for the worker nodes you use.

However, if you want a guaranteed Service Level Agreement (SLA) of 99.95% uptime or higher, there is an additional cost of USD 0.10 per hour per cluster.

Microsoft Azure has a 12-month free tier for its popular services, as well as a free USD 200 credit to spend on any service in the first 30 days after registration.

If you use the free tier offer, you will not incur any additional charges when following this tutorial.

The rest of the guide assumes that you have an account on Microsoft Azure.

If you don’t, you can sign up here.

Lastly, if you prefer to look at the code, you can do so here.

There are three popular options to run and deploy an AKS cluster:

  1. You can create a cluster from the AKS web interface.

  2. You can use the Azure command-line utility.

  3. You can define the cluster using code with a tool such as Terraform.

Even if it is listed as the first option, creating a cluster through the Azure portal is discouraged.

There are plenty of configuration options and screens that you have to complete before using the cluster.

When you create the cluster manually, can you be sure that:

  • You did not forget to change one of the parameters?

  • You can repeat precisely the same steps while creating a cluster for other environments?

  • When there is a change, you can apply the same modifications in sequence to all clusters without any mistake?

The process is error-prone and doesn’t scale well if you have more than a single cluster.

A better option is defining a file containing all the configuration flags and using it as a blueprint to create the cluster.

And that’s precisely what you can do with the Azure CLI and infrastructure as code tools such as Terraform.

Setting up the Azure account

Before you start creating clusters and utilizing Terraform, you have to install the Azure CLI.

You can find the official documentation on installing the Azure CLI here.

After you install the Azure CLI, you should run:
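A simple version check is enough (the version numbers below are illustrative; yours will differ):

$ az --version
azure-cli                         2.19.1
core                              2.19.1
[output truncated]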

If you can see the above output, that means the installation is successful.

Next, you need to link your account to the Azure CLI, and you can do this with:

$ az login

This will open a login page where you can authenticate with your credentials.

Once completed, you should see the message “You have logged in. Now let us find all the subscriptions to which you have access…” and your subscription details in JSON format.

Now before continuing, you can find all of the available regions that AKS supports here.

You can now try listing all your AKS clusters with:

$ az aks list
[]

An empty list.

That makes sense since you haven’t created any clusters yet.

Azure CLI: the quickest way to provision an AKS cluster

Azure provides a full-featured all-in-one CLI tool, meaning you won’t need to install any additional software or tools to manage and create clusters.

Let’s explore the tool with:
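For example, you can inspect the built-in help for the create command:

$ az aks create --help
[output truncated]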

Notice the required arguments for creating a cluster: the name and resource group.

Scrolling down further in the output, you will see the sheer number of examples provided with the --help argument.

You’ve noticed that, apart from a name for the cluster, you also need to provide a resource group as an argument.

What are resource groups, and why do I need them?

Not to worry, I will explain this in the next section.

Resource Groups in Azure

Resource Groups in Azure are containers that logically hold multiple resources.

You can use Resource Groups to bundle all the resources such as Load Balancers, NICs, Subnets, etc., in the same group, making it easier to manage everything for separate environments.

You can list all the resource groups with:

$ az group list

But since you haven’t created any, you will get an empty response.

Let’s fix that and create a resource group where the cluster will be created.

A Resource Group needs a name and a location where it will be created:

$ az group create --name learnk8sResourceGroup --location northeurope

Note: To easily list all the available locations in a table format, you can do so with:
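$ az account list-locations -o table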

After issuing the az group create command, you should now see in the output "provisioningState": "Succeeded".

Entering the az group list command will provide you with the same output.

Before moving forward, you will need to register a resource provider. Otherwise, if you try to create the cluster without first registering it, the command will fail.

$ az provider register -n Microsoft.ContainerService

You can now create the AKS cluster with:

$ az aks create -g learnk8sResourceGroup -n learnk8s-cluster --generate-ssh-keys --node-count 2

Let’s have a look at the flags:

  1. The --generate-ssh-keys argument is required if you are not supplying your SSH keys.

  2. The --node-count is required to stay under the quota limit. If needed, you can send a request to Azure for increasing the limits.

Be patient; the cluster can take up to 15 minutes to be created.

While you are waiting for the cluster to be provisioned, you should go ahead and download kubectl — the command-line tool to connect and manage the Kubernetes cluster.

Kubectl can be downloaded from here.

You can check that the binary is installed successfully with:

$ kubectl version --client  
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2"

Once the cluster is created, you will get a JSON output with its specs.

The cluster will be created with the following values:

  • Location: North Europe.

  • Node count: 2 nodes.

  • Autoscaling: off.

  • Instance type: Standard_DS2_v2 (CPU: 2; RAM: 7GB).

  • Disk per node: 128GB.

You can always choose different settings if the above isn’t what you had in mind.

You can now fetch the credentials with:

$ az aks get-credentials --resource-group learnk8sResourceGroup --name learnk8s-cluster
Merged "learnk8s-cluster" as current context in /home/ubuntu/.kube/config

And verify that you can access the AKS cluster with kubectl:
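For example (the node names and versions below are illustrative):

$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-12345678-vmss000000   Ready    agent   5m    v1.18.14
aks-nodepool1-12345678-vmss000001   Ready    agent   5m    v1.18.14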

If needed, you can modify the cluster with the az aks update command.

The complete list of az aks commands is available in the official Azure CLI documentation.

As an example, you can enable autoscaling and set the minimum and maximum number of nodes with:

$ az aks update \  
  --resource-group learnk8sResourceGroup \  
  --name learnk8s-cluster \  
  --enable-cluster-autoscaler \  
  --min-count 1 \  
  --max-count 2

Be patient and wait for the update to finish.

Once it’s done, you can find in the output that enableAutoScaling is set to true.

"agentPoolProfiles": [  
    {  
      "availabilityZones": null,  
      "count": 2,  
      "enableAutoScaling": true,  
      "enableNodePublicIp": false,  
[output truncated]

To verify and get more detailed info, you can use az aks show with the -o yaml flag for easier reading:

$ az aks show --name learnk8s-cluster --resource-group learnk8sResourceGroup -o yaml

Voila! With this, you have successfully created and updated an AKS cluster through the Azure CLI!

You can delete the cluster and resource group now, as you will learn another way to deploy and manage them.

$ az aks delete --name learnk8s-cluster --resource-group learnk8sResourceGroup

You can also delete the resource group with:

$ az group delete --resource-group learnk8sResourceGroup

On the prompts, just confirm with y and wait for the operations to finish.

All the resources in that resource group will be deleted as well.

Provisioning an AKS cluster with Terraform

Terraform is an open-source Infrastructure as Code tool.

Instead of writing the code to create the infrastructure, you define a plan of what you want to be executed, and you let Terraform create the resources on your behalf.

The plan isn’t written in YAML, though.

Instead, Terraform uses a language called HCL (HashiCorp Configuration Language).

In other words, you use HCL to declare the infrastructure you want to be deployed, and Terraform executes the instructions.

Terraform uses plugins called providers to interface with the resources in the cloud provider.

Providers are further extended with modules: groups of resources that are the building blocks you will use to create a cluster.

But let’s take a break from the theory and see those concepts in practice.

Before you can create a cluster with Terraform, you should install the binary.

You can find the instructions on how to install the Terraform CLI from the official documentation.

Verify that the Terraform tool has been installed correctly with:

$ terraform version  
Terraform v0.14.6

Before diving into the code, there are a few prerequisites to complete.

Terraform uses a different set of credentials to provision the infrastructure, so you should create those first.

You will first need to get your subscription ID.

$ az account list  
# OR, more advanced way to get it:  
$ az account list |  grep -oP '(?<="id": ")[^"]*'

Make a note now of your subscription id.

If you have more than one subscription, you can set your active subscription with az account set --subscription="SUBSCRIPTION_ID". You still need to make a note of your subscription id.

Terraform needs a Service Principal to create resources on your behalf.

You can think of a Service Principal as a user identity (login and password) with a specific role and tightly controlled permissions to access your resources.

It could have fine-grained permissions such as only to create virtual machines or read from particular blob storage.

In your case, you need a Contributor Service Principal — enough permissions to create and delete resources.

You can create the Service Principal with:

$ az ad sp create-for-rbac \  
  --role="Contributor" \  
  --scopes="/subscriptions/SUBSCRIPTION_ID"

The previous command should print a JSON payload like this:
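(The values below are placeholders; your output will contain the real identifiers.)

{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-2021-02-01-10-00-00",
  "name": "http://azure-cli-2021-02-01-10-00-00",
  "password": "0000000000000000000000000000000000",
  "tenant": "00000000-0000-0000-0000-000000000000"
}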

Make a note of the appId, password, and tenant. You need those to set up Terraform.

Export the following environment variables:

$ export ARM_CLIENT_ID=        # the appId from the Service Principal output
$ export ARM_SUBSCRIPTION_ID=  # your subscription id
$ export ARM_TENANT_ID=        # the tenant from the Service Principal output
$ export ARM_CLIENT_SECRET=    # the password from the Service Principal output

Creating a resource group with Terraform

Let’s create the most straightforward Terraform file.

Don’t worry if you are not familiar with the Terraform code; I will explain everything in a minute.

Now create a file named main.tf with the following content:
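A minimal sketch is shown below; it assumes the azurerm provider and the learnk8sResourceGroup name and northeurope location used throughout this article:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup"
  location = "northeurope"
}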

You will notice something familiar. This time you will utilize Terraform code to create a resource group.

Terraform commands

In the same directory run:

$ terraform init

The command will initialize Terraform and execute a couple of crucial tasks.

  1. It downloads the Azure provider that is necessary to translate the Terraform instructions into API calls.

  2. It will create a .terraform folder with the provider plugins and a lock file. The state file, created when you apply the configuration, is used to keep track of the resources that have been created already.

Consider the files as a checkpoint; without them, Terraform won’t know what has been already created or updated.

If you further want to validate that the configuration is correct, you can do so with the terraform validate command.

You’re now ready to create your resource group using Terraform.

Two commands are frequently used in succession.

The first is:

$ terraform plan  
[output truncated]  
Plan: 1 to add, 0 to change, 0 to destroy.

Terraform will perform a dry-run and prompt you with a detailed summary of the resources that are about to be created.

It’s always a good idea to double-check what happens to your infrastructure before you commit the changes.

You don’t want to accidentally destroy a database because you forgot to add or remove a resource.

Once you are happy with the changes, you can create the resources for real with:

$ terraform apply  
[output truncated]  
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

On the prompt, confirm with yes, and Terraform will create the Resource Group.

Congratulations, you just used Terraform to provision a resource!

You can imagine that by adding more resource blocks, you can create more components in your infrastructure.

You can have a look at all the resources that you could create in the left column of the official provider page for Azure.

Please note that you should have sufficient knowledge of Azure and its resources to understand how components can be plugged in together. The documentation provides excellent examples, though.

Before you provision a cluster, let’s clean up the existing resources.

You can delete the resource group with:

$ terraform destroy  
[output truncated]  
Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

Terraform prints a list of resources that are ready to be deleted, and as soon as you confirm, it destroys all the resources.

Terraform step by step

Create a new folder with the following files:

  • main.tf - to store the actual code for the cluster.

  • outputs.tf - to define the outputs.

In the main.tf file, copy and paste the following code:
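The following is a minimal sketch of the cluster definition. It assumes the azurerm provider, the resource names used later in this article (rg and cluster), and a two-node pool of standard_d2_v2 instances; the exact file is in the linked repository:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup"
  location = "northeurope"
}

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "learnk8s"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "learnk8s"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "standard_d2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}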

And in the outputs.tf add the following:
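A sketch of outputs.tf, assuming the local provider's local_file resource is used to write the kubeconfig next to your Terraform files:

resource "local_file" "kubeconfig" {
  depends_on = [azurerm_kubernetes_cluster.cluster]
  filename   = "kubeconfig"
  content    = azurerm_kubernetes_cluster.cluster.kube_config_raw
}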

Since there aren’t many variables to define, creating a separate variables.tf file will be skipped for now.

That’s a lot of code!

But don’t worry, I will explain everything as soon as you create the cluster.

Continue and from the same folder run the commands as before:

To initialize Terraform, use:

$ terraform init

To perform a dry-run and inspect what Terraform will create:

$ terraform plan  
# output truncated  
Plan: 3 to add, 0 to change, 0 to destroy.

Finally, to apply everything and create the resources:

$ terraform apply  
# output truncated  
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

After issuing the apply command, you will be prompted to confirm, and same as before, just type yes.

It’s time for a cup of coffee.

Provisioning a cluster on AKS takes, on average, about fifteen minutes.

When it’s complete, if you inspect the current folder, you should notice a few new files:

$ tree .  
.  
├── kubeconfig  
├── main.tf  
├── outputs.tf  
├── terraform.tfstate  
└── terraform.tfstate.backup

Terraform uses the terraform.tfstate to keep track of what resources were created.

The kubeconfig is the kube configuration file for the newly created cluster.

Inspect the cluster pods using the generated kubeconfig file:
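$ kubectl get pods --all-namespaces --kubeconfig kubeconfig
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-748cdb7bf4-2tsj5              1/1     Running   0          8m
kube-system   kube-proxy-cdl9z                      1/1     Running   0          8m
[output truncated]

(The exact Pods and names will vary on your cluster.)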

If you prefer not to pass the --kubeconfig flag to every command, you can export the KUBECONFIG environment variable as:

$ export KUBECONFIG="${PWD}/kubeconfig"

The export is valid only for the current terminal session.

Now that you’ve created the cluster, it’s time to go back and discuss the Terraform file.

The Terraform file that you just executed is divided into several blocks, so let’s look at each one of them.

The first two blocks of code are the required_providers block (Terraform v0.13+) and the provider block.
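Here they are again, repeated from the sketch above:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}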

This is where you define which provider (AWS, GCP, Azure) your Terraform configuration will work with, and which must be installed.

The source and versions are self-explanatory.

You define the source from which to download the provider, usually hashicorp/<provider>, and which version of that provider to use.

If you want to learn more about version constraints, you can take a look here.
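Next comes the Resource Group block, repeated from the sketch above:

resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup"
  location = "northeurope"
}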

In the above resource block, you define which resource you want to be created.

In this case, a Resource Group along with its required parameters.

Finally, there is one more resource definition needed:
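This is the cluster resource itself, as sketched in main.tf:

resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "learnk8s"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "learnk8s"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "standard_d2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}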

Let’s explain in detail what is defined in the code here.

In the first part, the azurerm_kubernetes_cluster is the actual resource that manages an Azure Kubernetes cluster.

The cluster part is the local name for that resource and is only used as a reference within the scope of the module.

The name and dns_prefix are used to define the cluster's name and DNS name.

Notice the lengthy values for location and resource_group_name.

Those values have already been defined inside the Resource Group block, but you can retrieve them as attributes.

In the snippet above, the attribute azurerm_resource_group.rg.location resolves to northeurope and resource_group_name to learnk8sResourceGroup.

Referencing attributes is convenient, so you can tweak the value in a single place instead of copying and pasting it everywhere.

In the default_node_pool you are defining the specs for the worker nodes.

In short, Terraform will create a pool named default, consisting of 2 nodes, with an instance type of standard_d2_v2.

Finally, the last block is used to define the type of the identity, which is SystemAssigned.

This means that Azure will automatically create the required roles and permissions, and you won’t need to manage any credentials.

These credentials are tied to the lifecycle of the service instance.

You can read more about the types here.

The outputs.tf, as its name suggests, will output some value that you define in it.

The resource here creates a local file populated with the kube configuration, which gives you access to the cluster.

The required parameters are filename and content; the latter uses the cluster’s kube_config_raw attribute.

The depends_on is not required, but it's best to set it as a precaution.

This is a meta-argument that sets a dependency on something, either a resource or a module, that must exist before another block gets executed.

Here you want the cluster to be created first before fetching the kubeconfig value. Otherwise, Terraform may create an empty file.

The Azure CLI vs Terraform — pros and cons

You can already tell the main differences between the Azure CLI and Terraform:

  • Both create an AKS cluster.

  • Both export a valid kubeconfig file.

  • The configuration with the Azure CLI is more straightforward and more concise.

  • Terraform goes into great detail and is more granular. You have to craft every single resource carefully.

So which one should you use?

For smaller experiments, when you need to spin up a cluster quickly, you should consider using the Azure CLI.

With a short command, you can easily create it.

For production-grade infrastructure where you want to configure and tune every single detail of your cluster, you should consider using Terraform.

But there’s another crucial reason why you should prefer Terraform — incremental updates.

Let’s imagine that you want to add a second pool to your cluster.

Perhaps you want to add another, more memory-optimized node pool to your cluster for your memory-hungry applications.

NOTE: To proceed with the changes, you will have to reduce the node count to one in the default pool.

As mentioned before, there are resource quotas that limit the CPU cores to 4.

You can edit the file and add the new node pool at the bottom of the config as follows:
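A sketch of the extra pool, assuming a memory-optimized standard_d11_v2 instance and the resource names from above (remember to also set node_count to 1 in the default pool):

resource "azurerm_kubernetes_cluster_node_pool" "mem" {
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = "mem"
  node_count            = 1
  vm_size               = "standard_d11_v2"
}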

Proceed with the previous commands to plan and apply the changes:

$ terraform plan  
Plan: 1 to add, 1 to change, 0 to destroy.

Let us apply our config:

$ terraform apply  
azurerm_kubernetes_cluster.cluster: Modifying...  
[output truncated]  
Apply complete! Resources: 1 added, 1 changed, 0 destroyed.

Be patient for the two operations to finish.

After the operation is done, you can verify that the new node pool has been added with:
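$ kubectl get nodes --kubeconfig kubeconfig
NAME                              STATUS   ROLES   AGE   VERSION
aks-default-12345678-vmss000000   Ready    agent   45m   v1.18.14
aks-mem-12345678-vmss000000       Ready    agent   3m    v1.18.14

(The node names and versions above are illustrative.)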

Success! You have not only created a production-ready cluster but also modified it to have an additional node pool.

Now it’s time to deploy an application.

Let’s do that next.

Testing the cluster by deploying a simple Hello World app

You can create a Deployment with the following YAML definition:
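A minimal deployment.yaml sketch, assuming the commonly used paulbouwer/hello-kubernetes image listening on port 8080 (the repository linked below has the exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080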

You can also find all the files for the demo app here.

NOTE: To make it easier to run commands against the cluster without specifying the --kubeconfig parameter constantly, you can either export KUBECONFIG or move the generated kubeconfig to ~/.kube/config.

If you have other configs located there, it’s a good idea to back up that file!

$ mv kubeconfig ~/.kube/config

You can submit the definition to the cluster with:

$ kubectl apply -f deployment.yaml  
deployment.apps/hello-kubernetes created

To see if your application runs correctly, you can connect to it with kubectl port-forward.

First, retrieve the name of the Pod:
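$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
hello-kubernetes-7f65c7597f-8dn2l   1/1     Running   0          1m

(Your Pod will have a different name suffix.)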

You can connect to the Pod with:
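$ kubectl port-forward hello-kubernetes-7f65c7597f-8dn2l 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080

(Replace the Pod name with the one from your cluster.)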

Or you can use the trickier option that will automatically get the pod’s name:
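One way to do it, assuming the app=hello-kubernetes label from the Deployment above:

$ kubectl port-forward $(kubectl get pod -l app=hello-kubernetes -o jsonpath='{.items[0].metadata.name}') 8080:8080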

The kubectl port-forward command connects to the Pod with the name hello-kubernetes-7f65c7597f-8dn2l.

And forwards all the traffic from port 8080 on the Pod to port 8080 on your computer.

If you visit http://localhost:8080 on your computer, you should be greeted by the application’s web page.

Great!

Exposing the application with kubectl port-forward is an excellent way to test the app quickly, but it isn't a long-term solution.

If you wish to route live traffic to the Pod, you should have a more permanent solution.

In Kubernetes, you can use a Service of type: LoadBalancer to start up a load balancer to expose your Pods.

You can use the following code:
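A sketch of the Service, assuming the app=hello-kubernetes label and container port 8080 from the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes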

And submit the YAML with:
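For example, assuming you saved the manifest above as service-loadbalancer.yaml:

$ kubectl apply -f service-loadbalancer.yaml
service/hello-kubernetes created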

As soon as you submit the command, AKS provisions an L4 Load Balancer and connects it to your pod.

Eventually, you should be able to describe the Service and retrieve the load balancer’s IP address.

Running:
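$ kubectl describe service hello-kubernetes
Name:                     hello-kubernetes
Type:                     LoadBalancer
LoadBalancer Ingress:     20.67.x.x
Port:                     <unset>  80/TCP
[output truncated]

(The LoadBalancer Ingress address above is illustrative; yours will differ.)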

If you visit the external IP address in your browser, you should see the application.

Excellent!

There’s only an issue, though.

The load balancer that you created earlier serves one service at a time.

Also, it has no option to provide intelligent routing based on paths.

So if you have multiple services that need to be exposed, you will need to create the same number of load balancers.

Imagine having ten applications that have to be exposed.

If you use a Service of type: LoadBalancer for each of them, you might end up with ten different L4 Load Balancers.

That wouldn’t be a problem if those load balancers weren’t so expensive.

How can you get around this issue?

Routing traffic into the cluster with an Ingress

In Kubernetes, another resource is designed to solve that problem: the Ingress.

The Ingress has two parts:

  1. The first is the Ingress object, which is a Kubernetes resource just like a Deployment or a Service. It is defined by the kind field of the YAML manifest.

  2. The second part is the Ingress controller. This is the actual part that controls the load balancers, so they know how to serve the requests and forward the data to the Pods.

In other words, the Ingress controller acts as a reverse proxy that routes the traffic to your Pods.

The Ingress routes the traffic based on paths, domains, headers, etc., which consolidates multiple endpoints in a single resource that runs inside Kubernetes.

With this, you can serve multiple services simultaneously from one exposed endpoint — the load balancer.

There are lots of Ingress controllers that you can choose from:

  1. Nginx Ingress

  2. Ambassador

  3. Traefik

  4. And more.

In this part, however, you will use the add-on that AKS provides to enable an Ingress controller.

Azure provides two ways to enable the Ingress in the cluster.

Either through using the Azure CLI or by defining it in Terraform code.

In the next section, you will learn how to use Terraform to enable Azure Ingress.

First, you will be amending the main.tf file to add the required add-on setting to enable the Azure Ingress controller.

You can either get the new main.tf file from here, or copy and paste it through here:
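Below is only the relevant part of the amended cluster resource; the addon_profile block shown assumes the 2.x azurerm provider used in this article:

resource "azurerm_kubernetes_cluster" "cluster" {
  # ... the same settings as before ...

  addon_profile {
    http_application_routing {
      enabled = true
    }
  }
}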

Notice the last part — addon_profile; with this option, you can enable the Ingress controller for the AKS cluster.

Of course, there are other cluster-specific add-ons available as well.

The add-on can be enabled even if the cluster is already created.

After enabling it, you should (plan and) apply the changes.

You will notice a few new Pods in the cluster:
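$ kubectl get pods -n kube-system
NAME                                                             READY   STATUS    RESTARTS   AGE
addon-http-application-routing-external-dns-55c6bfhx8-abcde     1/1     Running   0          2m
addon-http-application-routing-nginx-ingress-controller-xyz12   1/1     Running   0          2m
[output truncated]

(Pod names and ages will differ on your cluster.)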

In Azure, the Ingress add-on installs Ingress Nginx.

On top of that, enabling the add-on also installs the ExternalDNS that can be used to manage DNS entries automatically.

As soon as you install the add-on, Azure creates an L4 Load Balancer and configures it to route traffic to the Ingress Nginx.

When you submit an Ingress manifest to Kubernetes, the Ingress controller reconfigures itself to route traffic to that Service (and Pods).

Let’s have a look at an example:

The following Kubernetes Ingress manifest routes all the traffic from path / to the Pods targeted by the hello-kubernetes Service.
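A sketch of ingress.yaml, assuming the hello-kubernetes Service from before and the networking.k8s.io/v1 API:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-kubernetes
                port:
                  number: 80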

As soon as you submit the resource to the cluster, the Ingress controller is notified of the new resource.

As with every Ingress controller, it is convenient because you can control your infrastructure solely from Kubernetes; there’s no need to fiddle with AKS anymore.

Now that you know about the Ingress, you can give it a try and deploy it.

Create the Ingress resource by applying the ingress.yaml manifest from above:

$ kubectl apply -f ingress.yaml

Note: You should delete the previous LoadBalancer Service and instead deploy the service.yaml from the repository (https://github.com/k-mitevski/terraform-aks/blob/master/kubernetes/service.yaml), so you don't end up with duplicate load balancers.

Pay attention to the annotation field, though.

kubernetes.io/ingress.class: addon-http-application-routing is used to select the right Ingress controller.

It may take a few minutes (the first time) for the Ingress to be created, but eventually, you will see the following output if you describe it:
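$ kubectl describe ingress hello-kubernetes
Name:             hello-kubernetes
Namespace:        default
Address:          20.54.x.x
[output truncated]

(The Address value above is illustrative; yours will differ.)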

Excellent, you can use the IP from the Address field to visit your application in the browser.

The Ingress add-on is meant as a quick way to install an Ingress and route traffic in the cluster.

However, if you want a more robust solution that can also serve HTTPS traffic, take a look at the following section on how you can deploy Nginx Ingress controller using Helm.

Congratulations!

You’ve managed to deploy a fully working cluster that can route live traffic!

Fully automated Dev & Production environments with Terraform modules

One of the most common tasks when provisioning infrastructure is to create separate environments.

  1. A development environment where you can test your changes and integrate them with other colleagues.

  2. A production environment.

Since you want your apps to progress through the environments, you might want to provision multiple clusters, one for each environment.

When you don’t have infrastructure as code, you are forced to click through the user interface and repeat the same choices.

But since you’ve mastered Terraform, you can refactor your code and create multiple environments with a single command!

The beauty of Terraform is that you can use the same code to generate several clusters with different names.

You can parametrize the name of your resources and create clusters that are exact copies.

You can reuse the existing Terraform code and provision two clusters simultaneously using Terraform modules and expressions.

Before you execute the script, it’s a good idea to destroy any cluster that you created previously with terraform destroy.

You saw the mention of Terraform variables earlier, but let’s revisit them in more detail.

The expression syntax is straightforward.

You define variables like this:
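A sketch (the default values below match the names used in this article):

variable "cluster_name" {
  default = "learnk8scluster"
}

variable "env_name" {
  default = "dev"
}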

You can create and add the definitions in a variables.tf file.

Later, you can reference and chain the variables in the AKS resource like this:
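For instance, in the cluster resource (a sketch; only the changed lines are shown):

resource "azurerm_kubernetes_cluster" "cluster" {
  name       = "${var.cluster_name}-${var.env_name}"
  dns_prefix = "${var.cluster_name}-${var.env_name}"
  # ... the rest stays the same ...
}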

Terraform will interpolate the string to “learnk8scluster-dev”.

When you execute the usual terraform apply command, you can override the variable with a different name.

For example:

$ terraform apply -var="cluster_name=my-cluster" -var="env_name=staging"

This will provision a cluster with the name of my-cluster-staging.

In isolation, expressions are not particularly useful.

Let’s have a look at an example.

If you execute the following commands, what do you expect?

Does Terraform create two clusters, or does it update the dev cluster to a staging cluster?

$ terraform apply -var="env_name=dev"
# and later
$ terraform apply -var="env_name=staging"

The code updates the dev cluster to a staging cluster.

It’s overwriting the same cluster!

But what if you wish to create a copy?

You can use the Terraform modules to your advantage.

Terraform modules use variables and expressions to encapsulate resources.

Move your main.tf, variables.tf, and outputs.tf into a subfolder, for example one named main, and create an empty main.tf at the root.

From now on, you can use the code that you’ve created as a reusable module.

In the subfolder, where the main.tf file is located, append the env_name variable to the Resource Group.

In this way you can differentiate and create different clusters in separate resource groups:
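For example:

resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup-${var.env_name}"
  location = "northeurope"
}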

Now you can reference all the code from the main.tf resource like this:
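A sketch of the root main.tf, assuming the module code lives in the ./main subfolder:

module "dev_cluster" {
  source       = "./main"
  cluster_name = "learnk8scluster"
  env_name     = "dev"
}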

And since the module is reusable, you can create more than a single cluster:
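For example, a second block for production:

module "prod_cluster" {
  source       = "./main"
  cluster_name = "learnk8scluster"
  env_name     = "prod"
}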

You can preview the changes with:

$ terraform plan  
# output truncated  
Plan: 6 to add, 0 to change, 0 to destroy.

Warning! Before applying, you should be aware that there are quota limits on the free tier account for AKS, as mentioned before. The most critical is the CPU core quota. To work around it for this tutorial, the Terraform code for running multiple clusters is changed to deploy two clusters with single-node pools instead of the usual three.

You can apply the changes and create two clusters that are exact copies with:

$ terraform apply  
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

The two clusters have the AKS Ingress add-on enabled automatically, so they can handle external traffic.

What happens when you update the cluster module?

When you modify a property, Terraform will update all clusters with the same property.

If you wish to customize the properties on a per-environment basis, you should extract the parameters in variables and change them from the root main.tf.

Let’s have a look at an example.

You might want to run the same instance type such as standard_d2_v2 in the dev environment but change to standard_d11_v2 instance type for production.

As an example, you can refactor the code and extract the instance type as a variable:

variable "instance_type" {
default = "standard_d2_v2"
}

And add the corresponding change in the Azure resource like:

#...
default_node_pool {
  name       = "default"
  node_count = "1"
  vm_size    = var.instance_type
}
#...

Notice the variable reference; since you aren’t chaining two or more variables, there is no need to wrap it in ${}. Instead, you can reference it directly as var.variable_name.

Later, you can modify the root main.tf file with the instance type:
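For example (a sketch; dev keeps the smaller instance while production uses the memory-optimized one):

module "dev_cluster" {
  source        = "./main"
  cluster_name  = "learnk8scluster"
  env_name      = "dev"
  instance_type = "standard_d2_v2"
}

module "prod_cluster" {
  source        = "./main"
  cluster_name  = "learnk8scluster"
  env_name      = "prod"
  instance_type = "standard_d11_v2"
}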

If you wish, you can apply the changes and verify each cluster with its corresponding kubeconfig file.

Excellent!

As you can imagine, you can add more variables to your module and create environments with different configurations and specifications.

This marks the end of your journey!

A recap on what you’ve built so far:

  • You’ve created an AKS cluster using both the Azure CLI and Terraform.

  • You used the AKS add-on to enable Ingress, define a resource, and route live traffic.

  • You parameterized the cluster configuration and created a reusable module.

  • You used the module as an extension to provision multiple copies of your cluster (dev and production).

  • You made the module more flexible by allowing minor customizations such as changing the instance type.

Well done!

That’s all folks!

If you enjoyed this article, you might find the next articles interesting.

Last week we deployed to our AWS EKS cluster using Terraform. Today, we continued our Terraform series and created our K8s cluster on Azure using AKS and Terraform.

Kudos for getting all the way to the end of this article.

Stay tuned :) as we go to the next deployment, using GKE.

Feel free to reach out should you have any questions.

#YouAreAwesome #StayAwesome
