Complex ACS Engine Deployments using Terraform and Helm

I’ve been making fairly heavy use of Kubernetes on Azure of late, and this has mostly been done using ACS Engine to create the clusters. I need to make use of Windows clusters, which are not supported in the AKS PaaS service, so I need to create a custom cluster using ACS Engine.

For those who don’t know, ACS Engine is an open source tool, created by Microsoft, that helps generate ARM templates to deploy custom Kubernetes clusters in Azure. You can read all about it in my previous post.

So, the obvious question: if ACS Engine generates ARM templates for creating Kubernetes clusters, why do we need to use Terraform? Well, ACS Engine does generate these ARM templates, and if all you want to do is deploy the cluster, then you can run them and you’re good. The problem I was facing is that I needed to do some other things before and after I deployed the cluster, things that ARM couldn’t handle (we’ll discuss these in detail shortly). Now, I could do most of this manually, and that is what I did for a while. However, Windows containers are still in the pretty early stages. There are lots of features being added both to the containers and to the way Kubernetes and ACS Engine deal with them, and this required regularly rebuilding the cluster, or creating new ones to test features. These manual tasks became a real pain, and so I started to look at using Terraform. Terraform is a tool for working with deployments across multiple platforms, both cloud providers and things like orchestrators, and it has the flexibility to allow me to automate all the tasks I needed to do when creating a cluster.

You can find copies of the files created in this article on Github here - https://github.com/sam-cogan/Demos/tree/master/acs-engine-terraform

Warning - The process described below is relatively complicated and has many workarounds in it; you should make sure you understand these and are OK with their use before using these scripts. These workarounds are not ideal, and if anyone has suggestions to improve the process, I am happy to incorporate them, so please just let me know.

##Requirements

###Pre-Deployment

Before actually deploying the Kubernetes cluster, I needed to do two things:

  1. Deploy a resource group - This seems relatively obvious, but there is currently no way in ARM to deploy a resource group; you have to create it manually before you run the deployment.
  2. Apply permissions to the resource group - If you don’t grant the service principal that your Kubernetes cluster runs under permissions to the resource group before deployment, some system pods will not get deployed.

###Post-Deployment

Once the cluster has deployed there are some tasks I need to undertake before it is ready for use:

  1. Taint Windows Nodes - I am running a mixed Linux and Windows cluster. I need to avoid Linux containers trying to run on Windows nodes. Ideally, we would do this with an OS selector, but some of the Helm packages I use don’t currently support passing this. To avoid this issue, I taint the Windows nodes so that any packages that need to go on them need specific tolerations.
  2. Create Daemon Set - There is an open issue with Windows on ACS that requires a pod to always be running on each node, so I create a Daemon Set at deploy time.
  3. Install Open Service Broker for Azure (OSBA) - I’m using OSBA to be able to deploy Azure resources such as Azure SQL as part of my Helm files. OSBA requires the deployment of a couple of Helm packages to the cluster with specific settings.
  4. Traefik - I’m using Traefik as my reverse proxy to handle ingress, so this needs to be running to accept inbound connections. Again, this requires running a Helm package with specific settings.

As you can see, these steps could all be done manually. However, as you keep doing them, they become pretty irritating, and so an automated solution is needed. ARM won’t help here; the only thing it could deal with is permissions and NSG rules, so we’re looking to Terraform to help us get to a fully automated deployment.

##Pre-requisites

While Terraform is cross-platform, the solution we are building today only runs on Linux/Mac; this is mainly down to the fact that the Helm provider we are using has Linux and Mac binaries only, not Windows. If you are using Windows, this does all work fine inside the Windows Subsystem for Linux (WSL), which is what I will be using for the rest of the demo. If you don’t want to use the Helm parts, then with some tweaking it could probably run on Windows.

To be able to follow the steps in this article you need the following pre-requisites installed in your Linux/WSL environment:

  1. Terraform
  2. The Terraform Helm provider (the Linux binary mentioned above)
  3. ACS Engine
  4. The Azure CLI
  5. Kubectl

##Terraform Deployment

If you’re not familiar with Terraform, I would recommend reading my article on using Terraform with Azure.

###Resource Group

Creating the resource group and setting permissions for the service principal running Kubernetes is just standard Terraform, using the Azure Terraform provider.

The first thing to do is set up the AzureRM provider with the credentials needed to work with Azure. As you can see, I am passing these in as Terraform variables (see here if you need details on how to work with variables):

provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  tenant_id       = "${var.tenant_id}"
  client_id       = "${var.deployment_client_ID}"
  client_secret   = "${var.deployment_client_password}"
}
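
If you haven’t declared these variables yet, a minimal sketch of the declarations that the snippets in this post rely on might look like the following (the descriptions are mine; declare the remaining variables used later, such as rgName, region and dns_prefix, in the same way):

variable "subscription_id" {
  description = "ID of the Azure subscription to deploy into"
}

variable "tenant_id" {
  description = "Azure AD tenant ID"
}

variable "deployment_client_ID" {
  description = "Client ID of the service principal used for the deployment"
}

variable "deployment_client_password" {
  description = "Secret for the deployment service principal"
}

Values for these can then be supplied in a terraform.tfvars file or via TF_VAR_ environment variables.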

Once that is all set up, we can go ahead and create the resource group and then grant our SP rights to it using the AzureRM provider.

resource "azurerm_resource_group" "acsResourceGroup" {
  name     = "${var.rgName}"
  location = "${var.region}"
}

resource "azurerm_role_assignment" "assignACSUser" {
  scope                = "${azurerm_resource_group.acsResourceGroup.id}"
  role_definition_name = "Contributor"
  principal_id         = "${var.acsAccountID}"
}

###ACS Engine Generate

Now that we have our resource group set up, we need to look at running ACS Engine. ACS Engine generates the ARM templates to deploy the Kubernetes resources. By running ACS Engine in our deployment process, we can make any changes to our ACS configuration before deployment and then generate up-to-date scripts. If you’re not familiar with how ACS Engine works, take a look at my article on creating custom Kubernetes clusters with ACS Engine.

A number of the examples here have been taken from the work done by Daniel Rhodes in his Github repo.

To be able to run ACS engine and generate the scripts we need to have a configuration template. We are going to take the template we created in the article above and update it to use Terraform variables, so we are storing all our configuration in Terraform. The template is just a json file, with variables added. Store this file alongside our Terraform files.


{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "${orchestrator_version}",
      "kubernetesConfig": {
        "enableRBAC": true,
        "enableAggregatedAPIs": true
      }
    },
    "masterProfile": {
      "count": ${master_vm_count},
      "dnsPrefix": "${dns_prefix}",
      "vmSize": "${master_vm_size}",
      "OSDiskSizeGB": 64,
      "storageProfile" : "ManagedDisks"
    },
    "agentPoolProfiles": [
      {
          "name": "linuxpool",
          "count": ${linux_worker_vm_count},
          "vmSize": "${linux_vm_size}",
          "availabilityProfile": "AvailabilitySet",
          "OSDiskSizeGB": 64,
          "storageProfile" : "ManagedDisks"
      },
      {
          "name": "windowspool",
          "count": ${windows_worker_vm_count},
          "vmSize": "${windows_vm_size}",
          "availabilityProfile": "AvailabilitySet",
          "OSDiskSizeGB": 128,
          "storageProfile" : "ManagedDisks",
          "osType": "Windows"
      }
    ],
    "windowsProfile": {
      "adminUsername": "${admin_user_name}",
      "adminPassword": "${admin_password}",
      "WindowsSku": "${windows_sku}"
    },
    "linuxProfile": {
      "adminUsername": "${admin_user_name}",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "${ssh_key}"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "${client_ID}",
      "secret": "${client_secret}"
    }
  }
}

Before we can run ACS Engine, we need to replace the variables in this file with their actual values at deploy time. The rendering itself is done with a “template_file” data source, and to write the result out to disk we use a “null_resource” (basically a resource that is not connected to anything), which allows us to run the “local-exec” provisioner, which just runs an arbitrary command as part of your deployment. We use this combination a few times in this deployment.
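
The template_file data source referenced below isn’t shown above, so here is a minimal sketch of what it could look like. I’m assuming the cluster definition above is saved alongside the Terraform files with its path passed in via a variable, and the variable names on the right-hand side are placeholders you should adjust to match your own:

data "template_file" "acs_engine_config" {
  # Path to the json template above, passed in as a variable (an assumed name)
  template = "${file(var.acs_engine_config_file)}"

  # The keys here must match the ${} placeholders in the json template;
  # the variable names on the right are assumptions, rename to suit
  vars {
    orchestrator_version    = "${var.orchestrator_version}"
    master_vm_count         = "${var.master_vm_count}"
    dns_prefix              = "${var.dns_prefix}"
    master_vm_size          = "${var.master_vm_size}"
    linux_worker_vm_count   = "${var.linux_worker_vm_count}"
    linux_vm_size           = "${var.linux_vm_size}"
    windows_worker_vm_count = "${var.windows_worker_vm_count}"
    windows_vm_size         = "${var.windows_vm_size}"
    admin_user_name         = "${var.admin_user_name}"
    admin_password          = "${var.admin_password}"
    windows_sku             = "${var.windows_sku}"
    ssh_key                 = "${var.ssh_key}"
    client_ID               = "${var.acs_client_id}"
    client_secret           = "${var.acs_client_secret}"
  }
}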

In this step, we take the rendered configuration from the data source and save it to a new file:

resource "null_resource" "render_acs_engine_config" {
  provisioner "local-exec" {
    command = "echo '${data.template_file.acs_engine_config.rendered}' > ${var.acs_engine_config_file_rendered}"
  }

  depends_on = ["data.template_file.acs_engine_config"]
}

Now that we have a rendered configuration file, we can use local-exec again to run the “acs-engine generate” command to generate our ARM templates.


resource "null_resource" "run_acs_engine" {
  provisioner "local-exec" {
    command = "acs-engine generate ${var.acs_engine_config_file_rendered}"
  }

  depends_on = ["null_resource.render_acs_engine_config"]
}

This command creates the files in the “_output/<dns prefix variable>” folder, including the azuredeploy.json and azuredeploy.parameters.json ARM templates and the kubeconfig folder we use later in this deployment.

###Deploy Cluster

We now have our ARM templates generated and ready to deploy. Initially, I had planned to do the obvious here and use the Terraform ARM template resource to deploy them. Unfortunately, there seems to be an issue with this at the moment: when you feed it the parameter file generated by ACS Engine, the deployment errors. I’ve opened an issue to look at this. In the meantime, we instead use “local-exec” again and run the Azure CLI to deploy.


resource "null_resource" "deploy_acs" {
  provisioner "local-exec" {
    command = "az group deployment create --name ${var.dns_prefix} --resource-group ${var.rgName} --template-file ./$(find _output -name 'azuredeploy.json') --parameters @./$(find _output -name 'azuredeploy.parameters.json')"
  }

  depends_on = ["null_resource.run_acs_engine"]
}

This resource runs your ARM template, and after around 15-20 minutes your cluster should be deployed and running.

###Kubeconfig

We now have our cluster deployed; the next few steps carry out actions on the cluster and so need to be able to authenticate to it, which was a bit of a problem with Terraform. Things like the Helm provider we use to deploy our resources to Kubernetes need to be passed a kubeconfig file, which is only generated after we deploy the cluster. Unfortunately, Terraform does not handle situations like this well; it expects all the resources to be available at deploy time. We could get around this by splitting this into two different deployments, one for the cluster and one for the rest, but I wanted to do everything in one go.

To work around this, we need to do a couple of non-optimal things. We’re going to use the “local_file” resource to create a copy of the kubeconfig file we need after the deployment, which we then use for all the resources after this. As the local_file resource is expected to create the file at deploy time, the resources dependent on it won’t fail. This solution is based on the responses to this Github issue.

There is another catch here though; the local_file resource expects the source file (the original kubeconfig) to exist. So to resolve this, we do the following:

  1. Create a folder in the same folder as your Terraform files called “_output”.
  2. Inside this folder create another with the name you are using for the DNS prefix variable.
  3. Inside this folder create another called “kubeconfig”.
  4. Inside this folder create a file called “kubeconfig.<region>.json”, for example kubeconfig.westeurope.json.

Now that we have set this up we can add the local_file resource to the deployment.


resource "local_file" "kubeconfig" {

depends_on = ["null_resource.deploy_acs"]

  content  = "${file("_output/${var.dns_prefix}/kubeconfig/kubeconfig.${replace(lower(var.region)," ","")}.json")}"
  filename = "./terraform.tfstate.helmprovider.kubeconfig"
}

This workaround is far from an ideal approach, and I am looking at better ways to do this, but until Terraform fixes its approach to partial configs, there is a limit to what we can do. This change does have an open item on Github, but it doesn’t look like it will see changes any time soon.

###Kubernetes Setup

There is a Kubernetes provider for Terraform; unfortunately, it does not support creating taints or Daemon Sets, so to fix that we need to go back again to using “null_resource” and “local-exec”. Here we use these to run Kubectl commands to create the taint on all Windows nodes and to create the Daemon Set.

For the taint, we use Kubectl’s ability to select nodes by label to apply it to all nodes that have the OS label set to Windows:

resource "null_resource" "taint_windows" {
  provisioner "local-exec" {
    command = "kubectl taint node -l beta.kubernetes.io/os=windows os=windows:NoSchedule --kubeconfig=./terraform.tfstate.helmprovider.kubeconfig"
  }

  depends_on = ["null_resource.deploy_acs"]
}

For the Daemon Set, we are going to reference a YAML file which details the Daemon set configuration. Store this file alongside the Terraform files. We need to make sure it has tolerations that allow the taint we just created.

apiVersion: v1
kind: Namespace
metadata:
  name: win-daemon
---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: win-daemon
  namespace: win-daemon
  labels:
    name: win-daemon
spec:
  selector:
    matchLabels:
      name: win-daemon
  template:
    metadata:
      labels:
        name: win-daemon
    spec:
      tolerations:
        - key: os
          operator: Equal
          value: windows
          effect: NoSchedule
      containers:
      - name: win-daemon
        image: microsoft/iis:windowsservercore-1709
        resources:
          limits:
            memory: 800Mi
          requests:
            cpu: 100m
            memory: 200Mi
      nodeSelector:
        beta.kubernetes.io/os: windows
      terminationGracePeriodSeconds: 30

Then we use local-exec again to run Kubectl and deploy this.

resource "null_resource" "deploy_daemonset" {
  provisioner "local-exec" {
    command = "kubectl create -f WinDaemon.yaml  --kubeconfig=./terraform.tfstate.helmprovider.kubeconfig"
  }

  depends_on = ["null_resource.taint_windows"]
}

###Helm

The final steps are to set up Helm and deploy Open Service Broker for Azure. We first need to set up the Helm provider and feed it the kubeconfig file we created earlier.


provider "helm" {
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

Next, we need to configure a couple of Helm repositories, as the Service Catalog and OSBA charts are not in the default repository. We use the helm_repository resource for this.

resource "helm_repository" "svc-cat" {
  name = "svc-cat"
  url  = "https://svc-catalog-charts.storage.googleapis.com"
}

resource "helm_repository" "azure" {
  name = "azure"
  url  = "https://kubernetescharts.blob.core.windows.net/azure"
}

Then we can deploy the Service Catalog and Open Service Broker for Azure Helm packages and provide appropriate values for parameters.

resource "helm_release" "catalog" {
  depends_on = ["helm_repository.svc-cat"]
  name       = "catalog"
  chart      = "svc-cat/catalog"
  namespace  = "catalog"

  set {
    name  = "apiserver.storage.etcd.persistence.enabled"
    value = true
  }
}

resource "helm_release" "osba" {
  depends_on = ["helm_repository.azure"]
  name       = "osba"
  chart      = "azure/open-service-broker-azure"
  namespace  = "osba"

  set {
    name  = "azure.subscriptionId"
    value = "${var.subscription_id}"
  }

  set {
    name  = "azure.tenantId"
    value = "${var.tenant_id}"
  }

  set {
    name  = "azure.clientId"
    value = "${var.deployment_client_ID}"
  }

  set {
    name  = "azure.clientSecret"
    value = "${var.deployment_client_password}"
  }
}

Note that I have seen some issues with deploying these two packages. Service Catalog deploys, and what should happen is that the deployment waits for it to be ready. However, what I have seen is that it moves on to running the OSBA package before Service Catalog is ready, and this results in the deployment failing. I am looking into ways to fix this, but for now, if this happens, you can just rerun the deployment once Service Catalog is ready to complete the deployment successfully.
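
One possible mitigation (a sketch only, not what is in the repo linked above) is to make the OSBA release depend on the Service Catalog release directly, and lean on the helm_release wait and timeout arguments so Terraform blocks for longer while the Service Catalog pods become ready before moving on:

resource "helm_release" "osba" {
  # Wait for the Service Catalog release itself, not just the chart repository
  depends_on = ["helm_release.catalog", "helm_repository.azure"]

  name      = "osba"
  chart     = "azure/open-service-broker-azure"
  namespace = "osba"

  # wait is on by default; a larger timeout (in seconds) gives the
  # Service Catalog API server more time to become ready
  wait    = true
  timeout = 600

  # azure.* set blocks as shown above
}

I haven’t fully tested whether this removes the race, so treat it as a starting point rather than a fix.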

Finally, we use Helm again to install the Traefik package. For simplicity, I have truncated the parameters supplied here, as there are a lot, but you can see a list of these on the package page.


resource "helm_release" "traefik" {
  depends_on = ["helm_repository.azure"]
  name       = "traefik"
  chart      = "stable/traefik"
  namespace  = "kube-system"

  set {
    name  = "ssl.enabled"
    value = true
  }

  set {
    name  = "ssl.enforced"
    value = false
  }
}

##Bringing it together

There are many steps here, but all the work you are doing is up front. Once you have your script created and variables set you can run this as many times as required to re-deploy the environment or deploy new environments without much work at all.

There are a good number of workarounds and hacks in this process, which is far from ideal. I am hoping that as Terraform evolves and bugs get fixed, we can update this to be much cleaner and less dependent on workarounds like local-exec, which is not an idempotent way of deploying resources. Luckily, Terraform’s ability to store state makes this much less of an issue.

If anyone has any feedback or better ways to do this, please do get in touch either here, or raise an issue on Github. As mentioned, you can find copies of the files created in this article on Github here - https://github.com/sam-cogan/Demos/tree/master/acs-engine-terraform