Initially this might seem like a bit of an odd topic: “Build” in a software development process usually refers to the process of compiling an application and preparing it for distribution (creating MSI files etc.). Whilst we are writing infrastructure as code, there’s no compilation involved and no creating of installers, so what are we doing here?
We’re going to make use of some of the processes available in a build to help us transition our infrastructure code from its raw form in Git into a form that is ready for use in a release management tool (release management will be part 4 of this series). We’re also going to leverage the work we did in part 2 creating tests, and automate this test process as part of our pipeline. So our Infrastructure Pipeline “Build” process is going to look like:
- Check our code out of Git
- Run our code tests
- Deploy a temporary test environment
- Run our infrastructure tests
- Tear down the test environment
- Bundle our code ready for release
- Upload our code to a distribution location ready for release
So no compilation, but lots of other aspects of “Build”. Once we complete this we are at a stage where, whenever we check in some new infrastructure as code, it can be automatically tested, both for code correctness and that it deploys what we expect it should, then bundled up and delivered to our desired distribution location, all without any interaction from us (assuming our tests pass).
So far we have got away with just utilising GitHub and VS Code for our work, but for a build process you really will benefit from using a build tool. We are going to utilise Visual Studio Team Services (VSTS) in this article, because it’s readily available in Azure and free for up to 5 users. That said, nothing we do here will be particularly VSTS specific; if you prefer another build tool, feel free to use that. I’m not going to cover the process of creating a VSTS account here, so I will assume you have that set up and you have created a new, blank project.
Before we can start creating our build tasks we need to add a way for VSTS to communicate with Azure. VSTS is going to be the one talking to Azure to create the test environments and run the tests, so it needs rights in Azure to be able to do this. VSTS achieves this by using a service principal to access Azure. This is essentially a service account for Azure, based on an Azure AD app instance. We need to create this and grant it rights on our subscription.
Creating a service principal can be done a number of ways. One very quick way, if you have Docker running on your machine, is to use a container created by Julien Stroheker which accepts the username and password you want to use for your service principal, creates it for you, and provides back all the details you need for VSTS (be sure to check the source to make sure you are happy with what this is doing).
docker run -it julienstroheker/add-azure-spn <NameApp> <PasswordApp>
Alternatively, if you don’t have access to Docker or don’t want to run a container, you can create the service principal through PowerShell or the Azure CLI.
Azure CLI – http://blog.jstroheker.com/2016/10/11/SPNAzure/
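If you have the AzureRM PowerShell module installed, the creation can also be sketched as below. This is an illustrative, minimal version — the display name and password are placeholders, and depending on your AzureRM module version the -Password parameter may expect a plain string rather than a SecureString:

```powershell
# Sign in interactively before creating the principal
Login-AzureRmAccount

# Placeholder credentials - substitute your own
$password = ConvertTo-SecureString "YourStrongPasswordHere" -AsPlainText -Force
$sp = New-AzureRmADServicePrincipal -DisplayName "vsts-build-sp" -Password $password

# Grant Contributor rights so VSTS can create and delete resource groups.
# The short sleep gives the new principal time to propagate in Azure AD.
Start-Sleep -Seconds 20
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $sp.ApplicationId
```

You will also need your tenant and subscription IDs for the VSTS endpoint dialog, which Get-AzureRmSubscription will show you.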
Once you’ve created the service principal you need to add it to VSTS. Go to your new project then click on the settings cog wheel in the top right. In this page go to the settings tab, click the “New Service Endpoint” menu then go to “Azure Resource Manager”. In the window that opens, click the text that says “use the full version of this endpoint dialog” to be able to enter the details of the account. The simple window does allow you to have VSTS create the SP for you, but I have had limited success with this and generally create it myself and use the full window.
Complete this with the details of your SP and click OK.
Creating the Build Definition
We need to create a build definition to contain all the steps above, so in our VSTS project we’ll go to the “Build and Release” tab and click new to create a build definition. VSTS will offer a list of templates for predefined builds, but we are going to start with a blank one, so click the button to create an empty process.
Now that we have an empty build definition we need to get hold of the code to work with, so the build needs to be configured to pull that from the version control repository we set up in part 1. VSTS comes with its own built-in Git repository, so you can use that to store your code, but I’m going to continue to use GitHub for my repo and pull the code from there.
In the build definition there will be one step at the top called “Get Sources”. If we click on that we can then choose our source control provider and fill in the details of which repository and branch we want to build from. If you’re using a source that isn’t VSTS then you will also need to provide credentials to access it.
Running Pester Tests
Now we have our project checked out, we can start using the files in our process. The first thing we want to do is run our Pester code tests to check for things like syntax errors, missing ARM template sections etc.
Now we could just run this using a plain PowerShell task in VSTS, this lets you run any PowerShell you want inside the build. However we can make things a bit easier by instead using a Pester Extension in VSTS. VSTS extensions are pre-built units of code that do certain tasks, in this case a Pester extension will make sure Pester is installed on the build machine, run our tests for us, and export the results so we can consume them later.
There are actually two Pester extensions in the VSTS Marketplace, but we are going to use this one by Peter Groenewegen. The reason we picked this extension is that, as well as Pester, it includes the code to automatically connect to Azure using the service principal we created earlier. This is great, because we need to connect to Azure to run our tests. So we’ll click on the option in the Marketplace to install it then head back to VSTS to set it up.
In our Build Definition we will have a section called “Phase 1” which is empty. If we click the plus icon we will see a long list of steps we can add. We’ll search for Pester and we’ll see two tasks: “Pester PowerShell Unit Tests” and “Pester PowerShell deployment tests on Azure”. We need to pick the Azure option.
Once we add this we then get a page with some options to complete:
- Display Name – A sensible name for these tests
- Azure Connection Type – Whether Classic or Azure Resource Manager (ARM), we will pick ARM
- Azure RM Subscription – Here we will pick the service principal we created earlier from the drop down
- Test File Pattern – This is where you select which test files to run. Unfortunately this extension doesn’t support the use of Pester tags, so instead we have to use a naming convention to select tests. I have named all my code tests “something.unit.tests.ps1” to differentiate them from my infrastructure tests that will run later, so we can just search for *.unit.tests.ps1. This will find any tests that match that pattern and run them.
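For reference, a code-level test matching that naming convention might look something like the sketch below. The template path and the specific checks are illustrative, not taken from the series repo:

```powershell
# storage.unit.tests.ps1 - hypothetical example of a code-level ARM template test
Describe "storage.json template" {

    $path = "$PSScriptRoot\templates\storage.json"

    It "is valid JSON" {
        { Get-Content $path -Raw | ConvertFrom-Json } | Should Not Throw
    }

    $template = Get-Content $path -Raw | ConvertFrom-Json

    It "contains the required top-level sections" {
        $template.'$schema'  | Should Not BeNullOrEmpty
        $template.parameters | Should Not BeNullOrEmpty
        $template.resources  | Should Not BeNullOrEmpty
    }
}
```

Tests like this run without touching Azure at all, which is what makes them safe to run on every check-in.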
Here’s the completed test setup:
Once we save that, we now have the first step in our build process. You can, if you want, now queue a build to test it. Just click the “Queue new build” button, make sure to select “hosted” as the build agent option and click OK. This will start a build and give you a link to click so you can go and watch the process. You will see a log file that shows you the progress of your tests and hopefully everything passes. If you get any failed tests then the build step will fail.
Our next set of tests checks that what we expect to be deployed from our ARM templates is actually being deployed, so for this we need to actually deploy some resources into Azure to test against. To do this we are going to have VSTS run our ARM templates to create the infrastructure, run the tests and then tear it down again. This is done with 3 build tasks:
First, we deploy the Infrastructure – this uses the Azure Resource Group Deployment task, again from the add task menu.
Inside this task we define a few parameters:
- Display Name – Whatever you want this task to be called
- Azure Subscription – Again pointing to our Service Principal created earlier
- Action – We are creating resources so we use the “Create or update resource group” option
- Resource Group – The name of the resource group you want to deploy to (this does not have to exist, it will create it for you)
- Location – The Azure region you want to deploy to
- Template Location – This is where to look for the template. In this example we will use “Linked artifact”, which will look inside this project, but later you may want to change this to a URL to look in Azure storage, for example
- Template – The actual location of the template file relative to the root of the project, this can use wild cards
- Template Parameters – The path to the parameter file that will be used with the template
- Deployment Mode – As we are going to be deploying this clean every time this can be either mode, I use incremental
- Everything else is left as default
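Under the hood this task does much the same as you would at a PowerShell prompt. A rough equivalent, with example names for the resource group and file paths, looks like this:

```powershell
# Roughly what the "Create or update resource group" action does (names are examples)
New-AzureRmResourceGroup -Name "pipeline-test-rg" -Location "West Europe" -Force

New-AzureRmResourceGroupDeployment -ResourceGroupName "pipeline-test-rg" `
    -TemplateFile ".\templates\storage.json" `
    -TemplateParameterFile ".\parameters\storage.parameters.json" `
    -Mode Incremental
```

The advantage of the task over raw PowerShell is that authentication via the service principal and error reporting into the build log come for free.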
For this example we have all our parameters hard coded in our template parameter file, however to be more dynamic you could use VSTS variables or parameters instead, and you would then put tokens in your template to get replaced at build time. We will cover this in more detail in the future.
The complete deploy task looks like this:
Once the deployment completes we then want to run our pester tests for testing the infrastructure (see part 2 for more details on this). This is going to use a pester test task that is nearly identical to the one used in the first stage, the only difference here is the search pattern for the test file will now be *.infra.tests.ps1. Our test file, in the interest of being as verbose as possible with our tests, has the resource group and resource names hard coded in it. Again if you want to be more flexible you could look to parameterise these.
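An infrastructure test following the *.infra.tests.ps1 convention might be sketched as below. The resource group and account names are illustrative placeholders — in practice they would match what your parameter file deploys:

```powershell
# storage.infra.tests.ps1 - hypothetical infrastructure test with hard coded names
Describe "Deployed storage account" {

    $rgName      = "pipeline-test-rg"     # example values - match your parameter file
    $accountName = "pipelineteststorage"

    It "has a resource group" {
        { Get-AzureRmResourceGroup -Name $rgName } | Should Not Throw
    }

    It "has the storage account in the resource group" {
        $account = Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name $accountName
        $account | Should Not BeNullOrEmpty
    }
}
```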
The final step in these tests is to delete the resource group when we are done. This again uses the “Azure Resource Group Deployment” task, but instead of the create or update option we use the “Delete resource group” option, which then only requires the name of the resource group to delete. One extra thing we do in this task is to enable the option to run even if a previous task failed or the build was cancelled. This ensures that the resource group is always cleaned up.
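The PowerShell equivalent of this clean-up is a single cmdlet (again with an example resource group name):

```powershell
# Equivalent of the "Delete resource group" action; -Force suppresses the confirmation prompt
Remove-AzureRmResourceGroup -Name "pipeline-test-rg" -Force
```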
Review Test Results
Now we have done all our tests we want a way to be able to review them when the build finishes. Another benefit of using the market place task for pester is that it automatically outputs the tests results in XML files in NUnit format, so we can use the built in Visual Studio “Publish Test Results” task to pull these in and display the results.
This task only requires the format of the test results and where to find them. The Pester runner saves the test results one level above the actual test files and with a name of Test-<filename>.xml, so we tell the task to look in this location for any files with a pattern of **/TEST-*.xml. Again, we want this task to always run so we set the run even if the previous task failed option, otherwise we wouldn’t get test results for failed tests.
Now when we run a build we get a summary of our test results right in the build status page.
Package and Transport Files
In the last part of my build process I want to take my now fully tested ARM templates and package them up in a zip file and upload them to Azure storage so that I can distribute them to developers and my release management tools. To do this we will undertake 3 more build tasks.
The first thing we want to do is copy the files we want to bundle to the VSTS artifacts staging directory. This is just a folder you can use to prepare files before release. We want to copy them here because we only want to bundle up the ARM template and parameter files, and not everything else in the project. We’ll use the copy files task in VSTS.
We’ll provide this with details of the files we want to copy (so anything in the templates or parameters folder) and the place we want to copy to (the artifacts staging directory).
Now we have the files in the right place we’re going to use an archive files task to zip them up.
This just takes the location of the folder to compress, the type of compression to use and the name of the file to create.
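If you wanted to reproduce this step locally, recent versions of PowerShell can do the same thing with Compress-Archive. The paths here are examples; inside a build the source would be the artifacts staging directory:

```powershell
# Roughly what the archive task does - zip the contents of the staging folder
$stagingDir = "C:\build\staging"   # example path; the build uses Build.ArtifactStagingDirectory
Compress-Archive -Path "$stagingDir\*" -DestinationPath "$stagingDir\templates.zip" -Force
```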
Finally, we are going to take this zip and upload it to Azure blob storage. As you might expect, there is a built in task to do this:
This task utilises our Service Principal as well, so we don’t need to provide storage keys to VSTS, we just give it the service principal details, the type of storage to use, the name of the storage account to use and the name of a container to place it in (VSTS will create this container).
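Outside of VSTS, a rough PowerShell equivalent of this upload looks like the sketch below. The resource group, account and container names are examples; note that at the prompt you authenticate with a storage key, whereas the build task uses the service principal:

```powershell
# Rough equivalent of the Azure file copy task (all names are examples)
$rgName  = "artifacts-rg"
$account = "mybuildartifacts"

$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $rgName -Name $account)[0].Value
$ctx = New-AzureStorageContext -StorageAccountName $account -StorageAccountKey $key

# Create the container if it doesn't already exist, then upload the zip
New-AzureStorageContainer -Name "builds" -Context $ctx -ErrorAction SilentlyContinue
Set-AzureStorageBlobContent -File ".\templates.zip" -Container "builds" -Context $ctx
```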
Now we have all our steps defined we can kick off a build with the “Save and Queue” button. This should then run through all our steps, and what you should hopefully see at the end of it is a completed green build with all the tests passed and artifacts created.
You’ll notice from my build history that it doesn’t always run smoothly; I had to make some tweaks to get paths correct and test results to show, so don’t expect it to run through first time, but it shouldn’t take much time to debug. If you do get failures you can dive into the build logs and see detailed logs for each step, which should help you pinpoint the issue.
This is obviously a fairly small scale example; in the real world your ARM templates and your tests will be much more complicated, but that doesn’t mean you can’t follow these same steps to create your build.
All of the code for this example is on Github here, this includes the VSTS build definitions exported as JSON. If you want to try them out on your own VSTS instance just go to the build page and click the “Import” button and point it at this file.
In the last article of this series we are going to take what we have built in this article and feed that into a release management system to automate the deployment of our production environments. See you then!