There’s a big change coming to Azure VM networking in September 2025. Yes, that’s a way off, but it will pay to plan for this now.
VM networking in Azure has long differed from AWS, in that Azure VMs are automatically able to make outbound connections to the internet, with no configuration required from the user. AWS, on the other hand, takes the opposite approach: by default, its VMs have no internet access, and you have to explicitly configure it if you need it, through something like a NAT Gateway.
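If you want to get ahead of the change, the fix is to give your VMs an explicit outbound path, and a NAT Gateway is a common way to do that in Azure as well. As a rough sketch (not a full walkthrough), this is roughly how that association looks with Pulumi’s azure-native provider; the names and address ranges are just placeholders:

```typescript
import * as azure from "@pulumi/azure-native";

// Resource group and virtual network that the VMs will live in.
const rg = new azure.resources.ResourceGroup("rg-outbound-demo");

const vnet = new azure.network.VirtualNetwork("vnet", {
    resourceGroupName: rg.name,
    addressSpace: { addressPrefixes: ["10.0.0.0/16"] },
});

// A static public IP that the NAT Gateway will use for outbound traffic.
const natIp = new azure.network.PublicIPAddress("nat-ip", {
    resourceGroupName: rg.name,
    sku: { name: "Standard" },
    publicIPAllocationMethod: "Static",
});

// The NAT Gateway itself.
const natGateway = new azure.network.NatGateway("nat-gw", {
    resourceGroupName: rg.name,
    sku: { name: "Standard" },
    publicIpAddresses: [{ id: natIp.id }],
});

// Associating the gateway with the VM subnet gives every VM in it an
// explicit outbound path, instead of relying on default outbound access.
const subnet = new azure.network.Subnet("vm-subnet", {
    resourceGroupName: rg.name,
    virtualNetworkName: vnet.name,
    addressPrefix: "10.0.1.0/24",
    natGateway: { id: natGateway.id },
});
```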
If you are using Azure DevOps to deploy resources to Azure, then you probably know the pain of using service principals for your service connection. Using a service principal means storing its credentials in Azure DevOps, and assuming you’re following good security principles and have set an expiry on the service principal secret, you’re probably going to forget to renew it at least once, and your deployments will grind to a halt.
Whilst we’ve been using Bicep files for a while now to create infrastructure as code, we’ve been stuck with the same JSON parameter files we used with ARM templates to pass parameters into our Bicep code. Well, no more! We now have a Bicep parameter file, the .bicepparam file, which allows us to use Bicep syntax and offers several benefits.
All of that said, the JSON parameter files still work, so why should you switch?
I run a small server at home for storing documents, photos, and media, and for running some applications we use around the house. This server runs Unraid, a Linux-based server operating system that gives you a level of disk resilience without the need for a full-blown RAID array and multiple redundant disks. It’s a great OS for a home server, and the way it provides resilience using a parity drive has saved me more than once when I’ve had a hardware failure.
One of the benefits of using a multi-cloud Infrastructure as Code tool like Pulumi or Terraform is that you can easily move between layers of the stack, even across different providers. For example, you can deploy a Kubernetes cluster in Azure with AKS and then, using the same IaC, deploy pods and services into that cluster. To do this, the IaC tool needs to retrieve credentials, in the form of a kubeconfig file, from the cluster it has just deployed.
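As a rough sketch of that pattern with Pulumi’s azure-native and Kubernetes providers (the cluster settings here are illustrative placeholders rather than a recommended configuration):

```typescript
import * as azure from "@pulumi/azure-native";
import * as k8s from "@pulumi/kubernetes";

const rg = new azure.resources.ResourceGroup("rg-aks-demo");

// A minimal AKS cluster, just enough to demonstrate the pattern.
const cluster = new azure.containerservice.ManagedCluster("demo-aks", {
    resourceGroupName: rg.name,
    dnsPrefix: "demoaks",
    identity: { type: "SystemAssigned" },
    agentPoolProfiles: [{
        name: "systempool",
        mode: "System",
        count: 1,
        vmSize: "Standard_DS2_v2",
    }],
});

// Retrieve the kubeconfig for the cluster we just declared...
const creds = azure.containerservice.listManagedClusterUserCredentialsOutput({
    resourceGroupName: rg.name,
    resourceName: cluster.name,
});

// ...decode it (the API returns it base64 encoded)...
const kubeconfig = creds.kubeconfigs[0].value.apply(enc =>
    Buffer.from(enc, "base64").toString("utf-8"),
);

// ...and use it to configure a Kubernetes provider pointing at that cluster.
const aksProvider = new k8s.Provider("aks", { kubeconfig });

// Now the same program can deploy workloads into the new cluster.
const appNamespace = new k8s.core.v1.Namespace("app", {}, { provider: aksProvider });
```

The nice part is that the kubeconfig never leaves the program: it flows from the cluster resource straight into the Kubernetes provider, so anything created with that provider automatically waits for the cluster to exist first.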
Pulumi recently announced the beta release of v2 of their Azure Native provider. This provider allows the creation of Azure infrastructure using Pulumi, with the native part meaning that it is auto-generated from the Azure APIs, rather than hand-crafted like some of the Terraform-based providers. Upgrading to v2 should be a fairly easy process with no huge tasks to undertake, but there are some things to be aware of, which we will look at in this article.
Pulumi recently announced support for using OIDC to authenticate to Azure when deploying your resources. This is separate from the Pulumi service’s own OIDC support, which has been available for a while: it means you can use OIDC from any supported deployment mechanism, without having to use the Pulumi service to get it. Let’s take a look at what that means and how to do it.
Why use OIDC? When you deploy resources to Azure, you need to provide some sort of credential to do the deployment.
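As a very rough sketch of where this ends up, and with the caveat that the option names below (useOidc, oidcToken and friends) are from memory of the azure-native provider’s configuration rather than checked against its docs, an explicit provider set up for OIDC might look something like this. The token itself comes from whatever CI system is running the deployment:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as azure from "@pulumi/azure-native";

const cfg = new pulumi.Config();

// An explicit azure-native provider that authenticates with an OIDC token
// instead of a stored client secret. NOTE: the option names here are my
// recollection of the provider config and should be checked against the docs.
const azureOidc = new azure.Provider("azure-oidc", {
    useOidc: true,
    oidcToken: cfg.requireSecret("oidcToken"), // hypothetical config key, issued by the CI system
    clientId: cfg.require("clientId"),
    tenantId: cfg.require("tenantId"),
    subscriptionId: cfg.require("subscriptionId"),
});

// Anything created with this provider is deployed using the OIDC credential,
// so there is no long-lived secret to store or rotate.
const rg = new azure.resources.ResourceGroup("rg-oidc-demo", {}, {
    provider: azureOidc,
});
```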
Azure Container Apps is Microsoft’s solution for PaaS Kubernetes, abstracting away the Kubernetes layer so the user can focus on delivering their container-based applications. It’s a relatively new service, released last year, and it still has a few missing features and areas that could be improved. Over the last couple of months there have been several announcements and updates for ACA that address some of these, culminating in last week’s Build conference, where we saw quite a few more.
I’ve been running a home Kubernetes lab for a while now, mainly to help with studying for the CKA/CKS exams, which I took a couple of years ago (and which I need to make sure I renew this year!). This lab has been running on a collection of Raspberry Pi 4 boards, which has worked fine; however, it does have some limitations. In particular, I can only run ARM64 images, which limits what I can run.
Resource groups generally aren’t something you think about too much when deploying your Azure resources. They’re essentially a folder containing your resources; once you create one, you don’t do much with it again. However, there is one thing you should think about: the location of your resource group compared to your resources, and how this might impact a disaster recovery or failover scenario.
One of the purposes of a resource group is to store metadata about the resources deployed into it, so it contains details about how each resource is configured, its current state, and so on.
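To make that concrete, here’s a small sketch with Pulumi’s azure-native provider (regions and names are just placeholders) of a resource group in one region holding a resource that runs in another:

```typescript
import * as azure from "@pulumi/azure-native";

// The resource group, and therefore its metadata, lives in North Europe...
const rg = new azure.resources.ResourceGroup("rg-app", {
    location: "northeurope",
});

// ...while the resource inside it is deployed to West Europe.
const storage = new azure.storage.StorageAccount("appstore", {
    resourceGroupName: rg.name,
    location: "westeurope",
    sku: { name: "Standard_LRS" },
    kind: "StorageV2",
});
```

If the resource group’s region has an outage, the storage account keeps running, but control-plane operations that need the group’s metadata, such as updating or redeploying the resources in it, can fail until that region recovers.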