This week, I was working with Azure Functions and hit a very frustrating issue that I spent multiple days trying to get to the bottom of. It turned out to be a reasonably straightforward problem with name collisions. Still, it wasn’t obvious and took a lot of digging to diagnose, so I wanted to share some details here in case anyone else hits the same problem.
The Problem

I was working with an Azure Function that had been running for some time, but I needed to make some changes to the code.
A couple of weeks ago we talked about how Microsoft is [deprecating default internet access for VMs](https://samcogan.com/retiring-default-internet-access-for-vms/) in September 2025. This change is being made because default outbound access doesn’t meet Microsoft’s “secure by design” approach. However, you don’t need to wait until 2025 to get ahead of the game and secure your outbound traffic, thanks to the introduction of private subnets.
By declaring your subnet as private, you disable default outbound access, and out of the box, any VMs in that subnet will not have access to the internet.
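To give a rough idea of what this looks like in practice, here is a minimal Bicep sketch. The resource names and address ranges are placeholders for illustration; the key piece is the subnet’s `defaultOutboundAccess` property:

```bicep
// Minimal sketch: a VNet containing one private subnet.
// Names and address ranges are placeholders.
resource vnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: 'example-vnet'
  location: resourceGroup().location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'private-subnet'
        properties: {
          addressPrefix: '10.0.1.0/24'
          // Setting this to false declares the subnet as private,
          // disabling default outbound internet access for its VMs
          defaultOutboundAccess: false
        }
      }
    ]
  }
}
```

Once the subnet is private, anything in it that still needs internet access will need an explicit outbound path, such as a NAT Gateway, a load balancer outbound rule, or a public IP.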
Multitenant applications in what is now called Entra ID, but what I am sure I will continue to call Azure AD (AAD) for some time, are a way for developers to create Azure AD-backed applications that are managed in the developer’s tenant, while users log in, and are managed, from their own tenant. They are a very common way for SaaS developers to handle application authentication whilst leaving user management to the client.
There’s a big change coming to Azure VM networking in September 2025. Yes, that’s a way off, but it will pay to plan for this now.
VM networking in Azure has differed from AWS for a long time, in that Azure VMs are automatically able to make outbound connections to the internet, with no configuration required from the user. AWS, on the other hand, takes the opposite approach: by default, AWS VMs have no internet access, and you have to explicitly configure it if you need it, through something like a NAT Gateway.
If you are using Azure DevOps to deploy resources to Azure, then you probably know the pain of using service principals for your service connection. Using a service principal involves storing its credentials in Azure DevOps. Assuming you’re following good security practice and setting an expiry on the service principal secret, you’re probably going to forget to renew it at least once, and your deployments will grind to a halt.
Whilst we’ve been using Bicep files for a while now to create infrastructure as code, we’ve been stuck using the same JSON parameter files we had been using with ARM templates to pass parameters into our Bicep code. Well, no more! We now have a Bicep parameter file, the .bicepparam file, which lets us use Bicep syntax and offers several benefits.
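To illustrate, here is a minimal, hypothetical .bicepparam file; the file and parameter names are made up for this example:

```bicep
// main.bicepparam: a hypothetical parameter file for a template named main.bicep
using 'main.bicep'

// Plain Bicep syntax, rather than the JSON wrapper objects ARM parameter files require
param location = 'westeurope'
param appName = 'my-web-app'

// Expressions and functions work too, such as reading an environment variable
param administratorPassword = readEnvironmentVariable('ADMIN_PASSWORD')
```

The `using` statement ties the parameter file to a specific template, so tooling can validate your parameters against that template.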
All of that said, the JSON parameter files still work, so why should you switch?
I run a small server at home for storing documents, photos, and media, and for running a few household applications. This server runs Unraid, a Linux-based server operating system that gives you a level of disk resilience without the need for a full-blown RAID array and multiple redundant disks. It’s a great OS for a home server, and the way it provides resilience using a parity drive has saved me more than once when I’ve had a hardware failure.
One of the benefits of using a multi-cloud Infrastructure as Code tool like Pulumi or Terraform is that you can easily work across layers in different providers. For example, you can deploy a Kubernetes cluster in Azure with AKS and then, using the same IaC, deploy pods and services into that cluster. To do this, the IaC tool needs to retrieve credentials, in the form of a kubeconfig file, from the cluster it has just deployed.
Pulumi recently announced the beta release of v2 of their Azure Native provider. This provider allows the creation of Azure infrastructure using Pulumi, with the “native” part meaning that it is auto-generated from the Azure APIs, rather than hand-crafted like some of the Terraform-based providers. Upgrading to v2 should be a fairly easy process; there are no huge tasks to undertake, but there are some things to be aware of, which we will look at in this article.