How To Run Deployment Scripts On A Virtual Network

Deployment scripts are a feature of Bicep (and ARM templates) that allow you to run either Azure PowerShell or Azure CLI scripts as part of your infrastructure-as-code deployment. This provides a way to do things in your deployment that Bicep can’t do natively; you’ll often hear this referred to as an “escape hatch”. If you can write it in Azure PowerShell or the Azure CLI (or any other PowerShell module you download), you can do it in your deployment script. Some common use cases include:

  • Creating Azure AD users or applications
  • Performing Azure data plane operations
  • Calling an external service or webhook
  • Creating certificates or credentials

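At its simplest, a deployment script is just another resource in your Bicep file with the script inline. Here is a minimal sketch of the idea (no networking involved yet; the location parameter is assumed and the names are illustrative):

resource helloScript 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
  name: 'helloScript'
  location: location
  kind: 'AzureCLI' // 'AzurePowerShell' is the other option (it uses azPowerShellVersion instead of azCliVersion)
  properties: {
    azCliVersion: '2.52.0'
    scriptContent: 'echo "Hello from a deployment script"'
    retentionInterval: 'P1D' // how long to keep the script resources after the run completes
  }
}
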
Deployment scripts are an excellent solution for when Bicep can’t quite do what you need. However, networking has long been a limitation. When you run a deployment script, Azure creates an Azure container instance to run your script and a storage account to hold the script files (unless you specify your own storage account). Up until now, it has not been possible to join this container instance to your virtual networks, which resulted in quite a few limitations:

  • The deployment script can’t work with Azure resources that are locked down to block internet traffic (behind Private Endpoints or Service Endpoints), and the IP address allocated to the container is not under your control
  • The storage account that stores the scripts must be publicly accessible from the internet for the container instance to talk to it. Whilst this still requires credentials, it makes security folks unhappy
  • You can’t run any deployment scripts that need to talk to on-premises resources that aren’t exposed to the internet

A GitHub issue has been open for this problem for a long time, and support has finally arrived. We can now run deployment scripts on a virtual network, so let’s look at how it works.

Joining A Virtual Network

You need to take some extra steps to run your deployment scripts on a vNet.

  1. You need to use your own storage account; you can’t have the deployment script create it for you
  2. The storage account needs to be attached to the vNet either via Service Endpoints or Private Endpoints and needs to allow trusted Azure Services
  3. You need to create a user-assigned managed identity that has a “Storage File Data Privileged Contributor” role assigned to the storage account

Once this is all set up, when we create the deployment script we will specify the subnet, storage account and managed identity to use.

You can configure all of this via the Portal if you wish, but if you’re looking to run deployment scripts, you probably want to do all this in Bicep, so let’s see how.

Prerequisites

In the first set of code, we will work through the three prerequisite steps listed above, creating the storage account and managed identity.

First, I’m assuming you already have a virtual network and subnet you want to join, so we will reference these as existing Bicep resources. If you need to create the vNet yourself, there is a sketch after this snippet.

resource vnet 'Microsoft.Network/virtualNetworks@2021-02-01' existing = {
  name: 'vnet'
}

resource subnet 'Microsoft.Network/virtualNetworks/subnets@2021-02-01' existing = {
  name: 'subnet'
  parent: vnet
}
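
If you do need to create the virtual network yourself, a sketch along the following lines should work in place of the existing references above. The important detail is that the subnet the container joins must be delegated to Microsoft.ContainerInstance/containerGroups, and because we are using service endpoints for storage it also needs the Microsoft.Storage service endpoint. The names and address ranges here are illustrative.

resource newVnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: 'vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/16'
      ]
    }
  }

  resource newSubnet 'subnets' = {
    name: 'subnet'
    properties: {
      addressPrefix: '10.0.0.0/24'
      serviceEndpoints: [
        {
          service: 'Microsoft.Storage' // allows the service endpoint rule on the storage account
        }
      ]
      delegations: [
        {
          name: 'containerDelegation'
          properties: {
            // the container instance that runs the script requires this delegation
            serviceName: 'Microsoft.ContainerInstance/containerGroups'
          }
        }
      ]
    }
  }
}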

Next, we are going to create a storage account. Here, we will use service endpoints to join it to the vNet and allow trusted Azure services.

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    networkAcls: {
      bypass: 'AzureServices'
      virtualNetworkRules: [
        {
          id: subnet.id
          action: 'Allow'
          state: 'Succeeded'
        }
      ]
      defaultAction: 'Deny'
    }
  }
}
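
If you would rather lock the storage account down with a Private Endpoint instead of service endpoints, a rough sketch is below. Deployment scripts mount an Azure file share, so the endpoint targets the file sub-resource. A subnet that is delegated to container groups can’t host private endpoints, so the privateEndpointSubnetId parameter here is a hypothetical, non-delegated subnet, and you would also need a privatelink.file.core.windows.net private DNS zone linked to the vNet, which is omitted for brevity.

param privateEndpointSubnetId string // hypothetical: a separate, non-delegated subnet for the private endpoint

resource storagePrivateEndpoint 'Microsoft.Network/privateEndpoints@2023-09-01' = {
  name: '${storageAccountName}-pe'
  location: location
  properties: {
    subnet: {
      id: privateEndpointSubnetId
    }
    privateLinkServiceConnections: [
      {
        name: 'file'
        properties: {
          privateLinkServiceId: storageAccount.id
          groupIds: [
            'file' // the file share that holds the deployment script
          ]
        }
      }
    ]
  }
}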

Finally, we will create a User Assigned Managed Identity and assign it the correct role on the storage account.

resource userAssignedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: userAssignedIdentityName
  location: location
}

resource storageFileDataPrivilegedContributor 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
  name: '69566ab7-960f-475b-8e7c-b3118f30c6bd' // Storage File Data Privileged Contributor
  scope: tenant()
}

resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  scope: storageAccount
  name: guid(storageFileDataPrivilegedContributor.id, userAssignedIdentity.id, storageAccount.id)
  properties: {
    principalId: userAssignedIdentity.properties.principalId
    roleDefinitionId: storageFileDataPrivilegedContributor.id
    principalType: 'ServicePrincipal'
  }
}

Deployment Script

Now that we have all the prerequisites ready, we can create the deployment script. Note the new settings:

  • Identity - set to use our managed identity
  • Storage Account Settings - pointing to our storage account
  • Subnet Ids - pointing to the vNet subnet where we want the script to run

resource dsTest 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
  name: '${prefix}DS'
  location: location
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${userAssignedIdentity.id}': {}
    }
  }
  kind: 'AzureCLI'
  properties: {
    forceUpdateTag: utcValue
    azCliVersion: '2.52.0'
    storageAccountSettings: {
      storageAccountName: storageAccount.name
    }
    containerSettings: {
      subnetIds: [
        {
          id: subnet.id
        }
      ]
    }
    scriptContent: 'echo "Hello world!"'
    retentionInterval: 'P1D'
    cleanupPreference: 'OnExpiration'
  }
  dependsOn: [
    roleAssignment // make sure the role assignment is in place before the script tries to mount the file share
  ]
}

Once we run this, our deployment script container will be joined to our vNet and can access any resources on that vNet.
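
For completeness, the snippets above assume a handful of parameters. A minimal set of declarations might look like this (the names match what the snippets use; the defaults are illustrative):

param location string = resourceGroup().location
param storageAccountName string
param userAssignedIdentityName string = 'deployment-script-identity'
param prefix string = 'demo'
param utcValue string = utcNow() // forces the script to re-run on every deployment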