Azure Container Hosting Demystified - 2022 Edition
Back in the mists of time (otherwise known as 2018), I wrote a post called Azure Container Hosting Demystified which looked at the different container hosting options in Azure, what they are and why you might use them. Four years have passed since I wrote that article. Things have moved on a lot, so it’s time for an updated version.
If you have container workloads you want to host in Azure, then there are many different options for doing that, and it can be pretty confusing trying to pick out which one is right for your project. So today, we will go through all the available options, see what they offer, and why you might want to use them.
If you just want to see a summary of which services are best for which scenarios, jump to the summary table at the end of this post.
Standalone Container Hosting
The first category of hosting options we will look at is where you need to run a single container; you’re not running multiple containers that need to interact with each other or need any complex orchestration. You just want to run a container.
Azure Container Instances
Azure Container Instances (ACI) is a service for running a container as a first-class instance in Azure. There is no orchestrator (that you can see, at least) to deal with; just tell it which container image you want to run, along with any specific settings or volumes required, and run it. You can have a container up and running within a few minutes.
ACI does support some more complex features if you need to take it a step further, such as vNet integration, Azure Files volume mounts and container groups, which allow you to run multiple containers together. ACI supports Windows containers (although not combined with vNet integration), and containers run with Hyper-V isolation, which provides an extra layer of security on a multi-tenant service.
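To give a sense of how little ceremony is involved, here is a rough sketch of creating a single public container instance with the Python management SDK (azure-identity and azure-mgmt-containerinstance). The subscription, resource group and container group names are placeholders; the image is Microsoft’s public hello-world sample.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ContainerPort, IpAddress, Port,
    ResourceRequests, ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# A single container: image, CPU/memory request and a port to expose.
container = Container(
    name="hello",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
    ),
    ports=[ContainerPort(port=80)],
)

# The container group wraps the container and gives it a public IP.
group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    containers=[container],
    ip_address=IpAddress(ports=[Port(port=80)], type="Public"),
)

# Recent SDK versions expose long-running operations as begin_* pollers.
poller = client.container_groups.begin_create_or_update(
    "my-resource-group", "hello-aci", group
)
print(poller.result().provisioning_state)
```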
When would you use ACI: ACI’s primary use is where you just need to run a container and don’t need much beyond that. ACI can also be a great resource if you need to run many instances of the same container or scale up the number of containers dynamically and only pay for what you use. ACI is an excellent tool for job-based workloads that run for a limited time or on-demand services.
When would you not use ACI: ACI is not an orchestrator, so if you need more complex orchestration, private container-to-container networking or greater control of the underlying hardware, ACI is not the right choice. ACI also has size limits (4 vCPUs and 16GB of RAM per container group), so if you need larger containers, you will need to look at other services. vNet integration with ACI is supported but does have quite a few limitations.
Because ACI instances are ephemeral, they need to download the container image each time they are run. If you have large container images, this can increase start times and may be a problem. If this is the case, you may want to consider one of the other options where you can cache images.
Finally, ACI is slightly more expensive than running a VM of the same size, so if you are planning on running your containers 24/7, it may not be the most economical option.
Azure Container Instances | Microsoft Azure
Azure App Service
If you are running a web application or API inside a container, using Azure App Service to host it might be a good way forward. App Service supports running both Linux and Windows containers, along with vNet integration, private registries, CI/CD integration and most of the other features offered by App Service. Using App Service does assume that your application will be exposed to the outside world on ports 80/443, as with the standard App Service offering. Workloads can be scaled dynamically using the existing App Service scaling functionality to create more instances.
App Service is also pushing slightly into the orchestration space: it now supports running multiple containers defined in a Docker Compose file, although this multi-container option is currently in preview.
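As an illustration of how little App Service-specific configuration is needed, here is a rough sketch using the azure-mgmt-web Python SDK to point a Web App at a single container image. It assumes an existing Linux App Service plan, uses placeholder resource names, and passes plain dictionaries where the SDK also accepts its typed model classes.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Resource ID of an existing Linux App Service plan (placeholder values).
plan_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Web/serverfarms/my-linux-plan"
)

# "DOCKER|<image>" tells App Service to run this single container image.
site = {
    "location": "westeurope",
    "server_farm_id": plan_id,
    "site_config": {
        "linux_fx_version": "DOCKER|mcr.microsoft.com/azuredocs/aci-helloworld",
    },
}

poller = client.web_apps.begin_create_or_update("my-rg", "my-container-web-app", site)
print(poller.result().default_host_name)
```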
When would you use App Service: App Service container hosting is focused on supporting web apps or APIs. If your workload fits this scenario and you don’t need a more complex orchestration service, then App Service can be a great option. It is a simple PaaS service that removes much of the management overhead whilst offering many features.
When would you not use App Service: The obvious answer is if your workload does not fit the web app/API scenario. App Service does not allow you to expose your service on other ports or to handle traffic other than HTTP(S). If you need more complex orchestration, or are not happy with the Docker Compose multi-container option, you would also want to look elsewhere. Whilst App Service does support automatic scaling, it uses the built-in abilities of App Service to respond to increases in load and so is not suitable for large-scale bursting. Finally, whilst the Basic and Standard plans support Linux containers, you need the Premium V3 plan to run Windows containers, which can be expensive.
App Service - Web App for Containers | Microsoft Azure
Azure Functions
Azure Functions supports running Linux containers when the function is triggered. Generally, using containers here is secondary to the choice of a serverless approach to running your application. Assuming your application works in a serverless environment, you may want to use containers as an easier way to build and deploy your workloads, rather than choosing Functions as a container hosting environment directly.
Using custom containers with Functions requires the Premium or Dedicated (App Service) plan and supports all the service features such as triggers, bindings, vNet integration and so on.
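The container itself just holds an ordinary function built on one of the mcr.microsoft.com/azure-functions base images. As a hedged example, this is the sort of minimal HTTP-triggered Python function (v1 programming model) you might package into a custom image; the function.json and Dockerfile that `func init --docker` generates are omitted here.

```python
import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read an optional query-string parameter and return a simple greeting.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```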
When would you use Azure Functions: When you have a function-based workload and using containers will help develop and deploy this workload.
When would you not use Azure Functions: Any scenario that does not fit the serverless approach. Custom containers also require the more expensive Premium plan, or a Dedicated (App Service) plan with Always On enabled, which increases the cost.
Create Azure Functions on Linux using a custom image | Microsoft Docs
Azure Batch
Azure Batch is Azure’s platform for running grid-compute workloads on top of VM Scale Sets. Classically, these workloads run directly on the scale set nodes, but it is also possible to run them in containers on those nodes. Azure Batch focuses on running specific jobs rather than hosting long-running containers. This makes it a similar scenario to Functions, where you are likely to have chosen the platform to meet your requirements for job processing rather than as a container hosting platform. Using containers with Batch is a way to make your development and deployment process more manageable, rather than being the reason you use the platform.
Azure Batch supports all the standard features of the service when using containers. It also offers a pre-fetch option to download container images before running, to reduce node start-up time.
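As a rough sketch of what this looks like with the azure-batch Python SDK, the snippet below adds a containerised task to an existing job. The account, job and image names are all placeholders, and it assumes the pool already exists on a container-enabled VM image (with the image list in its ContainerConfiguration providing the pre-fetch mentioned above).

```python
from azure.batch import BatchServiceClient
from azure.batch import models as batchmodels
from azure.batch.batch_auth import SharedKeyCredentials

# Authenticate against the Batch account endpoint (placeholder values).
credentials = SharedKeyCredentials("<batch-account>", "<account-key>")
client = BatchServiceClient(
    credentials, "https://<batch-account>.<region>.batch.azure.com"
)

# A task that runs its command line inside a container image rather than
# directly on the node. The pool's ContainerConfiguration can pre-fetch
# this image so nodes do not pull it on first use.
task = batchmodels.TaskAddParameter(
    id="process-chunk-001",
    command_line="python process.py --chunk 1",
    container_settings=batchmodels.TaskContainerSettings(
        image_name="myregistry.azurecr.io/batch-worker:latest"
    ),
)

client.task.add(job_id="my-container-job", task=task)
```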
When would you use Azure Batch: You have a job-based batch workload that needs the power of Azure Batch, and you want to use containers to improve your development and deployment process.
When would you not use Azure Batch: Any workload that does not fit the job-based approach of Azure Batch.
Container workloads - Azure Batch | Microsoft Docs
Standalone VM
It is still possible to create an Azure VM, install Docker (or any other container runtime) on it, and run your containers in that VM. This will work, but you lose all the benefits of using a managed service and need to deal with all the management, configuration and maintenance yourself. Generally, I would advise against this approach.
For dev scenarios where you just need to run a container quickly and tear it down again, or maybe run container build jobs (also consider ACR tasks), this could be of use.
When would you use a standalone VM: The main scenario where this would be a reasonable approach is where you need complete control of the underlying container runtime, or need to do some customisation that is not supported by any of the PaaS services. Another example might be a build agent, where you just need a container runtime to run your build process.
When would you not use a standalone VM: Most of the time! Generally, the PaaS services offer better options for nearly every scenario.
Orchestrators
Orchestrators provide more feature-rich but more complex environments for hosting multiple containers. If you are running a microservices-type application, using multiple containers with inter-container networking, or want to use some of the more advanced orchestration features, one of the following options might fit what you need.
Azure Kubernetes Service (AKS)
AKS is the fully-fledged Kubernetes implementation in Azure, delivered in a slightly odd PaaS/IaaS hybrid approach. The Kubernetes control plane is entirely PaaS and managed for you by Microsoft. However, the worker nodes are deployed as VM scale sets that you need to manage (mainly patching). Because scale sets are used, you gain the ability to customise the underlying hardware that runs your workloads.
AKS offers the full power of Kubernetes, so if you need complex orchestration, want to run any of the thousands of third-party Kubernetes based applications, create custom resource definitions, or use the advanced features of Kubernetes then this is the service for you. That said, I would recommend that you choose whether or not your application needs Kubernetes first, rather than making your application fit into Kubernetes because you want to use AKS.
AKS is a powerful solution for hosting containers, but it brings a lot of complexity. You will need to understand Kubernetes and how to both build and deploy applications to this service. That said, Microsoft is looking to reduce this complexity by introducing plugins/addons for popular services such as Pod Identity, DAPR and Flux. It is also possible to hook AKS up to ACI to support bursting scenarios with dynamic container creation.
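For comparison with the simpler services above, here is a rough sketch of creating a minimal AKS cluster with the azure-mgmt-containerservice Python SDK. Names, region and node sizes are placeholders, plain dictionaries are used in place of the SDK’s typed models, and a production cluster would need networking, identity and upgrade settings thought through properly.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

cluster = {
    "location": "westeurope",
    "dns_prefix": "demo-aks",
    # A managed identity for the cluster control plane.
    "identity": {"type": "SystemAssigned"},
    "agent_pool_profiles": [
        {
            # Worker nodes land in a VM scale set that you size (and patch).
            "name": "system",
            "mode": "System",
            "count": 3,
            "vm_size": "Standard_D2s_v3",
            "os_type": "Linux",
            "type": "VirtualMachineScaleSets",
        }
    ],
}

poller = client.managed_clusters.begin_create_or_update("my-rg", "demo-aks", cluster)
print(poller.result().provisioning_state)
```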
When would you use AKS: If your application needs the power of Kubernetes, requires the ability to run on VM scale set sizes defined by you, or needs to scale to substantial numbers of workers. Support for CRDs and full customisation of the services running in your cluster is something only AKS offers.
When would you not use AKS: If you’re just looking to run one or two containers, then AKS is overkill and will often add more complexity. In this scenario, you are better off looking at one of the other fully PaaS options. Even if you have more complex microservice-type requirements, it is worth considering Azure Container Apps (see below) to see if a more straightforward orchestrator approach would work. AKS pricing is based on the cost of the VMs used in the scale set, so a minimum amount of VM cost will need to be borne regardless of how small your workload is, which may make it unfeasible for small scenarios.
Managed Kubernetes Service (AKS) | Microsoft Azure
Azure Container Apps (ACA)
ACA is a new (but now GA) offering that aims to provide some of the benefits of AKS with less complexity. ACA is an orchestrator (under the hood, it is Kubernetes) but is a fully managed PaaS service with no requirement for the user to manage updates, patching and so on. ACA does not offer the same functionality as AKS and is more of a cut-down orchestrator, so it is critical to understand whether your application can run using the services it provides. For example, if you need Custom Resource Definitions or custom ingress controllers, ACA won’t work for you. However, if you’re running a complex microservice application built on DAPR, it may well do what you need.
ACA comes with several services built in, such as Envoy for ingress, KEDA for scaling, DAPR, blue/green deployments, secret storage and Log Analytics integration. This removes a lot of complexity if you are happy to use these services.
ACA is still a relatively new service, so features are still being added. It is important to understand what it can and cannot currently offer and whether this will work for your app. See more details here.
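As a hedged sketch of what an ACA deployment can look like with the azure-mgmt-appcontainers Python SDK, the snippet below creates a container app with external ingress and an HTTP scale rule that scales to zero when idle. It assumes an existing Container Apps environment, uses placeholder names, and passes plain dictionaries rather than the SDK’s typed models.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

client = ContainerAppsAPIClient(DefaultAzureCredential(), "<subscription-id>")

# Resource ID of an existing Container Apps environment (placeholder values).
env_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.App/managedEnvironments/my-aca-env"
)

container_app = {
    "location": "westeurope",
    "managed_environment_id": env_id,
    "configuration": {
        # Expose the app externally on the container's port 80.
        "ingress": {"external": True, "target_port": 80},
    },
    "template": {
        "containers": [
            {
                "name": "web",
                "image": "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest",
                "resources": {"cpu": 0.25, "memory": "0.5Gi"},
            }
        ],
        # Scale to zero when idle, out to 5 replicas on concurrent HTTP load.
        "scale": {
            "min_replicas": 0,
            "max_replicas": 5,
            "rules": [
                {"name": "http-rule", "http": {"metadata": {"concurrentRequests": "50"}}}
            ],
        },
    },
}

poller = client.container_apps.begin_create_or_update(
    "my-rg", "hello-containerapp", container_app
)
print(poller.result().provisioning_state)
```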
When would you use ACA: When you need some of the functionality of an orchestrator, and this fits within the services offered by ACA, then this service is a great solution that removes a lot of the management and maintenance complexity of AKS. ACA also provides significant benefits for applications that need to scale dynamically, with support for HTTP-based and KEDA-based scaling, including scaling to zero. ACA costs are per container rather than per VM, so combining this with scale to zero can keep costs down for smaller workloads.
When would you not use ACA: If you need Kubernetes functionality that is not exposed in ACA, or you need to use different services from the ones built into the platform. ACA makes a lot of choices about how your application should run and the services it should use. If this works for you, then great, but you may want to look at AKS if it does not. On the other hand, for simple workloads with one or two containers and little requirement for additional services, ACA is still likely to be overkill.
Azure Container Apps | Microsoft Azure
Azure Red Hat OpenShift (ARO)
OpenShift is a product from Red Hat that provides a layer on top of Kubernetes, adding management services and functionality designed to make it easier to run and develop against a Kubernetes- and Docker-centric environment. Azure Red Hat OpenShift is a service offering OpenShift running in Azure.
This is not a service I have had much experience with yet, so it is currently difficult to offer an opinion on how easy it is to use or the benefits it brings. The deployment of ARO is quite different to AKS, in that both master and worker nodes are deployed into the vNet and managed by the user through the OpenShift portal. These are deployed on VMs (rather than scale sets).
When would you use ARO: The benefit of ARO over AKS is the additional management and integration services provided on top of Kubernetes, so the choice to use this service comes down to whether those additional features provide a benefit to you. As with ACA, some choices about how the service is set up and managed are made for you by ARO, so you will need to make sure that your approach fits within them.
When would you not use ARO: If you don’t need the additional services that ARO brings to Kubernetes or to your container development process, then you are best sticking with AKS. ARO also has fairly significant hardware requirements and increased cost (as you are paying for the master nodes and the ARO licences).
As mentioned, I have limited experience with this service and so it is difficult to list all the pros and cons.
Azure Red Hat OpenShift – Kubernetes PaaS | Microsoft Azure
Service Fabric
Service Fabric is Microsoft’s homegrown orchestrator and powers many Microsoft services. In my view, Service Fabric has struggled to stay relevant as Kubernetes has taken over the world. Whilst the service is still available and supported, I would generally not recommend starting new projects with it unless you need some specific functionality that cannot be found elsewhere.
One of Service Fabric’s big selling points was the Actor framework built into the platform, but this can now be achieved on AKS with the introduction of DAPR.
When would you use Service Fabric: If you need some specific functionality only available in Service Fabric, or you have a project that is already built for this platform.
When would you not use Service Fabric: For most scenarios, Kubernetes (AKS, ACA) would be a better choice.
Overview of Azure Service Fabric - Azure Service Fabric | Microsoft Docs
Deprecated Services
Since the last version of this article, several services have been deprecated; these should not be used, and any existing workloads should be moved to other services.
Azure Container Service
Microsoft’s bet-hedging service, which supported multiple container orchestrators (DC/OS, Docker Swarm and Kubernetes). It was deprecated in 2020 and replaced with AKS.
AKS Engine
AKS Engine provided a way to generate the ARM templates needed to build your own, self-managed version of ACS/AKS on top of Azure. It was deprecated in favour of AKS for Azure customers, although it is still supported on Azure Stack until AKS goes GA on that platform.
Service Fabric Mesh
An attempt to create a fully managed, PaaS version of Service Fabric. This was retired in 2021 in favour of AKS.
Summary
Since the last version of this article, we have seen new services introduced, old ones retired, and existing services gain new features and move into GA. Azure now has a variety of container hosting options that meet most types of workloads. The table below summarises the services you may want to consider depending on your workload type.
| Use Case | Container Offering |
| --- | --- |
| Bursting Workloads | ACI, ACA, Azure Batch |
| On-Demand Workloads | ACI, ACA, Azure Functions |
| Dev/Test/PoC Workloads | ACI, Standalone VM |
| 24/7 Production Workloads | AKS, ACA, Azure Web App |
| Web Hosting | Azure Web App, AKS, ACA |
| Production Orchestration | AKS, ACA, ARO |
| Heavily Customised Orchestration | AKS, ARO |
I’m sure that over the next few years we will see more services come along, and improvements to the existing ones, which will mean this article needs updating yet again.