In this series of posts, we've been looking at the process we went through to migrate APIs that support one of our internal applications from Azure Functions to Azure Container Apps.
Part one covers the background to the project and parts two, three and four go into details about the changes that were needed to get the code running on ACA. Part five looks at what was needed to migrate our dev environment to the North Europe region after we found that Azure Container Apps wasn't available in UK South.
This final part looks back over the process and talks about what we learned along the way.
What did we achieve?
I explained our primary motivation for this change back in Part one: to determine whether moving to Azure Container Apps would eliminate the cold start issues we've had with Azure Functions running under the consumption plan, but at a lower price point than switching to a Premium Functions plan.
So, did it work?
In short, yes. Paying the idle pricing for the main API hosted in ACA is much cheaper than the cost of a Premium Function plan, and it does eliminate the cold start problems.
Would this always be the case? Probably not; note that this is for an application with a single API that needs to be always on. If you were looking at an application with multiple APIs, the cost benefit could be lost relatively quickly. This is because a Premium Functions plan can host multiple function apps, whereas with ACA the cost is per Container App. It also depends on the resources required for your container apps; in our use case, we required relatively small amounts of CPU and memory for our containers, but if you need more the price can ratchet up quickly.
So the answer to "should I use Azure Container Apps rather than Azure Functions" is, as always: "it depends...". You should absolutely consider it though!
Would we use Azure Container Apps again?
Absolutely. Although it's taken 5 blog posts to get here, the reality is that it was straightforward to move our existing app to ACA. For the kind of application we have here, with a small number of APIs, hosting in ACA is considerably simpler than other container hosting solutions - I'm looking at you, Kubernetes!
It's still very early days for ACA, so I'm excited to see where things go next. My biggest gripe right now is that there's no longer-term roadmap to tell us when specific features are likely to land. On the positive side, there is a shorter-term view which shows you what's likely to land in the near future - you can see that here. There's also a Discord in which team members are very active, and you can follow the team on Twitter @AzContainerApp.
And obviously, now we've containerised our application, we've opened the door to other container-based hosting options should we need them, including those on other platforms - such as Amazon Elastic Container Service.
Would we use Azure Functions again?
Absolutely yes to this too! As I mentioned back in part one, there are plenty of scenarios where Azure Functions are a great fit. Processing queues in the background, timer-triggered functions, and basic workflows using Durable Functions are all great use cases for Azure Functions. And, if you have multiple functions as part of your application, then the Premium Functions plan may prove to be a simple and cost-effective way to host them.
Which is better?
Well... it depends!
The first thing to remember is that fundamentally, ACA and Azure Functions are quite different hosting platforms. Because Azure Container Apps is a container hosting platform it can essentially host anything you can put in a container. So is there ever a need to use something like Azure Functions again?
In short, yes. Although you can do anything in ACA that you can do in Azure Functions (especially since you can choose to host the Functions runtime in a container and run it in ACA!), the Functions platform has some areas where it really shines. Let's have a look at a few use cases and compare the two.
| Use case | Azure Container Apps | Azure Functions | Other options |
| --- | --- | --- | --- |
| Single HTTP API | Potentially good fit, depending on your requirements around response times and your usage patterns; cold start may be an issue if you scale to zero. | Potential fit; cold starts under the Consumption plan can be an issue (the problem that prompted this series), and avoiding them means paying for a Premium plan. | Azure Container Instances |
| Multiple HTTP APIs | Potential fit, depending on resource requirements and number of APIs. Per-container pricing may result in higher costs than other options. | Potential fit, especially if you can use a Premium plan or App Service plan. Good fit if you don't care about cold start times. | Azure Kubernetes Service |
| Processing messages from a queue | Potential fit. May work out cheaper than standard App Service depending on resource requirements. | Good fit. Automatic scale out and the number of free requests make this a cost-effective way to handle low to medium volumes of messages. | Azure Container Instances, App Service (via a WebJob) |
| Scheduled tasks | Reasonable fit, using the Dapr Cron binding or similar. The team is also working on support for Jobs. | Good fit, using timer-triggered functions. | App Service (via a WebJob) |
| Workflows | Average fit; Dapr can be used to host Logic Apps workflows in a container app, but this might prove excessively complex. | Good fit, using Durable Functions. | |
There are a number of other differentiators that might push you down one route or another. Here are a few of the more obvious ones:
Do you need vNet integration?
At the moment, vNet integration is not available in an Azure Functions Consumption plan. If you need this, you'll need a Premium or Dedicated plan, to use an App Service Environment, or to self host. See https://learn.microsoft.com/en-us/azure/azure-functions/functions-networking-options for more information.
Azure Container Apps run in a vNet by default; when you create your Container Apps Environment, you can add it into an existing vNet or let the platform create one for you.
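As a rough sketch, here's what adding an environment into an existing vNet looks like with the Azure CLI. All the resource names here are placeholders, and CLI flags may change as ACA evolves:

```shell
# Create a Container Apps environment inside an existing vNet by passing in
# the subnet to use for the environment's infrastructure components.
az containerapp env create \
  --name my-aca-env \
  --resource-group my-rg \
  --location northeurope \
  --infrastructure-subnet-resource-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aca-subnet"

# Omit --infrastructure-subnet-resource-id and the platform
# creates a vNet for you instead.
```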
What are your scalability requirements?
In an Azure Functions Consumption plan, if you're using Windows, the platform will provision up to 200 instances of your function app based on load. If you're using Linux, you only get a maximum of 100.
On the Premium plan the ceiling is lower - up to 100 instances regardless of platform - but you have the option of choosing a plan that gives you more resources per instance. See https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale for more information.
Within those instances, there are also restrictions on CPU and memory. Under an Azure Functions consumption plan, you're limited to 1.5GB of memory and 100 Azure Compute Units (ACUs) per instance. When you move up to the paid plans, these limits change drastically - with the Premium and Dedicated plans, you can provision up to 14GB of RAM and 840 ACUs per instance, although it won't be cheap!
On the Container Apps side, control over scaling is a little more fine-grained. You can specify scale triggers based on network traffic, CPU or memory, or the number of messages being received from a source such as Azure Service Bus. You can also set limits on scale, with a maximum of 30 instances per container app.
CPU and memory options are also a little more restrictive than the paid-for Azure Functions plans. Each container can specify how many vCPUs are allocated in 0.5 vCPU increments up to 2 vCPUs. Similarly, you can specify up to 4GB of memory per container.
The Container Apps team are currently working on increasing these quotas and I'd expect them to continue to change as ACA matures.
It's also worth noting that the resources you specify for a container app are shared between all containers running in that app - so if you're using Dapr, the resources will be shared by the Dapr sidecar.
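To make the scaling and resource settings above concrete, here's a sketch of configuring them through the Azure CLI. Names and values are placeholders, and the limits quoted in this post may have changed by the time you read this:

```shell
# Configure replica limits, an HTTP-based scale rule, and per-container
# CPU/memory for an existing container app.
az containerapp update \
  --name my-api \
  --resource-group my-rg \
  --min-replicas 1 \
  --max-replicas 10 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50 \
  --cpu 0.5 \
  --memory 1.0Gi
```

Setting `--min-replicas 1` is what keeps one instance always on (at idle pricing) and avoids cold starts; set it to 0 if you'd rather scale to zero and accept them.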
Ultimately, all of this means you'll need to pay attention to your non-functional requirements around scalability before making a decision on which platform to adopt.
Note: The numbers listed above were taken from the Microsoft Learn website in December 2022. When making decisions around scalability, ensure you're using the latest numbers as they are all subject to change.
Does your architecture require sticky sessions when your app scales out?
Sticky sessions, also referred to as Client Affinity, are a feature of many application gateways/load balancers (including the one built into App Services) that ensures users are always served by the same instance of a scaled-out app. If your app is currently scaled out to 4 instances and a new user comes along, their first request will be directed to a specific instance, and Client Affinity will then ensure that all of their future requests are also directed to that instance.
Whilst Azure Functions can be forced to support Client Affinity, its programming model is stateless, so it's not really intended to be used in that way. In fact, a few years back there was a bug that was causing function apps to be created with session affinity enabled by default; the response to this was to prevent that happening and, at the same time, disable Client Affinity on all existing function apps. (See the issue on GitHub for more detail.)
Azure Container Apps doesn't support sticky sessions yet but it's on the roadmap. If you think your application may require this feature and you can afford to wait, Container Apps will be the better choice. If you can't wait then there are other services on the platform that support it now, for example App Services or Azure Kubernetes Services.
Do you want to use Dapr?
At the moment, Dapr is only available to Azure Functions if you're self-hosting. However, Dapr is a first-class citizen in the world of Container Apps, with support built into the platform. It's also fully supported in AKS, so if you think your app might grow to the point where you need to make the step up to Kubernetes, then it's a safe bet.
Do you need long running tasks?
For an Azure Function running under a consumption plan, execution time for a single function is capped at 10 minutes. If you upgrade to a Premium or Dedicated plan, you can remove this constraint.
Since you can run (almost) whatever you like inside your container, there are no restrictions on long-running tasks in Azure Container Apps. The "almost" in that sentence refers to the two limitations of containers in ACA. The first is that your containers have to be Linux/AMD64 based - nothing Windows or ARM based is allowed. The second is that you're not allowed to run privileged containers. A privileged container is one which has the privileges of root on the host system and so can access host resources that would not normally be available. This capability exists only to allow a small number of special use cases - the main example being running Docker within Docker - and isn't something you would normally need.
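The Linux/AMD64 restriction is worth keeping in mind if you build images on an ARM machine (an Apple Silicon Mac, for example), where Docker defaults to building ARM images. A sketch of targeting the right platform explicitly, with a placeholder image name:

```shell
# Build an AMD64 image regardless of the host architecture,
# so the resulting image can run in Azure Container Apps.
docker buildx build --platform linux/amd64 -t my-api:latest .
```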
Having migrated this application to container-based hosting, there are a number of directions we'd like to explore. The first of these, which I'll be blogging about over the coming weeks, is investigating what Dapr can add to the mix; I've already mentioned authentication and the Dapr Cron binding, but Dapr is continually evolving too, so there's sure to be more.
Thanks for reading!
If you've made it this far, thanks for reading this blog series - I hope you've enjoyed it. If you've got any questions or would like to discuss anything I've talked about, please feel free to leave a comment below or reach out to me on Twitter @jon_george1.