By Jonathan George, Software Engineer IV
Bye bye Azure Functions, Hello Azure Container Apps: Build and deployment pipelines, and our first big problem

As I discussed in part one of this series, we've decided to take an internal application whose APIs are currently running as Azure Functions and move them to be hosted in Azure Container Apps. Part one explains the reasons for this, which are mainly to do with cold start issues and the cost of addressing those on the existing platform. Part two covers our initial step of migrating our existing Azure Functions apps to run as ASP.NET Core applications.

In this part, we'll look at how we modified our build and release pipelines to deploy to Azure Container Apps, and discuss the first major issue we encountered.

The starting point

We use Azure DevOps for our internal projects, as well as for build and release/deployment of our various open source projects. As a result, we already had three pipelines set up for our project.

CI build (back end)

Builds the code in Release mode and executes the tests. Assuming success, it runs dotnet publish for each of the Functions apps and then packages the results into zip files. It also packages the Admin CLI tool and the deployment scripts, all of which are then published as build artefacts. This means that every build results in 5 artefacts, as shown in this image:

Original build artefacts view, showing the 5 artefacts that the CI build produced prior to the migration

Release (infrastructure and back end)

This is a multi-stage pipeline which uses a combination of PowerShell and Bicep templates to deploy to our dev and production environments.

Release pipeline showing the three stages of the release

When it's run, we can point it at any branch and it will pull the artefacts from the latest run of the CI build against that branch (it's also set up to allow us to specify a specific CI build to pull artefacts from). This allows us to easily deploy artefacts from branches to our dev environment for testing purposes. The first step downloads the published artefacts from the specified CI build and then they are used to deploy both infrastructure and code to the target environment. As can be seen from the above diagram, deploying to the prod environment requires specific approval before it can run.

This snapshotting technique ensures that both the artefacts used to do the deployment and the code that's deployed will always be the same for both dev and production environments. And obviously, should we introduce additional environments at a later date (e.g. by adding a UAT environment, or introducing deployment rings), these could also be added to the pipeline.

CI build (front end)

We've got a separate build and release process for our Angular front end. Our experience of working with Single Page Applications, regardless of framework, is that this split is useful: a front end build and release will likely be far quicker than a back end one, so separating it out allows simple UI changes to be iterated on and pushed to production far faster than if a full build were required every time.

Similarly to the back end CI build, the front end CI build publishes a build artefact containing the transpiled and bundled code, ready for deployment.

Release (front end)

As with the infrastructure and back end release, this pipeline takes the build artefact published by the specified CI build and then pushes the files to an Azure Static Web App in the target environment. There's an ADO task for this which makes things really simple.

Because this setup makes it easy to roll back to previous versions if needed, and because the project is currently for internal use only, we only have two environments to deploy to - Dev and Production.

What do we need to change?

So, with our starting point established, how will things need to change in order to deploy to Azure Container Apps?

In order to get our applications deployed, we'll need the following steps:

  1. Build and test the code
  2. Create a container image for each API
  3. Push the container images to a container registry
  4. Deploy the images to the appropriate Azure Container App instances
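The last three of those steps map onto standard docker and Azure CLI operations. As a rough sketch only - the image, registry and resource names below are hypothetical, and our actual pipelines script this rather than running it by hand:

```shell
# 2. Create a container image for one of the APIs
docker build -t myregistry.azurecr.io/careercanvas-api:1.2.3 .

# 3. Push the container image to the registry (assuming Azure Container Registry)
az acr login --name myregistry
docker push myregistry.azurecr.io/careercanvas-api:1.2.3

# 4. Point the Azure Container App at the new image
az containerapp update \
  --name careercanvas-api \
  --resource-group my-resource-group \
  --image myregistry.azurecr.io/careercanvas-api:1.2.3
```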

We'll also need to change our infrastructure deployment to:

  1. Remove the existing Azure Function Apps
  2. Deploy an Azure Container App Environment (which also requires a Log Analytics workspace)
  3. Deploy three Azure Container Apps

(In fact, the final step on each of those lists is the same thing, as we specify the container image to use as part of the Bicep resource for the container app.)
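We drive the infrastructure deployment from Bicep modules, but for a sense of what's involved, the equivalent az CLI operations look roughly like the following (all names here are made up for illustration):

```shell
# A Log Analytics workspace is a prerequisite for the Container Apps Environment;
# it can be linked to the environment via --logs-workspace-id/--logs-workspace-key
az monitor log-analytics workspace create \
  --resource-group my-resource-group \
  --workspace-name my-logs

# The environment that will host the container apps
az containerapp env create \
  --name my-aca-env \
  --resource-group my-resource-group \
  --location northeurope

# One of the three container apps, specifying the image to deploy
az containerapp create \
  --name careercanvas-api \
  --resource-group my-resource-group \
  --environment my-aca-env \
  --image myregistry.azurecr.io/careercanvas-api:1.2.3 \
  --ingress external \
  --target-port 8080
```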

I'm not going to cover the infrastructure changes in any detail here, as they are relatively straightforward.

There were two questions we had to answer at this stage. The first was determining which part of our build and release process should be responsible for each of the steps in the first list. Specifically:

  • which pipeline will be responsible for creating the container images?
  • which pipeline will be responsible for pushing the container images to the image registry?

The second was how we'd actually create those container images.

What happens where?

It's pretty obvious that the deployment step belongs in the release pipeline. But where best to put the other two?

We identified the following options:

| Option | CI pipeline | Release pipeline |
| --- | --- | --- |
| 1 | Publishes compiled code as build artefacts | Uses build artefacts to create container images, pushes them to the registry, deploys them to ACA |
| 2 | Creates container images and publishes them as build artefacts | Uses build artefacts to push images to the registry, then deploys them to ACA |
| 3 | Creates container images and pushes them to the registry | Deploys the images to ACA |

Let's have a look at the advantages and disadvantages of each option.

Option 1: The Release pipeline is responsible for image creation, pushing to the registry and deploying

Advantages:

  • The CI pipeline can remain as is
  • We will only create new images and push them to the registry when we do a release

Disadvantages:

  • We will only know if there is a problem creating the images when we do a release

Option 2: The CI pipeline is responsible for creating the images, the Release pipeline pushes them to the registry and releases them

Advantages:

  • Any issues with image creation will be discovered during the CI build
  • We will only push images to the registry when we do a release

Disadvantages:

  • Additional steps are required over the other options to save the images as archives for publishing as build artefacts, and then load them back prior to pushing to the registry
  • Additional storage is required for build artefacts (currently around 265MB per image, so around 800MB per build run)
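To make those extra steps concrete, option 2 would have involved something like the following (sketched with a hypothetical image name):

```shell
# In the CI pipeline: save the built image as a tar archive so that it
# can be published as a build artefact
docker save -o careercanvas-api.tar myregistry.azurecr.io/careercanvas-api:1.2.3

# In the Release pipeline: load the archive back into the local image
# store before pushing it to the registry
docker load -i careercanvas-api.tar
docker push myregistry.azurecr.io/careercanvas-api:1.2.3
```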

Option 3: The CI pipeline is responsible for creating images and pushing them to the registry, the Release pipeline does the release

Advantages:

  • Any issues with image creation will be discovered during the CI build

Disadvantages:

  • CI build will potentially take longer
  • Requires changes to both CI and Release pipelines
  • Will need a means to prevent pushing an image on every build, to avoid storing an excessive number of unneeded images in the container registry

Final decision

As we discussed this, one thing that stood out was that we already have something extremely similar to option 3 for our open-source Corvus projects.

These projects all have CI builds which build and test the code, then package it and push it to NuGet. This is conceptually equivalent to building the container and pushing it to a container registry.

Additionally, these builds only do the final step of pushing to NuGet if one of two things are true:

  • The pipeline is being run against a tag (we tag our versions, so the assumption is that if we're building against a tag, it's because we want the commit associated with that tag to constitute a release)
  • A specific pipeline variable, ForcePublish, has been set to true

This means that we automatically get releases for tags, and can request releases for active branches where we want to test the resulting NuGet package elsewhere prior to finalising and merging the branch.

We decided to adopt the same approach for our new use case. So our CI build now builds the container images on each run, but only pushes them to the registry if the commit being built is tagged or we've explicitly requested a push. It then publishes a file containing the tag that's been assigned to the container images so that they can be identified as part of the release pipeline.
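The gating logic can be sketched as a small shell function. This is illustrative only - the function and variable names below are made up, not our actual pipeline variables, though the branch ref format matches what Azure DevOps exposes via Build.SourceBranch:

```shell
# Decide whether the CI build should push container images to the registry.
#   $1: the source ref being built (e.g. refs/tags/1.2.0 or refs/heads/main)
#   $2: the value of the ForcePublish pipeline variable ("true" to force a push)
should_push_images() {
  case "$1" in
    refs/tags/*) return 0 ;;   # tag builds always publish
  esac
  [ "$2" = "true" ]            # otherwise only when explicitly requested
}

should_push_images "refs/tags/1.2.0" "false" && echo "tag build: push"
should_push_images "refs/heads/main" "false" || echo "branch build: skip"
should_push_images "refs/heads/feature-x" "true" && echo "forced: push"
```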

Container image creation

The second decision we had to make was how to actually create the container images. As with the previous decision, we had three choices.

  1. Build and dotnet publish the code on the agent, then copy the resulting files into the ASP.NET Core base image to produce our container image.
  2. Containerise the build and dotnet publish step using the .NET SDK base image, then copy the resulting files into the ASP.NET Core base image to produce our container image.
  3. Containerise the build and dotnet publish step, then publish the resulting image as-is.

Option 3 is not ideal as we'd end up publishing more than we need to. Additionally, we'd end up publishing an image based on the .NET SDK base image rather than the ASP.NET Core base image. The latter is optimised for ASP.NET hosting, so this route would mean we'd need to reproduce those optimisations ourselves.

Option 2 is what you get by default if you add Docker support to a project using Visual Studio. However, we decided to go with option 1 as it meant the fewest changes to our existing process. Had we gone with option 2, we'd have needed to do additional work to extract test output and coverage reporting from the build container. It would also have added time to the build process, as it would require downloading two base images for each build rather than one (we use hosted build agents, so we don't have the luxury of those images being available from one build to the next).

So our image creation process now replaces the old CI build step where we zipped up the output for publishing as a build artefact. Instead, each of the APIs now has a Dockerfile that looks like this:

FROM mcr.microsoft.com/dotnet/aspnet:6.0
ARG Configuration=Release
WORKDIR /app
COPY ./bin/${Configuration}/net6.0/publish .
ENTRYPOINT ["dotnet", "CareerCanvas.Host.dll"]

As you can see, we've got a couple of lines to cater for the ability to override the build configuration but otherwise the file is pretty simple.
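For example, the Configuration build argument can be overridden at image build time (the image name here is hypothetical):

```shell
# Default: packages the Release build output
docker build -t careercanvas-host .

# Override to package a Debug build instead
docker build --build-arg Configuration=Debug -t careercanvas-host .
```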

And then we hit a problem...

It was during the process of updating the build and deployment scripts that we hit our first problem. Specifically, once we'd added Bicep modules to deploy the Azure Container Apps Environment, we found that ACA wasn't actually available in the region our app was currently deployed into, UK South.

At the time, ACA was available in North and West Europe, but not in either of the UK regions. (At the time of writing, only a few weeks after we went through this migration process, ACA does now support UK South, but we had no visibility of that timeline when we made our decision.)

This left us with a few choices:

  • Abandon the migration effort completely, and come back to it once ACA was available in UK South.
  • Deploy the containerised APIs to North Europe and leave the rest of the resources where they were.
  • Move the existing resources to North Europe so everything could be deployed into the same region.

If not for the fact that this process was as much about learning as anything else, we would likely have decided to abandon the migration effort until ACA became available in UK South. There's no easy way to move resources like storage accounts between regions; you need to back them up, delete them, redeploy them in the new region and then restore the data.

We definitely didn't want to have the code and data split across two regions as this would have a significant impact on performance.

But we were keen to push on with our migration, so we decided that once we'd got the APIs up and running in ACA we'd move the remainder of the resources to North Europe as well. We'll cover that process in part 5 of this series.

And then we hit another problem...

At this point we were feeling quite good about things. We'd got our build and deployment working, the APIs were up and running in ACA and we'd seen the site running from Static Web Apps, talking to the new APIs.

However, we quickly encountered another issue to do with CORS support and Authentication. In the next part of the series, we'll go into the details of that problem and how we dealt with it.


Jonathan George

Software Engineer IV


Jon is an experienced project lead and architect who has spent nearly 20 years delivering industry-leading solutions for clients across multiple industries including oil and gas, retail, financial services and healthcare. At endjin, he helps clients take advantage of the huge opportunities presented by cloud technologies to better understand and grow their businesses.