By James Broome, Director of Engineering
Azure Analysis Services - how to save money with automatic shutdown

I've written a lot recently about integrating Azure Analysis Services into custom applications, arguing that there are many reasons why you might want to do this.

As a stand-alone, cloud-hosted, PaaS data analytics platform, it allows massive amounts of data to be queried for ad-hoc analysis. But there's nothing stopping you from harnessing that power in your own analytical solutions, and with support available through client SDKs, PowerShell cmdlets and REST APIs, the opportunities for integration are endless.

However, this processing power comes at a price - and whilst "expensive" is a relative term, it's fair to say that Azure Analysis Services is one of the "pricier" services in the Azure platform. In contrast to other services, there's no free tier available, and the cheapest developer tier (recommended for evaluation, development, and test scenarios) still comes in at ~£70 per month.

Whilst that's unlikely to break the bank, if you're building mission-critical applications on top of Azure Analysis Services you're expected to use one of the standard tiers, which start at ~£440 per month - and that's still only an S0, so it starts to feel like real money. If you need large amounts of memory (remember, your entire data model is compressed and stored in memory), if you have complex data processing that needs higher QPUs, or if your usage requires query scale-out (i.e. lots of concurrent users), then the monthly cost jumps into the £1,000s very quickly.

For up to date pricing, in your local currency, take a look at the pricing documentation, or use the pricing calculator for Azure Analysis Services.

Of course, for production systems, this cost can be justified against the value that it's providing. However, as with any other software development environment, it's not just the "prod environment" that you need to consider. When you move away from treating Azure Analysis Services as a standalone BI platform and start thinking about it as part of a wider analytics solution, you probably need development, integration and test/QA environments too.


Where do the developers push their latest model updates to for testing? Where's the stable, consistent environment that the QA team can use to sign off against business requirements? Where's the demo/pre-prod environment containing anonymised data that the business can validate against? Where can we run load tests to prove that the DAX expressions that worked fine over a few thousand rows of data still perform when there are billions?

Not all environments need to be created equal, but the point is that we've very quickly moved from modelling the price of a single environment to doing so for many, with a service that's relatively expensive to begin with.

Pay only for what you use

As a PaaS service, Azure Analysis Services benefits from the pay-as-you-go pricing model - no upfront costs, no termination fees, and pay only for what you use. That last point makes a huge difference here when you consider that not all of the "supporting" environments need to be up and running 24/7.


If your development team are co-located (or at least all in the same time zone), and only need access to their instances during office hours on weekdays, then you can achieve around a 75% saving on that monthly cost: 8 hrs x 23 working days is 184 hours, vs. 24 hrs x 31 days, which is 744 hours - roughly a quarter of the always-on usage. A similar mindset can be applied to other environments by looking at their own usage patterns, the net result being a significant reduction in your consumption and associated billing.
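To make that concrete, here's the arithmetic as a quick PowerShell sketch. The hourly rate is a made-up placeholder, so substitute the real rate for your tier and region:

```powershell
# Hypothetical hourly rate - substitute the real rate for your tier and region
$hourlyRate = 0.60                          # e.g. ~£440 / 744 hrs for an S0

$officeHours = 8 * 23                       # 8 hrs/day x 23 working days = 184 hrs
$alwaysOn    = 24 * 31                      # 24 hrs/day x 31 days = 744 hrs

$saving = 1 - ($officeHours / $alwaysOn)    # ~0.75, i.e. around a 75% saving

"Office hours only: {0:N2}/month" -f ($officeHours * $hourlyRate)
"Always on:         {0:N2}/month" -f ($alwaysOn * $hourlyRate)
"Saving:            {0:P0}" -f $saving
```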

Automating the cost savings

The good news doesn't stop there - Microsoft make the process of switching Azure Analysis Services on and off very easy by providing Pause and Resume functionality. When an instance is paused, your model definitions, permissions and settings are all retained, but the service is unavailable for use and incurs no charges. And it's a simple button-click in the portal to get things back up and running again.

Additionally, pausing and resuming instances can be scripted using PowerShell, via the Suspend-AzureRmAnalysisServicesServer and Resume-AzureRmAnalysisServicesServer cmdlets in the AzureRM.AnalysisServices module, which can be installed from the PowerShell Gallery.
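As a minimal sketch, pausing and resuming a server looks something like this (the server and resource group names are placeholders):

```powershell
# One-off: install the module from the PowerShell Gallery
Install-Module -Name AzureRM.AnalysisServices

# Sign in to Azure
Login-AzureRmAccount

# Pause the server - no charges are incurred whilst it's paused
Suspend-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "my-resource-group"

# ...and resume it when it's needed again
Resume-AzureRmAnalysisServicesServer -Name "myaasserver" -ResourceGroupName "my-resource-group"
```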

And if it can be scripted, it can be automated - either by using an Azure DevOps pipeline for orchestration (e.g. using a timer trigger and an Azure PowerShell task), using an Azure Automation runbook, or your own cron/scheduler of choice. There's no right or wrong way to do this, and the preferred option should be the one that makes the most sense in the wider context of your automation/DevOps landscape.
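For example, a runbook or scheduled script might take the desired action as a parameter and check the server's current state before acting, so the schedule can safely re-run. Here's a sketch with placeholder names, assuming the server object's State property reports "Paused" when the instance is paused:

```powershell
param(
    [Parameter(Mandatory = $true)]
    [ValidateSet("Pause", "Resume")]
    [string] $Action,

    # Placeholder names - substitute your own server and resource group
    [string] $ServerName = "myaasserver",
    [string] $ResourceGroupName = "my-resource-group"
)

# In an Azure Automation runbook, sign in with the Run As account first;
# in an Azure DevOps Azure PowerShell task, the context is already authenticated.

$server = Get-AzureRmAnalysisServicesServer -ResourceGroupName $ResourceGroupName -Name $ServerName

if ($Action -eq "Pause" -and $server.State -ne "Paused") {
    Suspend-AzureRmAnalysisServicesServer -ResourceGroupName $ResourceGroupName -Name $ServerName
}
elseif ($Action -eq "Resume" -and $server.State -eq "Paused") {
    Resume-AzureRmAnalysisServicesServer -ResourceGroupName $ResourceGroupName -Name $ServerName
}
```

Scheduling two runs per weekday - a "Resume" in the morning and a "Pause" in the evening - then gives you the office-hours pattern described above.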

Conclusion

Azure Analysis Services provides an enterprise-grade analytical platform with massive scale and flexibility. But, as one of the more expensive services in the Azure platform, consideration should be given to cost management, especially in multi-environment ALM scenarios. However, as a pay-as-you-go service, it's possible to massively reduce the running costs once usage patterns are understood, and this post has shown that it's easy to automate this using PowerShell and orchestration tools like Azure DevOps.

James Broome

Director of Engineering

James has spent 20+ years delivering high quality software solutions addressing global business problems, with teams and clients across 3 continents. As Director of Engineering at endjin, he leads the team in providing technology strategy, data insights and engineering support to organisations of all sizes - from disruptive B2C start-ups, to global financial institutions. He's responsible for the success of our customer-facing project delivery, as well as the capability and growth of our delivery team.