By James Broome, Director of Engineering
Testing Power BI Reports using SpecFlow and .NET

In my previous post, I outlined an approach that endjin has used successfully to test Power BI Dataflows - adding quality gates and building confidence in large and complex Power BI solutions.

Despite Power BI solutions being inherently difficult to test, it's important to validate business rules and security boundaries, and to ensure that quality doesn't regress over time as the solution evolves. In this post, I'll discuss the approach we have taken to test the reports themselves, by validating the calculations and data in the underlying tabular models.

Testing Reports?

As a reminder, there's no "official support" or recommended approach from Microsoft on this, so the approach discussed is based on endjin's experience and recommendation. There are no available testing frameworks for Power BI reports, we have no control over how the reports are rendered in the UI, and, at first glance, the report business logic is encapsulated within the underlying Power BI service.

However, changes in the Power BI service last year made it possible to connect to the underlying tabular model that powers the reports. This is possible because Power BI is built on top of Analysis Services, and the general direction of the team and product is that Power BI will eventually become a superset of Azure Analysis Services. So, over time, more and more Analysis Services features are becoming available in Power BI - last month a public preview of read/write XMLA endpoints was announced, building on the initial read-only connectivity.

The intention is that this enables a variety of tooling integrations - 3rd party profiling tools, development and deployment tooling - and test frameworks are no exception. If we can connect to the tabular model using client SDKs then we can treat it like a "normal" Analysis Services database and execute DAX queries to validate that things are as we expect.

Before we go any further with this approach, there are a couple of big caveats to point out:

  1. We're testing the tabular model only and not the visualisation layer. By validating the measures, calculated columns and data modelling are working as we expect, we're effectively testing "the business logic" in our data application, but not the "presentation layer".
  2. We don't have control over the actual DAX queries that get executed by the visualisations when the report is loaded, so any tests we execute are merely "simulations" of what the report is actually doing. However, it is possible to intercept the DAX that a report executes, so those queries can be used in the tests themselves, keeping the simulation as close to the real thing as possible.

That said, even with these caveats, we've found that there's huge value in writing and running tests over the tabular model to test the business logic that's been added through the data model. Whilst the visualisations themselves are also important - the right visualisation, in the right position, showing the right data points - having confidence that the calculations behind the data points are correct is what really adds confidence to a data solution.

That's not to say UI testing Power BI reports is impossible - techniques such as bitmap comparison can be applied to finalised reports to ensure that unexpected changes don't appear and, although crude, can prove effective. However, for the purposes of this post, the approach focuses on testing at the tabular model level, so the caveats above still apply.

The tabular model

So, the tabular model behind a Power BI report is effectively an Analysis Services database. When opening a .pbix file in Power BI Desktop, a local instance of Analysis Services is started automatically. The easiest way to find your local instance when using Power BI Desktop is to use DAX Studio - the status bar in the bottom right-hand corner shows the localhost:1234 server info (the port changes each time Power BI Desktop starts), which you can use to make an Analysis Services connection in any 3rd party tooling, or in a .NET test project using the ADOMD.NET SDK.

When the report is published into a Power BI Workspace, the equivalent Analysis Services instance can also be connected to using the XMLA endpoint. The restriction is that the Workspace needs to have a capacity assigned to it - either Power BI Premium, or Power BI Embedded - and the capacity needs to be up and running. If so, the XMLA endpoint is available and the details can be found in the Workspace Settings panel, on the Premium tab, under Workspace Connection, e.g. powerbi://api.powerbi.com/v1.0/myorg/workspace-name.
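For reference, the two connection strings look something like the sketch below - the port, dataset name and authentication details are illustrative placeholders, and real XMLA connections will also need credentials (e.g. Azure AD sign-in or a service principal), omitted here for brevity:

// Local tabular model hosted by Power BI Desktop - the port is whatever
// DAX Studio reports for your local instance.
var localConnectionString = "Data Source=localhost:1234";

// Published dataset via the XMLA endpoint - Initial Catalog is the dataset
// name within the workspace.
var xmlaConnectionString =
	"Data Source=powerbi://api.powerbi.com/v1.0/myorg/workspace-name;" +
	"Initial Catalog=Weekly Sales";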

Executing queries

In either of these scenarios, a connection to the Power BI tabular model can be made, meaning that a DAX query can be executed. I've written previously about how to do this in .NET over an Azure Analysis Services database, and the approach is exactly the same for Power BI, using the ADOMD.NET SDK.
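As a minimal sketch - assuming the local Power BI Desktop instance described above, and an illustrative measure name - executing a DAX query with the ADOMD.NET SDK looks something like this:

using Microsoft.AnalysisServices.AdomdClient;

// Connect to the local tabular model and execute a DAX query, reading the
// results back row by row. The connection string and query are illustrative.
using var connection = new AdomdConnection("Data Source=localhost:1234");
connection.Open();

using var command = new AdomdCommand(
	"EVALUATE SUMMARIZECOLUMNS('Date'[Day], \"Order count\", [CountOfOrders])",
	connection);

using var reader = command.ExecuteReader();
while (reader.Read())
{
	Console.WriteLine($"{reader.GetValue(0)}: {reader.GetValue(1)}");
}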

Executable specifications

Now that we know that we can connect to and query the tabular model from .NET, the final question is how to structure the tests themselves.

We prefer and recommend executable specifications rather than unit tests – describing the behaviour of the system in easily understandable business language – and we can apply this approach to testing Power BI reports quite easily. In a .NET test project, this probably means using SpecFlow – a .NET test runner implementation of the BDD Gherkin language.

Taking a "feature-per-report" approach, an example specification for a Weekly Sales report might look something like this:

Feature: Weekly sales
    In order to understand if we're meeting our sales targets
    As a sales manager
    I need to see weekly sales figures for the year to date

Background: 
	Given I am logged in as 'Sales Manager'
	And I am viewing the 'Weekly Sales' report
	And there is the following 'Orders' data: 
	| OrderId | CustomerId | OrderDate  | TotalAmount |
	| 1       | 1          | 2020-01-01 | 100.00      | 
	| 2       | 2          | 2020-01-01 | 50.00       | 
	| 3       | 3          | 2020-01-03 | 750.00      | 
	| 4       | 1          | 2020-01-04 | 110.50      | 
	| 5       | 4          | 2020-01-04 | 20.00       |

Scenario: Orders by day chart displays correctly
	Given I have made the following filter selection:
	| Filter table name | Filter value  |
	| Date              | [Year] = 2020 |
	When I view the 'Daily Orders' chart
	Then the data should be displayed as follows:
	| Date       | Order count | Total amount |
	| 2020-01-01 | 2           | 150.00       |
	| 2020-01-02 | 0           | 0.00         |
	| 2020-01-03 | 1           | 750.00       |
	| 2020-01-04 | 2           | 130.50       |

Looking at this in a bit more detail, the Background section does the following in the associated C# steps behind the feature:

Given I am logged in as 'Sales Manager' sets up the user context for the tests. This might add a filter expression to the DAX expression that will ultimately be executed, or select the relevant username/password values to use in the connection string. Or, Row Level Security rules defined in the tabular model might be used as well, by specifying the Roles parameter when connecting.
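As a sketch of what the step binding behind this might look like (the binding class name and the decision to stash the role in the scenario context are assumptions for illustration, not endjin's actual implementation):

using TechTalk.SpecFlow;

[Binding]
public class WeeklySalesReportSteps
{
	private readonly ScenarioContext scenarioContext;

	public WeeklySalesReportSteps(ScenarioContext scenarioContext) =>
		this.scenarioContext = scenarioContext;

	[Given(@"I am logged in as '(.*)'")]
	public void GivenIAmLoggedInAs(string roleName)
	{
		// Stored so that later steps can build a connection string such as
		// "Data Source=...;Roles=Sales Manager", exercising any Row Level
		// Security rules defined against that role in the tabular model.
		this.scenarioContext["Role"] = roleName;
	}
}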

And I am viewing the 'Weekly Sales' report sets up the report name, which might be used to target the specific piece of DAX logic that we want to test (there might be similarly named charts across various reports, so this adds another level of context to the tests).

And there is the following 'Orders' data adds a validation step before the scenarios are executed that the data in the model is as we expect. Using the read/write XMLA endpoint, in theory we could use the tests to push data into the underlying data source and trigger a refresh of the report model. However, this may not be appropriate depending on the complexity of the solution architecture. More simply, it could execute a DAX query over a particular table (in this case Orders) in the tabular model and assert that the data is as we expect (and if not, fail the tests).
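Continuing the same hypothetical binding class, the validation flavour of that step might look like the sketch below - DaxQueryRunner is a hypothetical helper wrapping the ADOMD.NET code shown earlier, and the assertion library is NUnit, but any test framework would do:

[Given(@"there is the following '(.*)' data:")]
public void GivenThereIsTheFollowingData(string tableName, Table expected)
{
	// Pull the table back from the tabular model and check it contains the
	// data the scenarios assume, failing fast if the model has drifted.
	// A fuller implementation would compare the row values as well.
	var actualRows = DaxQueryRunner.Execute($"EVALUATE '{tableName}'");

	Assert.AreEqual(expected.RowCount, actualRows.Count);
}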

The Scenario section describes one example of testing a specific report visualisation. Remember - we're not actually testing the report UI, but the parts of the data model that are being used by a particular report visualisation - in this case the "Daily Orders" chart.

Given I have made the following filter selection shows a pattern that could be followed to add scenario-specific filters to the underlying DAX expression - for example when a user interacts with another visualisation or uses a slicer control to filter what's being shown. The values in the data table can be used to construct the DAX filter expression.
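In the same hypothetical binding class, the filter step could translate each row of the data table into a DAX FILTER clause and stash it for the query step (System.Linq is assumed for the projection):

[Given(@"I have made the following filter selection:")]
public void GivenIHaveMadeTheFollowingFilterSelection(Table filters)
{
	// Builds e.g. FILTER('Date', 'Date'[Year] = 2020) for each row, ready to
	// be spliced into the SUMMARIZECOLUMNS query.
	var filterExpressions = filters.Rows
		.Select(row => $"FILTER('{row["Filter table name"]}', '{row["Filter table name"]}'{row["Filter value"]})")
		.ToList();

	this.scenarioContext["Filters"] = filterExpressions;
}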

When I view the 'Daily Orders' chart specifies the name of the specific visualisation that we're testing. Along with the report name already set in the Background section, we can now target a specific DAX expression that we want to test. Ideally this will have been captured when running the report to ensure the tests are as close to "real life" as possible.


Finally, Then the data should be displayed as follows provides a data table of results that can be validated against the results set that is returned from the DAX query that is executed - essentially the main assertion of the test.
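Putting the When and Then bindings together, a sketch might look like this - DaxQueryLibrary and DaxQueryRunner are hypothetical helpers that return the captured DAX for a given report/chart (with the filters spliced in) and execute it, returning a list of column-name-to-value dictionaries:

[When(@"I view the '(.*)' chart")]
public void WhenIViewTheChart(string chartName)
{
	// "Report" is assumed to have been stashed by the 'I am viewing...'
	// Background step binding (not shown).
	var query = DaxQueryLibrary.GetQueryFor(
		(string)this.scenarioContext["Report"],
		chartName,
		(List<string>)this.scenarioContext["Filters"]);

	this.scenarioContext["Results"] = DaxQueryRunner.Execute(query);
}

[Then(@"the data should be displayed as follows:")]
public void ThenTheDataShouldBeDisplayedAsFollows(Table expected)
{
	var results = (List<Dictionary<string, object>>)this.scenarioContext["Results"];
	var expectedRows = expected.Rows.ToList();

	// One expected row per result row, compared column by column. A real
	// implementation will need to normalise date and number formatting.
	Assert.AreEqual(expectedRows.Count, results.Count);
	for (var i = 0; i < expectedRows.Count; i++)
	{
		foreach (var column in expected.Header)
		{
			Assert.AreEqual(expectedRows[i][column], results[i][column]?.ToString());
		}
	}
}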

Putting it all together, the scenario might execute a DAX expression like this, which we can use to return the data that would be surfaced in the chart visualisation:

EVALUATE(
	SUMMARIZECOLUMNS (
		'Date'[Day],
		FILTER('Date', 'Date'[Year] = 2020),
		"Order count", 'Measures'[CountOfOrders],
		"Total amount", 'Measures'[TotalAmount]
	)
)

Finally, what's great is that this feature is entirely reusable as a template for other report scenarios as they're added, as all of the steps are parameterised, or driven from the data tables.

Conclusion

Whilst testing Power BI reports isn't something that many people think about, it's critical that business rules and associated data modelling steps are validated to ensure the right data insights are available to the right people across the organisation. There's no "obvious" way to test reports, but this post explains that, by connecting to the underlying tabular model, it is possible to execute scenario-based specifications to add quality gates and build confidence in large and complex Power BI solutions.

James Broome

Director of Engineering


James has spent 20+ years delivering high quality software solutions addressing global business problems, with teams and clients across 3 continents. As Director of Engineering at endjin, he leads the team in providing technology strategy, data insights and engineering support to organisations of all sizes - from disruptive B2C start-ups, to global financial institutions. He's responsible for the success of our customer-facing project delivery, as well as the capability and growth of our delivery team.