Testing Power BI Reports with the ExecuteQueries REST API

In a previous post, I wrote about an approach for testing Power BI reports using SpecFlow to define and run DAX queries via the .NET SDKs against the underlying tabular model. We've successfully used this approach in .NET-based environments to add quality gates to our report deployment process - ensuring that iterative development of the report's data model doesn't introduce regression issues.

But whilst Power BI is a Microsoft-based tool, it's clearly ubiquitous enough to be prevalent in environments where .NET isn't used - whether due to team skill set or wider technology choices. At the time, the only way to interact with the tabular model (i.e. treating it like a database) was via the .NET SDKs. However, the recent announcement from the Power BI team about the new ExecuteQueries REST API removes this limitation, meaning it's now possible to connect to a Power BI tabular model and run a DAX query over your data from any programming language.

With that in mind, and driven by a data platform environment where .NET wasn't present, we've since re-implemented the same BDD-based approach to testing Power BI logic in JavaScript - running in Node.js and using the Cucumber.js testing framework. This post looks at how this is possible using the ExecuteQueries REST API, and how it differs from the previously outlined solution using the .NET SDKs.

Testing Reports?

As a reminder, there's no "official support" or recommended approach from Microsoft around testing Power BI reports, so the approach discussed is based on endjin's experience and recommendation. There are no available testing frameworks for Power BI reports, and we have no control over how the reports are rendered in the Power BI interface.


So before we go any further, just as with the .NET-based approach, there's a big caveat to point out -

We're going to be testing the tabular model only and not the visualisation layer. By validating the measures, calculated columns and data modelling are working as we expect, we're effectively testing "the business logic" in our data application, but not the "presentation layer".

That said, even with this caveat, we've found that there's huge value in writing and running tests over the tabular model to validate the business logic that's been added through the data model. Whilst the visualisations themselves are also important - the right visualisation, in the right position, showing the right data points - it's having confidence that the calculations behind those data points are correct that really builds trust in a data solution.

The ExecuteQueries API

The biggest difference from the .NET version is that we're going to rely on the new ExecuteQueries endpoint that is now available in the Power BI REST API. Compared to the .NET SDKs, the ExecuteQueries API has some pros and cons which impact what we can and can't do from a testing perspective.

  1. The REST API only works over the Power BI service - i.e. when a report has been published into a Power BI workspace. There's no option to use the REST API over a Power BI report (.pbix) file stored on your local machine. This means that the testing process can't be run over a report in Power BI Desktop, only once it's available inside a workspace.
  2. However, the ExecuteQueries API is available to all workspaces and doesn't need any form of Premium capacity to be provisioned. The .NET-based approach relied on a Premium-only feature if the report was to be tested inside a workspace, which introduced additional testing costs if Premium wasn't already provisioned.
  3. Whereas the .NET SDK-based approach requires C#/.NET in order to connect to the tabular model, the ExecuteQueries REST API means we can make this connection using any programming language/testing framework.

The table below summarises the pros and cons of both the .NET SDK based approach and the ExecuteQueries API based approach to connecting to the tabular model to run tests.

Feature                                   | .NET SDK | ExecuteQueries REST API
Supported programming languages           | C#/.NET  | Any
Connect when running in Power BI Desktop  | Yes      | No
Connect when running in Power BI Service  | Yes      | Yes
Requires Premium workspace                | Yes      | No

Executing queries via the API

The ExecuteQueries API is well documented - it essentially takes a DAX query expression as an input, targets a specific Power BI dataset, and returns a set of tabular results. Based on this, the tests just need to build up the right query for the scenario that you're testing, and point the request at the right dataset.

DAX queries are passed into the API via the body of a POST request, as follows:

{
  queries: [
    {
      query: `
        EVALUATE(
            SUMMARIZECOLUMNS (
                FILTER('Date', 'Date'[WeekNumber] = 23),
                "Total Sales Count", [Total Sales Count]
            )
        )`
    }
  ],
  serializerSettings: { includeNulls: true }
}

And the result is a collection of values, according to the specific query e.g.

 [ { '[Total Sales Count]': 353 } ]
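Putting the request and response together, here's a minimal sketch of calling the endpoint from Node.js (assuming Node 18+ with built-in fetch, and that the dataset ID and an Azure AD access token for the Power BI API are obtained elsewhere):

// Hypothetical helper: POST a DAX query to the ExecuteQueries endpoint
// and return the rows from the first result table
async function executeDaxQuery(datasetId, accessToken, dax) {
  const response = await fetch(
    `https://api.powerbi.com/v1.0/myorg/datasets/${datasetId}/executeQueries`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        queries: [{ query: dax }],
        serializerSettings: { includeNulls: true }
      })
    }
  );

  if (!response.ok) {
    throw new Error(`ExecuteQueries failed with status ${response.status}`);
  }

  // The response nests results -> tables -> rows
  const { results } = await response.json();
  return results[0].tables[0].rows;
}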

Executable specifications

Now that we know that we can connect to a tabular model (dataset) and execute a query, the final question is how to structure the tests themselves.

We prefer and recommend executable specifications rather than unit tests - describing the behaviour of the system in easily understandable business language - and we can apply this approach to testing Power BI reports quite easily. In a .NET-based solution, this meant using SpecFlow - a .NET test runner implementation of the BDD Gherkin language. But without that restriction, you're free to use any test runner/framework in any language of your choosing - be it JavaScript, Python or anything else. We've implemented a testing framework around Cucumber.js, which adheres to the same BDD Gherkin language as SpecFlow, so the structure of the scenarios remains largely the same.

Taking a "feature-per-report" approach, an example specification for a Weekly Sales report might look something like this:

Feature: Weekly Sales report
    In order to understand if we're meeting our sales targets
    As a sales manager
    I need to see aggregated weekly sales figures

Background:
    Given I am logged in as 'Sales Manager'
    And I am viewing the 'Weekly Sales' report

Scenario: Calculate number of sales for a week
    Given the following filter is applied
        | Filter table | Filter expression |
        | Date         | [WeekNumber] = 23 |
    When I query the 'Total Sales Count' measure
    Then the result should be '353'

Scenario: Calculate sales value for a week
    Given the following filter is applied
        | Filter table | Filter expression |
        | Date         | [WeekNumber] = 23 |
    When I query the 'Total Sales Value' measure
    Then the result should be '4034.30'

Looking at this in a bit more detail, the Background section does the following in the associated step definitions behind the feature:

Given I am logged in as 'Sales Manager' sets up the user context for the tests. This might add a filter expression to the DAX expression that will ultimately be executed, or select the relevant credentials when connecting to the Power BI API (relying on Row Level Security rules defined in the tabular model).

And I am viewing the 'Weekly Sales' report is used to target the correct Power BI Dataset ID when connecting to the API. The ExecuteQueries API endpoint is specific to a dataset over which the queries will be executed.
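As an illustration, these Background steps might be implemented in Cucumber.js along the following lines (getAccessTokenFor and lookupDatasetId are hypothetical placeholders for your own credential and configuration lookups):

const { Given } = require('@cucumber/cucumber');

Given('I am logged in as {string}', async function (role) {
  // Select the relevant credentials for this user context, relying on
  // Row Level Security rules defined in the tabular model
  this.accessToken = await getAccessTokenFor(role); // hypothetical helper
});

Given('I am viewing the {string} report', function (reportName) {
  // Map the report name to the ID of the Power BI dataset that backs it
  this.datasetId = lookupDatasetId(reportName); // hypothetical helper
});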

The Scenario section describes a couple of examples of testing specific measures in the model. Remember - we're not actually testing the report UI, just the data model that supports it.

Given the following filter is applied shows a pattern that could be followed to add scenario-specific filters to the underlying DAX expression. The values in the data table can be used to construct the DAX filter expression dynamically.

When I query the 'Total Sales Count' measure specifies the name of the specific measure that we're testing. Along with the dataset ID that we've retrieved from the Background section, and any filters we want to apply, we can now construct the specific DAX expression that we want to test.

Finally, Then the result should be '353' provides the expected value to validate against the result returned from the DAX query executed via the API - essentially the main assertion of the test.
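Again as a sketch, the scenario steps might then be implemented like this, building on the hypothetical executeDaxQuery helper shown earlier (the DAX construction and assertion logic are illustrative assumptions):

const { Given, When, Then } = require('@cucumber/cucumber');
const assert = require('assert');

Given('the following filter is applied', function (dataTable) {
  // Build a DAX FILTER expression from each row of the data table
  this.filters = dataTable.hashes().map(
    (row) => `FILTER('${row['Filter table']}', '${row['Filter table']}'${row['Filter expression']})`
  );
});

When('I query the {string} measure', async function (measureName) {
  // Construct the DAX expression from the filters and the measure name
  const dax = `EVALUATE(
    SUMMARIZECOLUMNS (
        ${this.filters.join(',\n        ')},
        "${measureName}", [${measureName}]
    )
)`;
  const rows = await executeDaxQuery(this.datasetId, this.accessToken, dax);
  // A single-measure query returns one row, keyed by the measure name
  this.result = rows[0][`[${measureName}]`];
});

Then('the result should be {string}', function (expected) {
  // Compare numerically so that '4034.30' matches a returned 4034.3
  assert.strictEqual(Number(this.result), Number(expected));
});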


Putting it all together, the scenario might execute a DAX expression like this, which we can use to return the value of the measure:

EVALUATE(
    SUMMARIZECOLUMNS (
        FILTER('Date', 'Date'[WeekNumber] = 23),
        "Total Sales Count", [Total Sales Count]
    )
)

Finally, what's great is that this feature is entirely reusable as a template for other scenarios as they're added, as all of the steps are parameterised or driven from the data tables.

Conclusion

Whilst testing Power BI reports isn't something that many people think about, it's critical that business rules and associated data modelling steps are validated to ensure the right data insights are available to the right people across the organisation. There's no "obvious" way to test reports, but this post explains that by connecting to the underlying tabular model it is possible to execute scenario-based specifications to add quality gates and build confidence in large and complex Power BI solutions. And with the new ExecuteQueries REST API endpoint, this approach is no longer limited to .NET-based tests, but can be implemented in any language/framework that you want.

James Broome

Director of Engineering

James has spent 20+ years delivering high quality software solutions addressing global business problems, with teams and clients across 3 continents. As Director of Engineering at endjin, he leads the team in providing technology strategy, data insights and engineering support to organisations of all sizes - from disruptive B2C start-ups, to global financial institutions. He's responsible for the success of our customer-facing project delivery, as well as the capability and growth of our delivery team.