TL;DR - This series of posts shows how you can integration-test Azure Functions projects using the open-source Corvus.SpecFlow.Extensions library. It walks through the different ways you can use the library in your SpecFlow projects to start and stop function app instances for your scenarios and features.
In the first post in this series, we introduced the Corvus.SpecFlow.Extensions library. In this post, we're going to look at the simplest way of using it to start function apps for testing purposes: the provided step bindings.
The Corvus.SpecFlow.Extensions project contains a Given step definition for the following pattern:
If you include steps that match this pattern in your scenario, they will cause the functions defined in the specified project to be run, with HTTP functions listening on the specified port. If your function doesn't actually have any HTTP endpoints, you can supply a dummy value for the port. The runtime will most likely be either netcoreapp3.1 (for Functions v3) or netcoreapp2.1 (for Functions v2).
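As a sketch of how such a step might read in a feature file - the exact step text, along with the project name, port, and runtime shown here, are illustrative rather than the definitive syntax for your version of the library:

```gherkin
Scenario: Retrieve a greeting from the API
    # 'MyDemo.Functions', port 7075 and the runtime are example values
    Given I start a functions instance for the local project 'MyDemo.Functions' on port 7075 with runtime 'netcoreapp3.1'
```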
The project to run is currently resolved based on some assumptions about how your solution folder is structured. Essentially, it assumes that all of your projects, including the test project, are contained in a folder called Solutions, and that the value you pass for the project parameter is a folder directly under that Solutions folder. (Note - there is currently an open issue on GitHub to make this more flexible - contributions are welcome!)
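To make that concrete, here is a folder layout that would satisfy those assumptions (the project names are invented for illustration). Passing MyDemo.Functions as the project parameter would resolve to the Solutions/MyDemo.Functions folder:

```
Solutions/
    MyDemo.Functions/
        MyDemo.Functions.csproj
    MyDemo.Functions.Specs/
        MyDemo.Functions.Specs.csproj
    MySolution.sln
```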
As well as bindings for these steps, there's an additional AfterScenario hook that goes with them to tear down the functions instances they start. You can start multiple functions in a single scenario using these bindings if necessary.
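For example, a scenario that depends on two separate function apps could start both via these bindings - the project names, ports, and exact step text here are again illustrative - and the AfterScenario hook will tear both down once the scenario finishes:

```gherkin
Scenario: The front-end API aggregates data from the back-end API
    Given I start a functions instance for the local project 'MyDemo.FrontEnd' on port 7075 with runtime 'netcoreapp3.1'
    And I start a functions instance for the local project 'MyDemo.BackEnd' on port 7076 with runtime 'netcoreapp3.1'
```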
Viewing function output
Once the test run is complete, output from the function apps can be seen in the Test Detail Summary. In Visual Studio, this is visible in the Test Explorer by selecting the scenario that's been executed. Clicking the link "Open additional output for this result" will show SpecFlow's standard output capture.
This output starts with the output from the BeforeScenario binding, showing the solution and runtime location. If starting the function failed for some reason, you'd most likely see the reason here. This is followed by SpecFlow's standard per-step output. Finally, the output from the AfterScenario binding is shown, which is where the StdOut and StdErr for each function are added.
Note that the log shown in this window is frequently a truncated version of the whole. If this is the case, you'll see a message explaining how to access the full log by copying and pasting into another tool.
Advantages of this method
Using step bindings in this way makes it crystal clear to the developer what's going on as part of their spec. You can easily see which functions are being run and on what ports.
Disadvantages of this method
Whilst it's nice for developers to see exactly what technical setup is taking place, this does go against the goals of Behaviour Driven Development. Specifically, we should be striving to make the feature readable in the end user's language. When testing an API using a BDD spec, you can make a case that the end user whose language we should be using is a technical one - the consumers of APIs are most likely to be developers - but even so, this is an overly technical step to include in your scenarios.