Adventures in Dapr: Episode 3 - Azure Storage Queues
At the end of the previous episode we had migrated the secrets management of the Dapr Traffic Control sample to use Azure Key Vault, as part of our experiment to evolve an application from using infrastructure-based services to cloud platform services. The plan for this episode is to complete that process by migrating away from the last remaining infrastructure service: the MQTT-based Mosquitto message broker.
MQTT is a lightweight, publish-subscribe, machine-to-machine network protocol. Eclipse Mosquitto is an open source message broker that implements the MQTT protocol.
The application uses MQTT as the communications channel between the simulated speed cameras and the Traffic Control Service. The latter utilises Dapr's Bindings building block to receive those messages.
A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
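To make that concrete, here's a minimal sketch of both directions using the .NET SDK - the my-binding component name and the handler route are hypothetical, not part of the sample:

using Dapr.Client;
using Microsoft.AspNetCore.Mvc;

public class BindingExamplesController : ControllerBase
{
    // Output binding: ask the Dapr sidecar to invoke the external system
    public async Task SendAsync(DaprClient client, object payload)
    {
        // 'my-binding' is a hypothetical component name; 'create' is the operation
        await client.InvokeBindingAsync("my-binding", "create", payload);
    }

    // Input binding: Dapr delivers events by POSTing to a route that
    // matches the component name
    [HttpPost("my-binding")]
    public IActionResult OnBindingEvent([FromBody] object payload) => Ok();
}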
Dapr has a wide range of Bindings implementations with varying capabilities (e.g. send-only, receive-only or both). The MQTT binding provides bi-directional support, although the current usage only requires receiving. This is because the simulated cameras are treated as edge devices and are not written as Dapr-aware applications; they simply send messages directly to the Mosquitto service in response to detected vehicles.
From the documentation above, we can see that there are a number of Azure services which have Dapr bindings implementations. For the purposes of this episode I chose to use Azure Storage Queues:
- Implementation is marked as 'Stable'
- Supports bi-directional connections
- Adds another Azure service into the mix that we aren't already using
Update the infrastructure
As in previous episodes, we'll start with updating our infrastructure-as-code to provision an Azure Storage account that will host the queues we require.
We need to update our components.bicep module to add the required Azure Storage Account resources:
- First we need some additional parameters:
param storageAccountAccessKeySecretName string
param storageAccountName string
param entryCamQueueName string
param exitCamQueueName string
- Add the storage account:
resource storage_account 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
- The application requires 2 queues: one to receive messages when vehicles enter the traffic control zone and another for when they leave. We can set these up via our Bicep template too:
resource storage_queues 'Microsoft.Storage/storageAccounts/queueServices@2021-09-01' = {
  name: 'default'
  parent: storage_account
}

resource entrycam_queue 'Microsoft.Storage/storageAccounts/queueServices/queues@2021-09-01' = {
  name: entryCamQueueName
  parent: storage_queues
}

resource exitcam_queue 'Microsoft.Storage/storageAccounts/queueServices/queues@2021-09-01' = {
  name: exitCamQueueName
  parent: storage_queues
}
- Finally we'll need to make the storage account access key available via our Key Vault:
resource storage_access_key_secret 'Microsoft.KeyVault/vaults/secrets@2021-11-01-preview' = {
  name: storageAccountAccessKeySecretName
  parent: keyvault
  properties: {
    contentType: 'text/plain'
    value: storage_account.listKeys().keys[0].value
  }
}
The new parameters we've added to components.bicep need to be reflected in main.bicep:
module components 'components.bicep' = {
<...>
params: {
<...>
storageAccountName: storageAccountName
entryCamQueueName: entryCamQueueName
exitCamQueueName: exitCamQueueName
storageAccountAccessKeySecretName: storageAccountAccessKeySecretName
}
}
Along with some additional variables to set the values we need:
var storageAccountName = '${prefix}storage'
var entryCamQueueName = 'entrycam'
var exitCamQueueName = 'exitcam'
var storageAccountAccessKeySecretName = 'StorageQueue-AccessKey'
Now run the deploy.ps1 script with the same value for the -ResourcePrefix parameter used previously and you should see the new storage account with its 2 queues provisioned, as well as an additional secret in the Key Vault.
Code Changes
Unlike the previous episodes, switching from MQTT to Azure Storage Queues is going to require a code change. As mentioned above, the console app that runs the simulation currently talks to Mosquitto directly (i.e. it doesn't use Dapr), so the change requires more than just updating Dapr configuration.
The Traffic Control Service, which processes the messages sent by the simulation app, does use Dapr so it can be switched over to Azure Storage Queues with a mere configuration change.
This section will focus on updating the simulation console app; however, we have a couple of options for how to tackle this:
- Use the Azure SDK to send messages to the queues
- Use the Dapr programming model to send messages to the queues
Given that this series is primarily about Dapr and not Azure development, let's choose the latter. However, we should discuss some caveats to this choice when considering a 'real' implementation with cameras (rather than the simulation app we have here):
- The console app is simulating the cameras which are considered 'edge' devices, meaning:
- they are often not directly connected to the infrastructure running the main services
- there may be hardware/processing constraints
- Using the Dapr programming model means we will need a Dapr runtime alongside each camera, which will require additional compute resources (e.g. memory & processing)
- The Dapr runtime for each camera would be standalone, unlike the Dapr 'fabric' formed by each of the Dapr runtimes supporting the main services
- With suitable network connectivity the cameras could be connected to the main 'fabric' via a VPN or other virtual mesh networking technology, but given the use case it seems more likely that the cameras would be treated as being outside of the core network.
With all that said, it will be an interesting exercise to update the console app and allow it to make use of Dapr for sending its messages. After this work is done, should we wish to change the messaging technology again, then no code changes ought to be necessary.
The first step is to add a reference to Dapr.Client in the console app's project:
cd src/Simulation
dotnet add package Dapr.Client
Next we need to implement a Dapr-flavoured version of the proxy class used to communicate with the Traffic Control Service. We'll leave the MQTT implementation intact and create a new file for the class in src/Simulation/Proxies/DaprTrafficControlService.cs:
namespace Simulation.Proxies;

using Dapr.Client;

public class DaprTrafficControlService : ITrafficControlService
{
    private readonly DaprClient _client;

    public DaprTrafficControlService(int camNumber)
    {
        // camNumber is unused here, but the signature mirrors the MQTT proxy
        _client = new DaprClientBuilder().Build();
    }

    public async Task SendVehicleEntryAsync(VehicleRegistered vehicleRegistered)
    {
        // DaprClient serialises the payload to JSON for us, so we pass the
        // object directly rather than pre-serialising it ourselves
        await _client.InvokeBindingAsync("entrycam", "create", vehicleRegistered);
    }

    public async Task SendVehicleExitAsync(VehicleRegistered vehicleRegistered)
    {
        await _client.InvokeBindingAsync("exitcam", "create", vehicleRegistered);
    }
}
This builds a Dapr client and uses it to send requests to the existing entrycam and exitcam Dapr components - which thus far have only been used by the Traffic Control Service.
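For context, the Traffic Control Service handles those input bindings with controller actions whose routes match the component names - roughly along these lines (a simplified sketch, not the sample's exact code):

[Route("")]
public class TrafficController : ControllerBase
{
    // Dapr POSTs each message from the 'entrycam' component to this route
    [HttpPost("entrycam")]
    public ActionResult VehicleEntry(VehicleRegistered msg)
    {
        // record the entry timestamp for the vehicle
        return Ok();
    }

    // ...and each message from the 'exitcam' component to this one
    [HttpPost("exitcam")]
    public ActionResult VehicleExit(VehicleRegistered msg)
    {
        // calculate the average speed and issue a fine if necessary
        return Ok();
    }
}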
Finally we update line 6 in Program.cs to swap the existing MQTT implementation for our new Dapr one:
<...>
for (var i = 0; i < lanes; i++)
{
<...>
var trafficControlService = new DaprTrafficControlService(camNumber);
<...>
}
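As an aside, if you wanted to keep both implementations side by side, a small factory could choose between them at runtime - a hypothetical sketch (the USE_MQTT variable isn't part of the sample, and it assumes the MQTT proxy exposes a matching constructor):

// Hypothetical helper: pick the proxy implementation at runtime so that
// switching messaging technology needs no further code changes
static ITrafficControlService CreateTrafficControlService(int camNumber) =>
    Environment.GetEnvironmentVariable("USE_MQTT") == "true"
        ? new MqttTrafficControlService(camNumber)  // assumed constructor
        : new DaprTrafficControlService(camNumber);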
Running the Simulation App
With the code change complete, we can try running the console app. Now that it uses Dapr, we need to launch it as a self-hosted Dapr application:
dapr run `
--app-id simulation `
--dapr-http-port 3603 `
--dapr-grpc-port 60003 `
--config ../dapr/config/config.yaml `
--components-path ../dapr/components `
dotnet run
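Note that the port numbers don't need to appear anywhere in our code: DaprClientBuilder reads the DAPR_HTTP_PORT and DAPR_GRPC_PORT environment variables that dapr run sets for the launched process. Should you ever need to point the client at a specific sidecar, the builder lets you override the endpoint - for example:

using Dapr.Client;

// Optional override - by default the builder uses the ports from the
// DAPR_GRPC_PORT/DAPR_HTTP_PORT environment variables set by 'dapr run'
var client = new DaprClientBuilder()
    .UseGrpcEndpoint("http://localhost:60003") // matches --dapr-grpc-port above
    .Build();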
If we try running it now, we'll get errors about not being able to bind to its components as we haven't yet updated the Dapr component configuration.
Update Dapr components
Whilst the above changes don't introduce any new Dapr components, there are some changes needed to existing ones:
- Configure the entrycam and exitcam components to use Azure Storage Queues rather than MQTT
- Ensure the newly 'Dapr-ised' simulation app is granted access to the necessary components
The documentation for the Azure Storage Queue binding is available here.
Replace the bindings.mqtt spec block in entrycam.yaml - replacing <storage-account-name> with the name of the storage account you provisioned above:
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: storageAccount
    value: <storage-account-name>
  - name: storageAccessKey
    secretKeyRef:
      name: StorageQueue-AccessKey
  - name: queue
    value: entrycam
Do the same in exitcam.yaml:
spec:
  type: bindings.azure.storagequeues
  version: v1
  metadata:
  - name: storageAccount
    value: <storage-account-name>
  - name: storageAccessKey
    secretKeyRef:
      name: StorageQueue-AccessKey
  - name: queue
    value: exitcam
In both cases above we are referencing a Dapr secret to obtain the access key for authenticating to the storage account. Our Bicep changes above have already stored that value in the Key Vault we added in episode 2, so we just need to tell the component which secret store to use - recall that we do this using the auth setting:
auth:
  secretStore: trafficcontrol-secrets-kv
Add this to both entrycam.yaml and exitcam.yaml to wire up the Key Vault secret store.
The final change for these 2 components is to allow the simulation app-id to use them:
scopes:
- trafficcontrolservice
- simulation
Our console app is, indirectly, also going to need permission to use the secrets-envvars and secrets-keyvault components, so that it can look up the secret values referenced by entrycam and exitcam. As above, this is accomplished by adding the simulation app-id to the scopes list in secrets-envvars.yaml and secrets-keyvault.yaml.
Testing our efforts
We are now ready to test our changes - if you've not yet made the changes yourself, you can follow along using the blog/episode-03 branch in the supporting GitHub repo.
In the infrastructure section above, the output from deploy.ps1 included some PowerShell that will set the environment variables we need to authenticate to the Key Vault (for more information about this setup, refer to the previous episode). Copy, paste and execute those 3 lines in the terminal window that you are going to launch the solution from:
$env:AZURE_CLIENT_ID = "<app-id-guid>"
$env:AZURE_CLIENT_SECRET = "<password>"
$env:AZURE_TENANT_ID = "<tenant-id-guid>"
If you run the simulation app before starting any of the other services, you can use the Azure Portal to see messages stack up in the queues.
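If you'd rather check from code than the portal, a quick peek with the Azure Storage SDK would look something like this (a sketch assuming the Azure.Storage.Queues package and your storage account's connection string):

using Azure.Storage.Queues;

// Peek at pending messages without dequeuing them
var queue = new QueueClient("<connection-string>", "entrycam");
var messages = await queue.PeekMessagesAsync(maxMessages: 10);
foreach (var message in messages.Value)
{
    // the body may be base64-encoded, depending on the binding's settings
    Console.WriteLine(message.Body.ToString());
}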
As in the first post, you can use the run-all-self-hosted.ps1 script to easily launch the other services - if you've not done this before, refer to the earlier post.
You can also refer to the 'Testing our changes' section of a previous post for what to look out for when running the solution.
If things aren't working then the 'Troubleshooting' section of an earlier post covers some common issues.
Review
If everything worked, you should have seen the sample app running exactly as it did at the end of the previous post; however, with our efforts we have achieved the following:
- Replaced the MQTT infrastructure service with the Azure Storage Queues platform service
- Experimented with using the Dapr programming model to abstract away the messaging implementation
The plan for the next episode is to get the services running as a collection of containerised applications.