Day 4 began with a code review led by Mike. The aim of the session was to come up with a plan for refactoring Endjin's membership framework for user authentication and authorisation.
The library uses Users, Groups, Roles and Permissions to provide Role-based access management, which offers granular control over the tools and web pages each Role can access.
Mike wanted to discuss the structure and performance of the framework, to see what improvements could be made without affecting its public API. He talked us through his initial thoughts on how the solution could be better organised. There was general agreement that certain functions should be pulled out into a new layer, to keep the structure clean.
We then took a step back to look at the performance implications of the way in which the framework carries out authorisation. The existing implementation is based on Windows Azure Table Storage, which is not ideally suited to modelling the relationships between Users, Groups, Roles and Permissions which affect authorisation, as it's essentially a key-value store.
The authorisation process involves multiple round trips to these Azure Table Storage structures. The effects of this are most apparent when a new plugin is added to a site. Vellum, Endjin's Azure-based CMS, supports extensibility via a plugin infrastructure.
Each plugin is able to define authorisation demands for the features it supports. The security framework used by Vellum requires that a plugin create all the custom permissions it may need when it is registered, causing a large number of data store requests.
Matthew sketched out an optimisation to the authorisation process which would reduce the number of requests made to the data store. He suggested that we limit the number of requests by caching the full set of data relating to a particular user. A further simple optimisation is to perform an HTTP HEAD request to see if any data has changed.
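A minimal sketch of the second idea, assuming the cached resource is exposed at a URL that returns an ETag header (the class name and the idea of comparing ETags are illustrative, not the actual implementation):

```csharp
using System;
using System.Net;

// Hypothetical check: issue an HTTP HEAD request and compare the
// resource's current ETag with the one recorded when we cached the
// data. HEAD returns headers only, so no body is transferred.
public static class CacheValidator
{
    public static bool HasChanged(string url, string cachedETag)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "HEAD";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            string currentETag = response.Headers[HttpResponseHeader.ETag];
            return currentETag != cachedETag;
        }
    }
}
```

Only when `HasChanged` returns true does the cache need to be refreshed with a full GET.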
The group moved on to look at the authentication process. An improvement was suggested which would allow the library to integrate with third party tools, so that users wouldn't have to provide a new password for the site served by the library.
Returning to the issue of storage, Mike, Matthew & Howard discussed the membership framework's storage functionality and supported data store types.
To enable a migration to a more suitable data store, and to add a layer of abstraction which will open up the framework to multiple data store types in future, the framework will be refactored to use the Repository pattern. This has already been implemented for other Endjin libraries.
A Repository is a class which persists and retrieves arbitrary types, acting as an abstraction over storage and shielding the rest of the application from details such as the connection string. A Repository class typically makes use of Expression Trees, built from Expression objects which represent functions. This allows a function to be turned into a data query (e.g. SQL) at runtime, without the Repository having to be aware of the types it is working with.
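As a rough sketch (the interface shape here is illustrative, not Endjin's actual API), a Repository built around expression trees might look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Illustrative Repository interface: callers describe *what* they want
// as an Expression<Func<T, bool>>; the implementation decides *how* to
// fetch it, hiding details such as the connection string.
public interface IRepository<T>
{
    void Add(T item);
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
}

// In-memory stand-in. A data-store-backed implementation would walk
// the expression tree and translate it into a query (e.g. SQL) at
// runtime, without needing any up-front knowledge of T.
public class InMemoryRepository<T> : IRepository<T>
{
    private readonly List<T> items = new List<T>();

    public void Add(T item)
    {
        items.Add(item);
    }

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
    {
        // In memory we can simply compile the expression to a delegate;
        // a SQL-backed repository would inspect the tree instead.
        return items.Where(predicate.Compile());
    }
}
```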
Finally, Matthew, Mike & Howard discussed a migration strategy to manage the effect on users when these changes were released. A simple approach was devised, where an unregistered Controller could carry out the upgrade when required.
The review rounded off with a decision about how the changes should affect the NuGet package version for the library: because there were major structural alterations, it should be a major version change, although no change to the API was involved.
By the end of the session, a set of optimisations which should provide significant performance benefits had been decided on, and my head was spinning somewhat from all the new concepts I'd come across.
Behaviour Driven Development
My task, guided by Mike, was to create a set of tests for the AtomTask class I'd made earlier in the week. We made sure that the Projects used for testing had references to the necessary assemblies, using the neat trick of 'Manage NuGet Packages for Solution', choosing the package, then selecting all applicable Projects in the solution.
The first step to creating an executable specification was to create a SpecFlow .feature file. Through the .feature file, SpecFlow lets you describe test scenarios using a set of natural language statements, with the following format:
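For example, a scenario might read something like this (the wording is a made-up illustration, not the actual .feature file):

```gherkin
Feature: Atom feed generation
  Scenario: Publishing a page adds an entry to the feed
    Given a page store containing a published page titled "Hello"
    When the Atom feed is generated
    Then the feed should contain 1 entry
```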
Once a scenario has been set out, the library lets you generate a C# 'steps' class which contains the outlines of the corresponding tests for each step of a scenario.
Each natural language scenario step in the .feature file is represented as a method in the steps class. Numbers and quoted phrases in the .feature file are interpreted as method parameters in the steps class. It is even possible to pass in a table-like range of values:
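A table of values is written directly under the step, pipe-separated (again, a hypothetical illustration):

```gherkin
Given the following pages exist:
  | Title   | Author | DatePublished |
  | Hello   | James  | 2013-07-04    |
  | Goodbye | Mike   | 2013-07-05    |
```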
These values are used to create a set of objects in the steps class, using SpecFlow's table helper extensions.
Mapping between the natural language steps in a .feature file scenario and C# step definitions in the steps class is achieved through the quoted sentence above the method definition in the steps class.
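In outline, a couple of step definitions might look like this (a sketch assuming the SpecFlow package is referenced; `Page` is an invented helper type, and the step wording is illustrative):

```csharp
using TechTalk.SpecFlow;
using TechTalk.SpecFlow.Assist;

// Invented helper type for illustration.
public class Page
{
    public string Title { get; set; }
    public string Author { get; set; }
}

[Binding]
public class PageSteps
{
    // The quoted sentence in the attribute is matched against the
    // natural-language step in the .feature file; (\d+) and quoted
    // phrases become method parameters.
    [Given(@"the site contains (\d+) pages by ""(.*)""")]
    public void GivenTheSiteContainsPagesBy(int pageCount, string author)
    {
        // ... arrange the test state here ...
    }

    // A table under a step arrives as a Table argument; SpecFlow's
    // Assist extension CreateSet<T> maps its rows onto objects.
    [Given(@"the following pages exist:")]
    public void GivenTheFollowingPagesExist(Table table)
    {
        var pages = table.CreateSet<Page>();
        // ... store pages for later steps ...
    }
}
```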
However, we found that the library could not distinguish between the following scenario steps, and it was necessary to change the order of the parameters of the second step to disambiguate them.
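The clash was similar in shape to this hypothetical pair: because the pattern generated for a quoted value is greedy, a one-parameter step definition can also match a two-parameter sentence, leaving SpecFlow with two candidate bindings:

```gherkin
Then the feed contains the entry "Hello"
Then the feed contains the entry "Hello" with the date "2013-07-04"
```

Rearranging the second step so its parameters no longer extend the first step's sentence (e.g. "Then the entry "Hello" in the feed has the date "2013-07-04"") gives each step a distinct pattern.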
As we wrote the tests, I saw how Moq came into the Behaviour Driven Development process.
As the name suggests, Moq is used to mock up objects which are required by the class under test, without having to worry about actually creating other complex types. We used Moq to simulate a PageStore object.
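In outline, mocking the page store looked something like this (the `IPageStore` interface and its members are invented for illustration; assuming the Moq package is referenced):

```csharp
using Moq;

// Invented interface standing in for the real page store abstraction.
public interface IPageStore
{
    Page GetPage(string id);
}

public class Page
{
    public string Title { get; set; }
}

public static class PageStoreMocking
{
    public static IPageStore CreateFakePageStore()
    {
        // Moq builds a fake implementation at runtime; we only script
        // the behaviour the test actually needs.
        var pageStore = new Mock<IPageStore>();
        pageStore
            .Setup(s => s.GetPage("home"))
            .Returns(new Page { Title = "Hello" });

        return pageStore.Object; // the generated fake
    }
}
```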
The Steps class uses a SpecFlow ScenarioContext object which holds an in memory representation of the current state of the elements under test – in each step definition you can store the state of an object to the ScenarioContext, along with a reference to a String which acts as a key.
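Stashing and retrieving state between steps is a one-liner each way, via the ScenarioContext indexer (the key and the step wording here are illustrative):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class FeedSteps
{
    [When(@"the Atom feed is generated")]
    public void WhenTheAtomFeedIsGenerated()
    {
        var feed = "...";                       // stand-in for the generated feed
        ScenarioContext.Current["Feed"] = feed; // store under a string key
    }

    [Then(@"the feed should not be empty")]
    public void ThenTheFeedShouldNotBeEmpty()
    {
        // Retrieve by the same key and cast back to the stored type.
        var feed = (string)ScenarioContext.Current["Feed"];
        // ... assert on feed here ...
    }
}
```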
I worked on the tests independently for a while, and then Mike helped me complete the final test scenario which checked whether the entries in the Atom Feed had the correct values.
We created a simple helper class with string or DateTime properties, corresponding to the properties we'd be checking in the AtomEntries, and created several instances of this class by reading in the table-like list of values specified in the Features file.
In order to iterate through the AtomEntries in the AtomFeed and check their properties, we called ToList() on the AtomFeed. This enabled us to use a loop which carried out an NUnit 'Assert.AreEqual' test for each property under test, comparing it to the corresponding property of the matching instance of the helper class.
Because Argotic's AtomEntry class uses complex types such as AtomTextConstruct, we had to convert these values to Strings before carrying out a comparison.
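Put together, the final check looked roughly like this (a sketch: `ExpectedEntry` stands in for the helper class, and the property names on Argotic's types are assumptions; NUnit provides the assertions):

```csharp
using System.Collections.Generic;
using System.Linq;
using Argotic.Syndication;
using NUnit.Framework;

// Hypothetical helper holding the values read from the feature file table.
public class ExpectedEntry
{
    public string Title { get; set; }
    public string Summary { get; set; }
}

public static class FeedAssertions
{
    public static void AssertEntriesMatch(AtomFeed feed, List<ExpectedEntry> expected)
    {
        // ToList() lets us pair each AtomEntry with its expectation by index.
        var entries = feed.Entries.ToList();
        Assert.AreEqual(expected.Count, entries.Count);

        for (int i = 0; i < entries.Count; i++)
        {
            // AtomEntry.Title is an AtomTextConstruct, so compare its
            // string content rather than the complex type itself.
            Assert.AreEqual(expected[i].Title, entries[i].Title.Content);
            Assert.AreEqual(expected[i].Summary, entries[i].Summary.Content);
        }
    }
}
```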
Once the tests were complete, they proved their worth by discovering an error in the Task under test – I'd been setting the lastUpdated property rather than the datePublished property of the AtomEntry.
In the process of writing these tests, I learnt some more handy shortcuts:
- You can select a vertical range covering multiple lines, by holding Alt while clicking then dragging the mouse over the area. This is known as block selection and is useful if you want to paste something into the same position in a set of lines – e.g. if you have forgotten to put an opening quote mark on a list of strings.
- Using ReSharper, you can move a method up above another method by just clicking on the method definition and pressing Ctrl + Shift + Alt and the Up Arrow key.
- Similarly, you can re-order parameters with ReSharper using Ctrl + Shift + Alt and the Left or Right Arrow keys.
This was a very satisfying day, with a lot of information to absorb.