By Matthew Adams, Co-Founder
Prototyping and Proof of Concept: Charting and Data Visualization on the Microsoft Platforms

This post contains some observations on the way we went about building our charting and data visualization components, and the notions of proof-of-concept that emerged along the way.

Getting started

There have been a few questions niggling away at the back of our minds for a while about data visualization and interaction with next-generation user interface technologies and interesting form-factors.

Typical components in this space[1] give you a huge amount of control over the graphical elements of visualization, but limited scope for interaction, with very complex 'knob and lever' APIs to control it all. They also tend to integrate pretty poorly with the rest of the visualization stack.

We wanted to know what kinds of interaction work on these emerging form factors, where the challenges lie (e.g. my great fat fingers obscuring the information I'm looking at as I interact with it on my phone), and how we could encapsulate the best practices we find to simplify APIs and make the developer experience a lot cleaner.

This sounds like an ideal opportunity for a Proof of Concept. But what is a PoC? And what is it not? Different people use the terminology in different ways, but here's how we think about it.

A PoC is not a quick hack

I've been the lead on a number of projects that have been called PoCs over the years, from the Tesco Windows 7 PoC[2] to Milliman's Modern UI Surface application.

One thing that has distinguished our approach to them is the view that a PoC is (very close to) production code.

As a result, we use all the same disciplines as we use to develop any production code. We develop our user personas (in this case, the developers and end-users of various kinds); identify the ways in which they might derive value from the components, along with their biases, preferences and experience; elaborate user journeys, stories and success criteria; establish a solution architecture; and build and test at unit and integration level against the success criteria.

Does this differ in any way from a regular project, then? Yes and no. In a PoC, we tend to concentrate even more on the success criteria, and really focus in on what we are trying to achieve: the keyword is "Proof".

A PoC is not for the end users

The audience is also different. A PoC is targeted at an internal audience, to resolve ambiguities, answer open questions, and mitigate risk, whereas a Product is, ultimately, targeted at its end users.

A PoC is not just a Demo

A lot of "proof of concept" applications don't prove anything much. They're usually demos, and, for some reason, they seem to cram in as many flashy techniques as possible. We've all done it. They look great, but it's best not to look behind the curtain: as long as they do the job in the video (or on the stage), nothing else matters.

A PoC is not a Spike

We use the term "spike" to mean a quick, disposable exploration of a possible technical solution. It will be a normal part of the product development process, and the resources to do it will be included by the delivery team in their regular estimates (so it isn't going to be a big piece of work, either).

Development teams are usually spiking all the time, whether it is acknowledged or not. In an ideal world, you know when you are doing it, and when you are writing real production code, but that's another part of the story.

A PoC might, or might not, be a prototype

Another term often used in this space is "prototype". We tend to steer clear of the word to avoid confusion.

Sometimes a prototype is a proof-of-concept (think of a pre-production prototype build of a product that may not be painted up properly, or housed in the final casing, but illustrates exactly how the final system will be built).

Sometimes a prototype is a Spike ("I'm just printing these gears out on the 3D printer to see if it works...").

And sometimes it is a Demo (a car manufacturer and their concept cars, for example).

3 Tips for a good PoC

1) Understand what you are trying to test

Think around the problem. Don't just frame your question in the form of the simplest, most technically-orientated success path (e.g. "Can we get the social graph data from TwitFace's REST API?"), but in terms of the business problem ("Can we find social links between our existing customer base and our prospects?"), including ongoing cost considerations ("How much developer effort would it be if LinkedOut versioned their API?") and risk mitigation ("How could we continue to offer the service if FaceOff removed their developer API?").

2) Understand how to measure the results

Consider what instrumentation is required to gather appropriate metrics. Are those metrics purely code-based, or do you need to include wider business considerations (TCO metrics, maintenance costs, etc.)? Make it clear what you are not measuring: the known unknowns.

3) Make it "just enough" but "excellent"

Don't leave out something important. Don't include anything not strictly necessary. There's no such thing as a nice-to-have in a PoC. Implement it as you would your production systems, in terms of quality. Don't fail to provide good data because you deliver a golden-path piece of hackery that falters as soon as the first sprinkling of real-world failure conditions is introduced into the mix.

How did this apply to our chart work?

We boiled this down to a few key metrics:

1) What percentage of code reuse could we achieve across our target platforms?

This was the largest part of the project - we had to implement a reasonable cross-section of desired functionality to determine what level of code reuse we could achieve.

A number of solutions were examined, including the use of portable libraries, .NET 4.0 libraries consumed by .NET 4.5, and source-code-level reuse.


We decided to calculate separate metrics for source-code and binary reuse.
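
To make the source-code-level option concrete, here's a minimal sketch of the "linked source file plus conditional compilation" technique. The file and type names are illustrative rather than our component code; the same .cs file is added "as link" to each platform project, and a small #if region isolates the platform-specific surface:

```csharp
// Dispatch.cs - added "as link" to each platform project, so the same
// source file is compiled for WPF, WinRT and WP8. NETFX_CORE is the
// symbol the WinRT project templates define; the other builds fall
// through to the Dispatcher type available on WPF and Windows Phone.
#if NETFX_CORE
using Windows.UI.Core;
#else
using System.Windows.Threading;
#endif
using System;

namespace Charting.Shared
{
    public static class Dispatch
    {
        // Marshal a chart update onto the UI thread. Only this small
        // region differs per platform, so the rest of the file counts
        // towards the source-reuse metric.
#if NETFX_CORE
        public static void OnUiThread(CoreDispatcher dispatcher, Action action)
        {
            var ignored = dispatcher.RunAsync(
                CoreDispatcherPriority.Normal, () => action());
        }
#else
        public static void OnUiThread(Dispatcher dispatcher, Action action)
        {
            dispatcher.BeginInvoke(action);
        }
#endif
    }
}
```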

2) What percentage of code reuse could we achieve in client applications?

We established that copy-and-paste reuse would be the only real solution for client developers in XAML, as there is no better technical solution on offer. Given that, we set our metric to be the percentage of XAML nodes (excluding specific client technology types) that differed between the per-platform versions producing the same result.
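
As a rough illustration of how that node-level metric can be computed (a hypothetical helper, not our production tooling), load the two XAML variants and count the element names with no counterpart in the other version:

```csharp
using System.Linq;
using System.Xml.Linq;

public static class XamlDiffMetric
{
    // Returns the percentage of element nodes that differ between two
    // XAML files. Comparing local names sidesteps the different XML
    // namespaces each client technology uses; Except is set-based, so
    // this deliberately crude version ignores duplicate names.
    public static double PercentDiffering(string xamlPathA, string xamlPathB)
    {
        var namesA = XDocument.Load(xamlPathA)
                              .Descendants()
                              .Select(e => e.Name.LocalName)
                              .ToList();
        var namesB = XDocument.Load(xamlPathB)
                              .Descendants()
                              .Select(e => e.Name.LocalName)
                              .ToList();

        int unmatched = namesA.Except(namesB).Count()
                      + namesB.Except(namesA).Count();
        int total = namesA.Count + namesB.Count;

        return total == 0 ? 0.0 : 100.0 * unmatched / total;
    }
}
```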

We also determined what percentage of those changes were required to support device-specific features such as touch, or different form factors.

In addition, we had a soft "eyeballing" metric for similarity of rendering (bearing in mind the difference in form factors, which was one of the things we were exploring).

Secondly, we looked at the ability to reuse view models and data models across platforms. We picked a few representative usage models (some of which came from client requirements), and determined the percentage of code reuse in those models.
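
The kind of view model we were measuring looks something like this sketch (the names are invented for illustration); because nothing in it touches a platform-specific type, the whole class is a candidate for reuse across WPF, WinRT and WP8:

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;

// A platform-neutral view model: INotifyPropertyChanged and
// ObservableCollection<T> are available on all three target platforms,
// so this code can be shared via a portable library or linked source.
public class ChartViewModel : INotifyPropertyChanged
{
    private string title = string.Empty;

    public event PropertyChangedEventHandler PropertyChanged;

    public ObservableCollection<double> Values { get; } =
        new ObservableCollection<double>();

    public string Title
    {
        get { return this.title; }
        set
        {
            if (this.title != value)
            {
                this.title = value;
                this.OnPropertyChanged("Title");
            }
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = this.PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```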

3) What is the rendering effort per point/per chart type on the different form factors?

We decided to use standard performance counters for in-application CPU usage, and also measured frame rates as we scaled the numbers of points in various chart types.

We set metrics for both rendering a single frame (e.g. a static chart with lots of points) and updating data (e.g. a dynamic chart on the other end of a data stream).
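
On WPF, for example, a harness along these lines is enough (an illustrative sketch, not our full instrumentation): count CompositionTarget.Rendering callbacks to approximate the frame rate, and sample the standard per-process CPU counter alongside it:

```csharp
using System;
using System.Diagnostics;
using System.Windows.Media;

public class RenderProbe
{
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();
    private readonly PerformanceCounter cpu = new PerformanceCounter(
        "Process", "% Processor Time",
        Process.GetCurrentProcess().ProcessName);
    private int frames;

    public void Start()
    {
        // Rendering fires roughly once per composed frame on the UI thread.
        CompositionTarget.Rendering += this.OnRendering;
    }

    private void OnRendering(object sender, EventArgs e)
    {
        this.frames++;
        if (this.stopwatch.ElapsedMilliseconds >= 1000)
        {
            double fps = this.frames * 1000.0 / this.stopwatch.ElapsedMilliseconds;
            Debug.WriteLine("FPS: {0:F1}, CPU: {1:F1}%", fps, this.cpu.NextValue());
            this.frames = 0;
            this.stopwatch.Restart();
        }
    }
}
```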

4) How complex is our API?

We set metrics for code quality of the code/XAML required to produce the charts in our standard scenarios.


We then compared those against the code/XAML required for common customizations (e.g. colours, lines, fill patterns).
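
To give a flavour of the code the metrics ran over (ChartControl and LineSeries here are deliberately tiny invented stand-ins, not our actual API), the baseline scenario and a common customization differ by only a couple of statements:

```csharp
using System.Collections.Generic;

// Invented stand-ins for a chart API, just to show the shape of the
// "standard scenario" versus "common customization" samples measured.
public class LineSeries
{
    public IReadOnlyList<double> Values { get; set; }
    public string Stroke { get; set; } = "SteelBlue";
    public double StrokeThickness { get; set; } = 1.0;
}

public class ChartControl
{
    public List<LineSeries> Series { get; } = new List<LineSeries>();
}

public static class MetricScenarios
{
    // Baseline: the minimum code to put a series on screen.
    public static ChartControl Baseline(double[] values)
    {
        var chart = new ChartControl();
        chart.Series.Add(new LineSeries { Values = values });
        return chart;
    }

    // Customization: the same chart with colour and line weight
    // overridden; the metric looks at the size of this delta.
    public static ChartControl Customized(double[] values)
    {
        var chart = Baseline(values);
        chart.Series[0].Stroke = "DarkOrange";
        chart.Series[0].StrokeThickness = 2.0;
        return chart;
    }
}
```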

We compared our results for the same metrics against popular third-party components.

Don't forget to review!

In the end, the metrics are only of any use if you review the decisions they were meant to inform.

In this case, we've used these results to determine how we shape code for reuse, and to inform the need for more work on the relative performance of WP8, WinRT and WPF in similar rendering scenarios.



  1. There are already a bunch of different solutions for data visualization across the Microsoft stack, like the Modern UI Data Visualization Toolkit, the Microsoft Research Dynamic Data Display Project and others, not to mention the myriad commercial offerings. We've got another one. The feature list is here. Any ideas what we should do with it? (Stick it in the bin!)

  2. By coincidence, we demonstrated that at the same Microsoft PDC where Windows Azure was publicly announced - I remember sitting backstage as Ray Ozzie stalked up and down rehearsing his speech. #namedropping

Matthew Adams

Co-Founder


Matthew was CTO of a venture-backed technology start-up in the UK & US for 10 years, and is now the co-founder of endjin, which provides technology strategy, experience and development services to its customers who are seeking to take advantage of Microsoft Azure and the Cloud.