By Jonathan George, Software Engineer IV
NDC London Day 2 Retrospective - Full Stack, from SPAs to high performance .NET Core via Websockets

So, another packed day at NDC has come to an end. Following on from my day 1 retrospective, here's a rundown of my day.

The State of Vue.js in 2020

I had intended to start the day with Troy Hunt's "The Internet of pwned things" talk, but changed my mind at the last minute. At endjin, we're currently considering adopting Vue.js as our preferred UI framework - at least for our internal apps - so I decided it was worth finding out what's coming. Hence my decision to listen to Gwen Faraday talk about the state of Vue in 2020 and what's on the way in Vue.js 3.

The talk started with a bit of background: a look at the origins of Vue and the influences it takes from Angular, React and even jQuery. Although it hasn't quite caught up with the big boys yet - Angular and React are still ahead on all the metrics, both good and bad, by which we measure these things - Vue is relatively easy to learn and straightforward to add to existing sites. This means it's commonly used for beginner projects and rapid prototyping of new ideas, as well as for more standard production use cases. That matches our experience at endjin, where we've seen just how quickly those familiar with the framework can put together a "working prototype".

Gwen gave us a quick tour of the CLI tools that are part of Vue and a walkthrough of features such as data binding (both one- and two-way), reactivity, built-in support for animations and some of the more advanced form bindings. She then moved on to a look at the upcoming Vue 3.0 release.


The big news for the next version of Vue is the addition of a new API - the Composition API. Currently, components are built using the Options API, in which the different elements of a component - properties, data, methods, lifecycle hooks - are grouped by type. For larger components this can mean related code ends up spread across the component rather than sitting together. The new Composition API changes this by providing a setup method in which you declare the data, computed properties and methods your component needs, keeping related code in one place.
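
To make the difference concrete, here's a minimal sketch of a component written against the Vue 3 Composition API - the counter itself is purely illustrative:

```ts
// Minimal Vue 3 Composition API sketch: state, derived state and
// behaviour all live together in setup(), rather than being split
// across the data/computed/methods sections of the Options API.
import { defineComponent, ref, computed } from 'vue';

export default defineComponent({
  setup() {
    const count = ref(0);                            // reactive state
    const doubled = computed(() => count.value * 2); // derived state
    const increment = () => { count.value += 1; };   // behaviour

    // Everything the template needs is returned from one place.
    return { count, doubled, increment };
  },
});
```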

Personally I've never had the "large components" issue that Gwen described, but for me this has a huge upside: it will become much easier to work on Vue.js projects using TypeScript. Whilst it's possible to do that now, it involves various additional tricks that exist primarily to persuade the TypeScript compiler that you're writing valid code. The introduction of the new approach should see that change, smoothing the path for TypeScript adoption in Vue projects. It's no surprise that TypeScript support is better in Vue 3, as the whole framework has been rewritten in TypeScript.

Gwen also gave us a run through of other key points from the new release, such as access to lower-level APIs and performance gains, and finished with a look at the ecosystem and the equivalent libraries across Vue, React and Angular for scenarios such as server-side rendering, static site generation and so on.

This was a really useful talk that filled in a few gaps in my understanding of Vue and provided some helpful information on what's coming in the next few months.

Building a real-time serverless app in Blazor using AWS

My second session of the day was from Martin Beeby, on building real-time apps on AWS using Blazor. The title was somewhat misleading, as the Blazor aspects of the talk were minimal - it was much more about the underlying detail of building a real-time app using WebSockets, and how a WebSocket-based app can be made to scale well on AWS.

https://twitter.com/jon_george1/status/1223193162874834944

Martin began with a run through of what he means by real-time apps - things like chat apps, shared whiteboards and so on. All of these share the characteristic that the server needs the ability to push data out to the client, which is fundamentally at odds with the pull-based design of the web.
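
The fundamental building block here is the WebSocket. For anyone who hasn't worked with them, the browser side looks something like this minimal sketch (the endpoint and message shape are illustrative) - note that messages arrive whenever the server chooses to send them:

```ts
// Browser-side sketch: once the socket is open, the server can push
// messages at any time without the client asking for them.
const socket = new WebSocket('wss://example.com/chat'); // illustrative endpoint

socket.addEventListener('open', () => {
  // The client can still send messages whenever it wants to...
  socket.send(JSON.stringify({ action: 'sendMessage', text: 'hello' }));
});

socket.addEventListener('message', (event) => {
  // ...but this fires whenever the server decides to push - no polling.
  console.log('Server pushed:', event.data);
});
```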

He then moved on to showing a typical WebSocket-based real-time app using SignalR, and talked about the scaling problems inherent in WebSocket-based applications. Simply put, a server can only maintain so many concurrent TCP connections, so there's a hard limit on the number of concurrent WebSocket clients it can support. This means scaling out is required, which brings two problems.

Firstly, a WebSocket connection is between a specific client and server, which means network load balancers need to use sticky sessions to avoid problems.

Secondly, multiple servers means that each server only knows about a subset of clients, and there is no guarantee that a server receiving a message from a client will be connected to all of the clients it may need to notify as a result.

There are a couple of standard solutions to this when using SignalR. The first, which Martin covered in his talk, is to use a Redis backplane, which uses pub/sub messaging to ensure that every server instance receives every message. This is effective, but significantly increases complexity, as you now have to manage a Redis cluster as well as your SignalR servers - and there's obviously a cost implication too. The second, which wasn't covered and which is (as the name suggests) Azure specific, is the Azure SignalR Service.
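
SignalR's Redis backplane is configured on the .NET side, but the underlying mechanism is plain pub/sub. As a conceptual sketch of that mechanism - not SignalR's actual implementation, and using the node-redis client with illustrative channel and function names - each server publishes every message it receives to a shared channel that all servers subscribe to:

```ts
// Conceptual pub/sub backplane sketch using node-redis (v4 API).
// Every server publishes incoming messages to a shared channel and
// subscribes to that same channel, so each instance sees every message
// and can forward it to the WebSocket clients connected to it.
import { createClient } from 'redis';

async function startBackplane(broadcastToLocalClients: (msg: string) => void) {
  const publisher = createClient({ url: 'redis://localhost:6379' });
  const subscriber = publisher.duplicate(); // separate connection for subscribing

  await publisher.connect();
  await subscriber.connect();

  // Receive messages relayed by any server in the cluster.
  await subscriber.subscribe('messages', (message) => {
    broadcastToLocalClients(message);
  });

  // Relay a message from one of our own clients to the whole cluster.
  return (message: string) => publisher.publish('messages', message);
}
```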

Martin then took us through a third solution - Amazon API Gateway - which can accept WebSocket connections and map messages sent over those connections to back-end services. The mapping is all done via configuration in the AWS console. When incoming messages are passed to the back-end services - in this case, Lambda functions - they carry a connection identifier, which the service can use to send data back to the client via a POST to a well-known URL on the gateway. The same URL can also be used to GET the state of the connection (i.e. is the client still connected) or to DELETE it (i.e. close the WebSocket connection).
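
Martin's demos were .NET-based, but to show the shape of the pattern, here's a hedged sketch of a Node.js Lambda handler replying down the caller's WebSocket connection via the gateway's connection management URL (the payload is illustrative):

```ts
// Sketch of a Lambda handler for an API Gateway WebSocket route.
// The connection id arrives on the event; replies go back as a POST
// to the gateway's connection management endpoint.
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from '@aws-sdk/client-apigatewaymanagementapi';

export const handler = async (event: any) => {
  const { connectionId, domainName, stage } = event.requestContext;

  // The well-known callback URL for this deployment of the gateway.
  const client = new ApiGatewayManagementApiClient({
    endpoint: `https://${domainName}/${stage}`,
  });

  // Push a message back down the caller's WebSocket connection.
  await client.send(new PostToConnectionCommand({
    ConnectionId: connectionId,
    Data: Buffer.from(JSON.stringify({ message: 'pong' })), // illustrative payload
  }));

  return { statusCode: 200, body: 'OK' };
};
```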

This is a really interesting approach and I can see it working well. We keep the advantages WebSockets bring - the ability to push data, smaller payloads, and so on - while also gaining those that come from implementing the back end as a set of stateless, serverless functions.

With all that said, the equivalent Azure combination of the SignalR Service and Azure Functions does look more straightforward - for example, Azure Functions has SignalR-specific bindings for sending messages back to connected clients, which removes a chunk of the plumbing required in the AWS case.

Martin's parting message was that Amazon want AWS to be the cloud platform of choice for hosting .NET code. It's an ambitious goal, but can only bring good things for those of us working in the Microsoft ecosystem.

Wait, I have to test the front end too?

https://twitter.com/ActiveSE/status/1222847032764850177


Next up was a session on front-end testing by Amy Kapernick. I was expecting this to be about the different options for unit and end-to-end testing on the front end, but while some of this was covered at the end - mainly in the form of Cypress - Amy actually spent more time talking about other types of testing and how to approach them.

She started by giving an overview of linting for both JavaScript and CSS, showing ESLint and Stylelint, both of which we already use at endjin. She then moved on to the topic of accessibility testing, specifically against the WCAG guidelines.

I've been familiar with these guidelines for years now - having worked on government websites where accessibility tends to be a much larger concern than you see elsewhere, I know the pain of trying to meet some of the more challenging guidelines. Back in the day, testing these was quite a manual process, often involving pushing your code through various online tools to track down issues.

Now, however, it turns out there are better ways. Amy gave us a demo of Pa11y, a tool that runs against your web pages and produces a JSON report of the accessibility issues it finds. This is really useful, as it can be integrated into a CI pipeline, allowing you to make sure your site remains accessible as you continue to add features.
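
As an illustration of what that integration might look like, Pa11y exposes a Node API alongside its CLI. This sketch - my example rather than Amy's, with an illustrative URL and options - fails the build if any issues are found:

```ts
// Sketch of a CI accessibility check using Pa11y's Node API.
// URL and options are illustrative.
import pa11y from 'pa11y';

async function checkAccessibility(): Promise<void> {
  const results = await pa11y('http://localhost:8080/', {
    standard: 'WCAG2AA', // the WCAG conformance level to test against
  });

  if (results.issues.length > 0) {
    // Emit the JSON report and fail the build.
    console.error(JSON.stringify(results.issues, null, 2));
    process.exit(1);
  }
}

checkAccessibility().catch((error) => {
  console.error(error);
  process.exit(1);
});
```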

She then moved on to another tool that was new to me, BackstopJS. This is for visual regression testing, with the goal of catching the inadvertent breaks to UI layouts that can happen when people tweak stylesheets as part of adding new functionality.

Clearly there's a lot of discussion to be had about how you prevent this happening in the first place, but anyone who's done front-end development in anger will be familiar with the problem, and historically the only way to find out whether anything's broken has been to have someone look at all the pages and hope they're paying attention. BackstopJS looks like a really useful tool to have, although not one you could introduce right at the beginning of the development process - you'd clearly need to wait until your UI has stabilised. It has a built-in workflow, allowing you to control the threshold at which it reports differences and to accept differences as the new reference points when things change deliberately. It seems like a well thought out tool that I'd definitely look at adopting for future projects.
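
For a flavour of how it's driven, BackstopJS takes a config file of scenarios and viewports; a stripped-down sketch (all values illustrative) might look like this:

```js
// backstop.config.js - a stripped-down BackstopJS config sketch.
// Run with: backstop test --config=backstop.config.js
module.exports = {
  id: 'my-site',
  viewports: [
    { label: 'phone', width: 375, height: 667 },
    { label: 'desktop', width: 1280, height: 800 },
  ],
  scenarios: [
    {
      label: 'Home page',
      url: 'http://localhost:8080/',
      misMatchThreshold: 0.1, // % difference tolerated before a failure is reported
    },
  ],
  paths: {
    bitmaps_reference: 'backstop_data/bitmaps_reference', // approved screenshots
    bitmaps_test: 'backstop_data/bitmaps_test',           // this run's screenshots
    html_report: 'backstop_data/html_report',
  },
};
```

Running backstop approve then promotes the latest screenshots to become the new reference images - the "accepting differences" part of the workflow mentioned above.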

Finally Amy spent some time on Cypress. We've used this tool at endjin and liked it a lot (especially those of us who have used Selenium in the past!).

Whilst it wasn't quite what I was expecting, this was a good session and introduced me to some new tools that I'll be looking to use on future projects.

Beyond REST with GraphQL in .NET Core

After lunch it was on to GraphQL with a talk from Irina Scurtu. I didn't know much about GraphQL going in, although it's obviously a bit of an industry buzzword and was touted in one of my Day 1 sessions as a potential solution to the problem of under- or over-fetching data that comes with REST APIs.

Irina started with a quick run through of REST APIs - or, more specifically, of how we abuse the term to mean "APIs that return JSON" without correctly using status codes, content negotiation, HATEOAS and so on. I was surprised by the number of people in the room who owned up to being in this category. For me, this flagged that before we look for solutions to the "problems" that implementing REST APIs gives us, we should be careful to ensure the problems aren't self-inflicted.

She then looked specifically at the over- and under-fetching problem, and moved on to GraphQL with a brief history and an overview of what it is and isn't. The first point is that it completely abandons REST semantics, which is hardly surprising but has some knock-on effects - for example, since data retrieval is now done via POST requests, you can no longer take advantage of standard HTTP caching mechanisms.
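
To make that concrete, here's an illustrative sketch of a GraphQL request from a TypeScript client (the endpoint and schema are invented for the example): the query names exactly the fields the client wants, which addresses over-fetching, but everything travels as a POST to a single endpoint, which is why intermediate HTTP caches can't help:

```ts
// Illustrative GraphQL request: the query names exactly the fields the
// client needs (no over-fetching), but it's a POST to a single endpoint,
// so standard HTTP caching can't be used.
const query = `
  query {
    conference(id: "ndc-london") {
      name
      talks { title speaker }
    }
  }`;

async function fetchTalks() {
  const response = await fetch('https://example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });

  const { data } = await response.json();
  return data.conference.talks;
}
```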

After a run through of the core concepts of GraphQL - queries, mutations and subscriptions - Irina got down to some demos using ASP.NET Core and various UIs, showing how to build a GraphQL-enabled web app, define types, execute queries and so on. She also spent some time talking about when you should and shouldn't consider using it. The interesting point here was that a good reason to use it is when you know less about what consumers of your API will need. This suggests that if you have control of (or at least a strong knowledge of) your consumers, you're better off building truly RESTful APIs that meet their needs - even if you end up building a UI-specific REST API over the top of your more general-purpose APIs.

Whilst I came away with way more questions than I went in with, this was a really useful introduction to the basics of GraphQL.

Turbocharged: Writing High-Performance C# and .NET Code

Performance is something I've long been interested in - in fact it was the subject of my very first blog post. I've been meaning to look at some of the new performance-oriented constructs from .NET Core for a while now, so going to this talk by Steve Gordon was a no-brainer for me.

https://twitter.com/quorralyne/status/1222918680222359556

It opened with a look at some of the key aspects of performance: execution time, throughput and memory usage. The last of these matters because, despite memory allocation being fast, we pay the price in garbage collections, which can negatively impact performance due to the way the GC works. Steve also reinforced the point that when it comes to performance tuning everything is contextual, and warned against the dangers of over-optimising code at the expense of readability or maintainability.

The next section of the talk covered the importance of approaching performance work scientifically, taking one step at a time and measuring the impact of each change. This is critically important: making multiple changes at once, or not taking measurements, means you're essentially working blindfolded, with no real idea of what impact your changes are having. To this end, he introduced BenchmarkDotNet, a tool some of the endjin team have used in their recent work but not one that I've ever used in anger.

Steve then moved on to a more detailed look at some of the newer features - Span<T>, Memory<T>, ArrayPool<T> and the System.IO.Pipelines namespace - finishing up with a look at the new JSON serialisation capabilities available in System.Text.Json.

His talk was well organised, with before-and-after code examples and the corresponding benchmarks, reinforcing the message that performance gains often come at the price of increased code complexity, and continually bringing us back to the point that the need for performance should always be considered in context.

The final section of the talk was excellent, bringing the discussion back to the business value of performance gains. It was oriented around getting business buy-in for performance work, something I consider critical - in the sense that all of our work has to be considered in terms of the business value it brings. There's no point spending weeks on performance optimisations if it's only going to save a few hundred pounds a year. On the other hand (and this is something that wasn't mentioned in the talk, but ties into what I knew my endjin colleagues would be talking about in their session on day 3), when you're running serverless technology at high scale, small improvements can result in huge cost savings. As Steve said at the beginning of the talk, everything depends on the context, and the important thing is to be scientific and measure everything.

Single Page Architectures with VueJS and ASP.NET Core

This talk was a bit of a punt on my part, and ended up being a little more basic than I'd hoped. I went because, whilst I've seen various examples of ways to use VueJS, I haven't seen many ASP.NET Core examples that I've really liked, so I thought it was worth getting another perspective. This was provided by Kevin Griffin, who gave a bit of ASP.NET history going back as far as WebForms (way to make me feel old) and then did the same for SPAs. He then took a look at the basic architecture of an SPA, and I was encouraged by his statement that a front-end SPA and its supporting back-end API should really be considered two separate applications. I've worked on quite a few projects in the past where we've treated them as one, but my current view is that, where possible, deploying your SPA as a static site (served out of something like Azure Blob Storage with static website hosting enabled) and hosting the API using a serverless option such as Azure Functions is a great way to go. I recently found that this approach has its own buzzword: Jamstack.

Kevin ran through the steps to get the Vue and ASP.NET Core applications up and running using the CLI tools for each, and then showed some ways in which the two could be brought back together to appear as a single application, despite being separate. As I mentioned, my current view is that they should stay separate, but I'm sure there are scenarios where it's beneficial to deploy them as a single unit. When doing this, there are two scenarios to deal with. In production, things are relatively simple: you can use whatever tools you're running on the Vue side (e.g. Webpack) to transpile and package your Vue app directly into the wwwroot folder of the ASP.NET Core app. On a developer machine, the easiest way to make things behave the same way is to proxy requests for one of the "apps" (i.e. ASP.NET Core or Vue.js) through the other.
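
As a sketch of one direction of that development-time proxying - my illustration rather than Kevin's code, and the port is an assumption - the Vue CLI's dev server can forward API requests through to the locally running ASP.NET Core app:

```js
// vue.config.js - sketch of proxying API calls from the Vue CLI dev
// server through to a locally running ASP.NET Core app (the port is
// an assumption; use whatever your API listens on).
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'https://localhost:5001', // the ASP.NET Core app
        changeOrigin: true,
        secure: false, // accept the ASP.NET Core dev certificate
      },
    },
  },
};
```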

What I wasn't aware of prior to this talk is that there's ASP.NET Core middleware specifically for the other direction, in the shape of the Microsoft.AspNetCore.SpaServices.Extensions assembly. This provides various tools for Angular, React and so on, as well as the proxying middleware Kevin showed us here. This was useful to learn.

Kevin continued by getting into some of the detail of making API calls using Axios and managing Vue.js state using Vuex, and finished with some options for authentication. All in all, I didn't learn a huge amount from this session, but I thought it was well structured and delivered for those with less prior knowledge of the subject areas.
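
For a flavour of the Axios-plus-Vuex combination - again my own minimal sketch rather than Kevin's code, written against the Vue 3 era Vuex 4 API with an invented endpoint - an action fetches data from the API and commits it to the store, so components only ever read from the store:

```ts
// Minimal Vuex store sketch: an action fetches from the API via Axios
// and commits the result, so components only ever read from the store.
// The endpoint and types are invented for the example.
import axios from 'axios';
import { createStore } from 'vuex';

interface Talk { title: string; speaker: string; }
interface State { talks: Talk[]; }

export const store = createStore<State>({
  state: () => ({ talks: [] }),
  mutations: {
    setTalks(state: State, talks: Talk[]) {
      state.talks = talks;
    },
  },
  actions: {
    async loadTalks({ commit }) {
      const { data } = await axios.get<Talk[]>('/api/talks');
      commit('setTalks', data); // components react to the updated state
    },
  },
});
```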

Summary

By the end of day 2 I was getting a little jaded from all the discussions of UI, but I think the sessions have, on balance, been more immediately useful than yesterday's. I'm looking forward to the final day, especially since endjin's very own Jess Panni and Carmel Eve are talking about a hugely exciting project we worked on last year - it'll be well worth a listen.

Jonathan George

Software Engineer IV

Jon is an experienced project lead and architect who has spent nearly 20 years delivering industry-leading solutions for clients across multiple industries including oil and gas, retail, financial services and healthcare. At endjin, he helps clients take advantage of the huge opportunities presented by cloud technologies to better understand and grow their businesses.