By Matthew Adams, Co-Founder
How .NET 8.0 boosted JSON Schema performance by 20%

At endjin, we maintain Corvus.JsonSchema, an open source high-performance library for serialization and validation of JSON using JSON Schema.

Its first release was on .NET 7.0, and its performance was pretty impressive. Ian Griffiths has given a number of talks on the techniques it uses to achieve its performance goals.

Since then, the .NET 8.0 runtime has shipped, and with no code changes at all, we get a "free" performance boost of ~20%!

We have also released a .NET 8.0 version (v2.0+) which takes advantage of .NET 8.0 features for more performance gains - and also paves the way for future gains.

In this post, we are going to take a look at some benchmarks that compare the .NET 7.0 build running on both .NET 7.0 and .NET 8.0, and then compare those results with the latest build running on .NET 8.0.

Finally, we'll take a look into the future, and where there may be further opportunities for improvement.

First, here's a graph showing JSON Schema performance across our core benchmarks from .NET 7 through to .NET 8 (and a look ahead to what could be done in .NET 9).

The JSON Schema performance graph shows the benchmark results we will examine in the section below.

Let's explore those benchmarks in more detail.

Benchmark results

We run 3 different benchmarks on an Intel Core i7-13800H.

  1. Validating a small JSON document. This is typical of a small JSON payload in a web API. It includes some strings, some formatted strings (e.g. email, date), and some numeric values.
{
    "name": {
        "familyName": "Oldroyd",
        "givenName": "Michael",
        "otherNames": [],
        "email": "michael.oldryoyd@contoso.com"
    },
    "dateOfBirth": "1944-07-14",
    "netWorth": 1234567890.1234567891,
    "height": 1.8
}
  2. Validating a large array consisting of 10,000 of these small documents.

  3. Validating a large array of this kind, but collecting JSON Schema annotations as we go. (A sketch of what these validation calls look like follows this list.)

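Before we get to the numbers, here is a rough sketch of the shape of a validation call in Corvus.JsonSchema. The library generates .NET types from a JSON Schema ahead of time; the Person type name below, and the exact Parse/Validate member names, are assumptions made for illustration (based on the library's documented pattern of source-generated types), so treat this as a sketch rather than a drop-in snippet.

// Sketch only: assumes a "Person" type has been generated from the person schema
// with the Corvus.JsonSchema type generator. The member names used here (Parse,
// Validate, ValidationContext, ValidationLevel) follow the library's documented
// pattern, but check them against the code generated for your schema version.
using System;
using Corvus.Json;

string json = """
    {
        "name": {
            "familyName": "Oldroyd",
            "givenName": "Michael",
            "otherNames": [],
            "email": "michael.oldroyd@contoso.com"
        },
        "dateOfBirth": "1944-07-14",
        "netWorth": 1234567890.1234567891,
        "height": 1.8
    }
    """;

// Parsing wraps the underlying System.Text.Json data rather than building a
// separate object model, which is where much of the performance comes from.
Person person = Person.Parse(json);

// Flag-level validation answers "is it valid?" as cheaply as possible; the
// "with annotation collection" benchmarks exercise a more verbose level.
ValidationContext result = person.Validate(ValidationContext.ValidContext, ValidationLevel.Flag);

Console.WriteLine(result.IsValid ? "Valid" : "Invalid");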
First, validating the small document:

Method | Runtime  | Mean     | Error     | StdDev    | Allocated
Small  | .NET 7.0 | 1.344 us | 0.0059 us | 0.0055 us | -
Small  | .NET 8.0 | 1.095 us | 0.0227 us | 0.0651 us | -

For free, .NET 8.0 runs the same code 18% faster.
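
For context, the Method/Runtime/Mean/Error/StdDev/Allocated columns in these tables are BenchmarkDotNet output. The sketch below shows the general way to run one benchmark method on both runtimes so that each appears as its own row; the class and method bodies are illustrative placeholders, not our actual benchmark suite.

// Illustrative only: a multi-targeted project (net7.0;net8.0) plus two SimpleJob
// attributes makes BenchmarkDotNet run the same method on both runtimes and
// report a row per runtime, as in the tables in this post.
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]                    // adds the Gen0/Gen1/Allocated columns
[SimpleJob(RuntimeMoniker.Net70)]
[SimpleJob(RuntimeMoniker.Net80)]
public class ValidationBenchmarks
{
    [Benchmark]
    public bool Small()
    {
        // The real benchmark validates the small person document shown earlier;
        // this placeholder just stands in for that call.
        return ValidateSmallDocument();
    }

    private static bool ValidateSmallDocument() => true;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ValidationBenchmarks>();
}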

Next, it's the large document with simple validation. There is a small allocation overhead here, which, interestingly, is 1 B less on .NET 8.0.

Method | Runtime  | Mean     | Error    | StdDev   | Gen0 | Allocated
Large  | .NET 7.0 | 13.72 ms | 0.053 ms | 0.050 ms | -    | 13 B
Large  | .NET 8.0 | 10.22 ms | 0.180 ms | 0.234 ms | -    | 12 B

Again, with no code changes, we get a 25% performance improvement.

Lastly, the large document with annotation collection. You'll notice that there is some allocation to store all the annotations!

Method              | Runtime  | Mean     | Error    | StdDev   | Gen0 | Gen1 | Allocated
Large w/ collection | .NET 7.0 | 15.09 ms | 0.066 ms | 0.059 ms | 62.5 | -    | 937.51 KB
Large w/ collection | .NET 8.0 | 11.46 ms | 0.065 ms | 0.055 ms | 62.5 | -    | 937.51 KB

In this case, .NET 8.0 runs the same code 24% faster.

In our latest .NET 8.0 build, we see identical memory allocation characteristics on all benchmarks, and comparable performance on our two large array documents.

Method       | Runtime  | Mean     | Error    | StdDev   | Gen0 | Allocated
Large        | .NET 8.0 | 10.22 ms | 0.180 ms | 0.234 ms | -    | 12 B
Large .NET 8 | .NET 8.0 | 10.11 ms | 0.032 ms | 0.026 ms | -    | 12 B

Method                     | Runtime  | Mean     | Error    | StdDev   | Gen0 | Gen1 | Allocated
Large w/ collection        | .NET 8.0 | 11.46 ms | 0.065 ms | 0.055 ms | 62.5 | -    | 937.51 KB
Large w/ collection .NET 8 | .NET 8.0 | 11.34 ms | 0.040 ms | 0.034 ms | 62.5 | -    | 937.51 KB

Interestingly, however, we see another 10% performance boost on our small document.

Method       | Runtime  | Mean    | Error    | StdDev   | Allocated
Small        | .NET 8.0 | 1095 ns | 22.70 ns | 65.10 ns | -
Small .NET 8 | .NET 8.0 |  971 ns |  3.22 ns |  2.86 ns | -

Is this the end of the free lunch?

These performance gains are often seen as "free" - and in a sense, they are. But one reason we see such a huge benefit in these libraries is that we have coded them to focus heavily on reducing allocation and reducing copies (and, where possible while maintaining some degree of readability, reducing branching code in our inner loops).

Not allocating memory, and not copying it when you can avoid it, turns out to be the secret sauce behind a lot of performance - once you've picked a sensible algorithm for the job. And in real software, avoiding garbage collections turns out to have a measurable knock-on effect on the system as a whole, even if it looks like small potatoes in the context of any micro-benchmark of some subsystem.

And this coding style is heavily rewarded by the kinds of optimizations that have been rolling out in the .NET runtime over the past few releases.
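
As a general illustration of that style (not Corvus's actual implementation), the snippet below scans a UTF-8 JSON payload for a property name with System.Text.Json's Utf8JsonReader, comparing text in place instead of materializing strings; the payload and property name are invented for the example.

// Illustration of the "don't allocate, don't copy" style, not Corvus's real code.
// Utf8JsonReader is a ref struct that walks the UTF-8 buffer in place, and
// ValueTextEquals compares against the raw bytes, so no intermediate strings
// (and no garbage) are created while scanning.
using System;
using System.Text.Json;

static bool HasEmailProperty(ReadOnlySpan<byte> utf8Json)
{
    var reader = new Utf8JsonReader(utf8Json);
    while (reader.Read())
    {
        if (reader.TokenType == JsonTokenType.PropertyName &&
            reader.ValueTextEquals("email"u8))
        {
            return true;
        }
    }

    return false;
}

ReadOnlySpan<byte> payload = """{"email":"michael.oldroyd@contoso.com"}"""u8;
Console.WriteLine(HasEmailProperty(payload));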

So, we don't think it is the end of the free lunch - in fact, there is still a whole heap (pun only intended retrospectively) of improvements that can be made to .NET in the 9.0 timeframe to continue this trend.

However, there are also things that can be done that require (minimal) code changes for quite significant benefits.

For example, we have an API proposal for System.Text.Json that, if implemented, would give us another 15% performance boost.

Method        | Mean
Small .NET 8  | 971 ns
Small vFuture | 802 ns

Method        | Mean
Large .NET 8  | 10.110 ms
Large vFuture | 8.498 ms

Method                      | Mean
Large w/ collection .NET 8  | 11.340 ms
Large w/ collection vFuture | 9.625 ms

(Feel free to go and comment on that GitHub Issue if you'd like to see this implemented!)

Matthew Adams, Co-Founder

Matthew was CTO of a venture-backed technology start-up in the UK & US for 10 years, and is now the co-founder of endjin, which provides technology strategy, experience and development services to its customers who are seeking to take advantage of Microsoft Azure and the Cloud.