C# serialization with JSON Schema and System.Text.Json
Do you want a more efficient way to build System.Text.Json-based APIs that shred, map, merge, filter, compose, and otherwise process and validate JSON data from various sources, using idiomatic dotnet types and JSON schema? OK - read on...
JSON and dotnet
System.Text.Json introduced a new way of interacting with JSON documents in dotnet. It has a layered model, with low-allocation readers and writers underpinning a serialization framework with comparable functionality to the venerable (and battle-hardened) Newtonsoft JSON.NET.
One feature it doesn't currently support is JSON schema validation.
There are excellent third-party solutions to this - from Newtonsoft's paid-for extension to JSON.NET, to Greg Dennis's Json-Everything, which has great validation support over System.Text.Json.
There are also validators with code-generation for dotnet types from JSON schema, including Rico Suter's NJsonSchema.
These all offer excellent capabilities, and good performance, in a variety of common scenarios, but when we went back to look at how we use JSON and JSON schema in our solutions, we discovered a mismatch.
Open API and service implementations
We typically model our API using the Open API specification, using common ReST-and-HTTP-friendly patterns for e.g. paging/continuations, resources, and hyperlinking. These are expected to be consumed by a variety of different clients.
This then needs to be translated into a "back end" service implementation. In an ideal world, we would have an idiomatic type model in our implementation platform of choice (in this case dotnet) that our developers can use to implement that service.
Those implementations follow a common pattern - they call into one or more other 1st and 3rd-party HTTP-based APIs (especially storage), variously shredding, combining, and validating the (typically JSON) data they retrieve.
Doing this involves a lot of serialization, deserialization, validation, and data copying. In fact, a large amount of the code (and compute time) in our applications is taken with this shredding, mapping, and copying.
And a lot of that is harder than it needs to be. Not least creating the dotnet types that match our JSON schema.
Not all type systems are alike
It is common for APIs to be built "code first" and then use a tool like Rico's NSwag to generate Open API documentation from the implementation.
This is good as far as it goes, but we prefer an "Open API-first" approach.
There are a few reasons for this:
- It allows us to collaborate on API definitions in a document focused on just that
- It provides a common language for all collaborators, regardless of preferred implementation language (typically Typescript in the frontend and C# in the backend for us)
- It provides richer, more declarative constraints than you can cleanly express with 'code first' (we'll see some examples of this later)
- It can be consumed by other tooling without requiring a build (e.g. to stand up test data, generate interactive documentation).
But that is also where the problems begin.
The C# type system is not quite like JSON schema. JSON schema is more like a "duck-typing" model. It describes the "shape" of the document with statements like
- "it must look like this or like this or like this" (anyOf/oneOf)
- "if it looks like this, then it will also look like that, otherwise it looks like this other thing" (if/then/else)
- "if it has one of these properties, it must look like this" (dependentSchemas)
- "it must be a number or an object" (array of type)
This lends itself well to modelling in languages like Typescript which natively support union types, but slightly less so in C#. You have to do a bit of work.
It also has an interesting mix of constraints that you can generate statically at compile time, and constraints you have to resolve at runtime.
For example, if you use the allOf constraint, you can reason about all of the schema in the array, and, for example, create a type with the union of the properties represented by each of those types.
Even this apparently simple statement is challenging. It is possible to write schema that layer constraints on e.g. the same property in different schema, and you must have a strategy to reduce those too.
But for an if constraint you will have to validate the if schema against the object, and then you know that the instance can be represented in the form defined by the corresponding then or else schemas - this can only be done at runtime.
Let's look at this little excerpt from the Open API metaschema, as it illustrates this quite nicely.
"path-item-or-reference": {
"if": {
"required": [
"$ref"
]
},
"then": {
"$ref": "#/$defs/reference"
},
"else": {
"$ref": "#/$defs/path-item"
}
}
This is a beautifully expressive piece of JSON schema, in my opinion. It says that "if the instance has a property called $ref, then this must validate against the reference schema. Otherwise, it must validate against the path-item schema."
It allows us to represent a path-item as either an instance of a path-item, or a reference to an instance of a path-item. This is a very common pattern in hypermedia APIs.
But it doesn't translate well into dotnet types - and it is certainly not going to be generated from dotnet types by a tool like NSwag. (At least, not without considerably more effort than anyone has been prepared to put in so far!)
But we want a high-fidelity dotnet type model to manipulate the document in a convenient way. Something logically like this:
if path-item-or-reference is-a reference
// ... use as a reference
else if path-item-or-reference is-a path-item
// ... use as a path-item
How do we achieve this?
Serialization vs. Accessors
The traditional approach to jumping the gap between the world of JSON documents, and C# code is serialization. This involves mapping from the document into concrete C# types. It is essentially a "copy-and-transform" model where we throw away the original JSON representation in the process, and start working with a C# model instead.
It has the considerable advantage of simplicity for the end user. But there are a couple of issues.
- It intrinsically creates a copy of the underlying data
- It loses the original fidelity of the underlying data
The first is a barrier to performance and scale. The second is a barrier to implementing the support for the richness of JSON schema we talked about above.
The copying inherent in serialization is also a frequently unnecessary overhead. A very common pattern in API implementation is to take an input document, filter out some values, aggregate it with some other information from elsewhere, and then write it all back out again into some other API (often a storage service).
The vast majority of the information is essentially copied, unchanged, from source to destination.
And yet we have deserialized it, made further copies of it, and serialized it again, just to get it from A to B.
That is clearly wasted effort... except where it is essential because we do need to inspect parts of it to perform our business logic transformations.
So, we typically pay the cost of serialization of the entire thing, in order to inspect much smaller parts of it.
But there is another option, which still gives us rich dotnet types, but without excessive copying. And that is to create an accessor and builder model.
JSON primitives
JSON defines a small number of value primitives:
- object - a map of strings to any value primitives
- array - an ordered collection of value primitives
- string - a utf8-encoded string value
- number - a numeric value (with integer as a pseudo-primitive for numbers where floor(number) == number)
- boolean - true or false (and nothing else! no truthy 1s and 0s)
- null - a null value
Ultimately, any information in a JSON document can be defined in terms of these primitives. And, conveniently, these map more-or-less nicely into dotnet primitives. I say more-or-less because they are not quite the same things (there is nothing quite like number in dotnet, for example; but near enough!)
JsonElement
System.Text.Json offers one such mapping into dotnet. It has a low-level accessor model over these primitives with the JsonElement type. It has a ValueKind property which tells you which of these primitives it represents (along with Undefined for not-there-at-all; as distinct from there-but-null).
I say accessor model, because JsonElement itself is a readonly struct which is backed by the actual underlying memory representing the JSON document being read. It doesn't "deserialize" the underlying data in the traditional sense. Instead, it presents a window onto it. You can, on demand, access the idiomatic dotnet representations of these primitives with accessors like GetBoolean() and GetString(), which return dotnet types like bool and string, but until then they are just an underlying span of utf8-encoded bytes.
When you copy a JsonElement, it doesn't copy that underlying memory - it is more like a reference to it. And being a readonly struct, we get the advantages of stack allocation, and we avoid unnecessary copying when passing it into methods, or when storing it as a readonly field in another readonly struct.
This makes them highly efficient. But very generic.
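To make that "window onto the data" idea concrete, here is a minimal sketch using just the standard System.Text.Json API: we parse a document and poke at it through JsonElement, only materialising dotnet primitives at the point of access.

using System;
using System.Text.Json;

// Parse once; JsonDocument owns the underlying UTF8 buffer.
using JsonDocument doc = JsonDocument.Parse(@"{ ""name"": ""Ada"", ""age"": 36 }");

JsonElement root = doc.RootElement;

// Accessing a property gives us another lightweight view, not a copy.
if (root.TryGetProperty("name", out JsonElement name) &&
    name.ValueKind == JsonValueKind.String)
{
    // Only here do we materialise a dotnet string from the UTF8 bytes.
    Console.WriteLine(name.GetString());
}

// Numbers stay as UTF8 text until we ask for a concrete representation.
if (root.TryGetProperty("age", out JsonElement age) &&
    age.TryGetInt32(out int ageValue))
{
    Console.WriteLine(ageValue);
}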
We could imagine a richer family of types like JsonElement, corresponding to each of the JSON primitives:
- JsonArray
- JsonObject
- JsonString
- JsonNumber
- JsonBoolean
- JsonNull
If you create these as readonly struct wrappers for JsonElement, with implicit conversions to string, double, int, bool, etc., equality operators, and the ability to EnumerateObject() properties, or EnumerateArray() items as appropriate, you now have strongly typed entities over the JSON primitives, that you can use (more or less) exactly like your dotnet primitives.
Formatting primitives
Why doesn't System.Text.Json provide those primitives? Because, as it stands, it has no way of knowing exactly which types to provide; you need to know something about the structure of the document in advance to be able to do that.
JSON schema is what gives us the ability to compose these primitives into higher level structures.
Most obviously, it gives us the type constraint. This allows us to determine ahead of time which primitive type or types we are expecting. (As it turns out, we can usefully infer other "implicit" type information from the presence of other constraints, but type is the simplest.)
We might then look at the format constraint, which allows us to constrain to higher-level constructs like date, duration, email addresses, uuids, etc.
So, we create types for these well-known derived types in the same way, e.g.
- JsonEmail
- JsonUuid
- JsonRelativeJsonPointer
- JsonRegex
- JsonDateTime
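For example, a format-derived wrapper like JsonUuid might be little more than a string wrapper with an extra conversion; a sketch to show the idea rather than the real generated code:

using System;
using System.Text.Json;

// Sketch: a "uuid"-formatted string, bridging to System.Guid on demand.
public readonly struct JsonUuid
{
    private readonly JsonElement jsonElement;

    public JsonUuid(JsonElement jsonElement)
    {
        this.jsonElement = jsonElement;
    }

    public static implicit operator Guid(JsonUuid value)
    {
        // JsonElement can parse the underlying UTF8 text directly as a Guid.
        return value.jsonElement.GetGuid();
    }

    public bool IsValid()
    {
        return this.jsonElement.ValueKind == JsonValueKind.String &&
               this.jsonElement.TryGetGuid(out _);
    }
}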
As with the primitives, these offer conversions to common dotnet types (like DateTimeOffset, Regex and Guid), bridging from the JSON world into dotnet, but that is not all. We also need a way to compose these types.
Composition and code generation
JSON schema defines a number of ways to compose entities, described in the in-place applicator and child applicator sections of the Core specification.
If we apply these rules, we can code generate dotnet types that represent the structure of the schema, with strongly typed accessors for properties, array elements etc.
Here's another piece of the Open API metaschema.
"reference": {
"type": "object",
"properties": {
"$ref": {
"$ref": "#/$defs/uri"
},
"summary": {
"type": "string"
},
"description": {
"type": "string"
}
}
This is the definition of the reference schema referred to in the first example. It defines a schema with properties called summary and description, which are simple strings, and $ref, which is itself a URI - defined as a string with a format of uri.
"uri": {
"type": "string",
"format": "uri"
}
As you might expect, when we code generate the type for reference, we create an object with dotnet property accessors for the properties discovered in the schema.
public readonly struct Reference
{
    public JsonUri Ref
    {
        get
        {
            // ...
        }
    }

    public JsonString Summary
    {
        get
        {
            // ...
        }
    }

    public JsonString Description
    {
        get
        {
            // ...
        }
    }
}
To all intents and purposes, it looks like a regular dotnet type, just as you would have defined for serialization. But the implementations of those accessors return instances of types which are just ephemeral wrappers for the underlying JsonElement, which, as we have already seen, is an ephemeral wrapper for the underlying memory. So we avoid expensive conversions from the raw UTF8 text into concrete dotnet types, until we absolutely have to.
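To give a flavour of what those elided getter bodies might do, here is a hand-written sketch (the real generated code also handles the object backing described later in this post):

using System.Text.Json;

public readonly struct Reference
{
    private readonly JsonElement jsonElement;

    public Reference(JsonElement jsonElement)
    {
        this.jsonElement = jsonElement;
    }

    // Sketch of a generated accessor: look up the raw property and wrap it,
    // deferring any conversion to dotnet types until the caller asks for one.
    public JsonUri Ref
    {
        get
        {
            if (this.jsonElement.TryGetProperty("$ref", out JsonElement value))
            {
                return new JsonUri(value);
            }

            // Property not present: an "undefined" JsonUri.
            return default;
        }
    }
}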
Similar techniques are applied for property and array item enumeration.
Unions and conversions
But what about these more complex models? How can we represent the kinds of union type we saw with our path-item-or-reference? Let's remind ourselves about that schema.
"path-item-or-reference": {
"if": {
"required": [
"$ref"
]
},
"then": {
"$ref": "#/$defs/reference"
},
"else": {
"$ref": "#/$defs/path-item"
}
}
For this little snippet, we can see 4 distinct schema, and so we would build 4 readonly struct types to model them:
- PathItemOrReferenceEntity - for the outer path-item-or-reference
- IfValue - for the inline schema definition of the if constraint
- ReferenceValue - for the #/$defs/reference schema; we have seen that one above (for those of you wondering why we don't also need one for the then schema itself, it turns out that we can always reduce an otherwise empty reference like this to the referred type)
- PathItemValue - for the #/$defs/path-item schema
Now - these types don't have any dotnet polymorphic/inheritance relationship between them (they are readonly structs, which precludes that), but we can still convert freely between them.
Why? How?
Well, as we have seen, they are all just 'views' over the same underlying JSON fragment, backed by a JsonElement. So, if I have a PathItemOrReferenceEntity, I can construct an IfValue, or a ReferenceValue, from its underlying JsonElement.
So, in the code generation process we determine these could-be-a relationships. We add implicit conversions between these types, and also constructors from the underlying primitive types. We end up with code like this, which gives us chains of implicit and explicit conversion, without any formal "inheritance" relationship between the types.
public static implicit operator PathItemOrReferenceEntity(JsonObject value)
{
    return new PathItemOrReferenceEntity(value);
}

public static implicit operator JsonObject(PathItemOrReferenceEntity value)
{
    return value.AsObject;
}
And because (at the risk of repeating myself) these are immutable, stack allocated values, that is an inexpensive operation. So the fact that the PathItemOrReferenceEntity is effectively a (discriminated) union of ReferenceValue or PathItemValue comes more or less for free... providing, of course, we can discriminate which it should be.
Validation
And that is where JSON schema validation comes in.
While any type of this kind can, in practice, be converted to any other type, you don't always want to do that. You want to ensure that the conversion is valid.
So, we generate code to implement the schema validation rules that apply to this entity. Rather than a general purpose validator, the generated Validate()
method can embody only those rules that apply in place. This allows it to be extremely efficient.
We add additional accessors that use validation to help us determine what type conversions are applicable given the data.
public bool IsIfMatchReferenceValue
{
    get
    {
        return this.As<Menes.OpenApi.Document.PathItemOrReferenceEntity.IfEntity>().IsValid();
    }
}
And we can then use those in client code.
// Resolve the reference, or use the embedded value
if (pathItemOrReference.IsIfMatchReferenceValue)
{
    ReferenceValue refVal = pathItemOrReference;
    // ...
}
else
{
    PathItemValue pathItem = pathItemOrReference;
    // ...
}
The implicit conversions take care of the type adaptation, and the underlying JsonElement avoids excessive copying.
You may also want to avoid the intermediate assignment, or some brackety casting, in which case we also generate accessor properties for this kind of union. This is all about "convenience of use".
// Resolve the reference, or use the embedded value
if (pathItemOrReference.IsIfMatchReferenceValue)
{
    DoSomethingToARef(pathItemOrReference.AsReferenceValue.Ref);
}
else
{
    DoSomethingToAPathItemServers(pathItemOrReference.AsPathItemValue.Servers);
}
There are trade-offs in this approach. Because our types are immutable, we can't automatically cache the results of such validation-based "reflection". The caller would need to stash away the result of that IsIfMatchReferenceValue test to avoid evaluating it multiple times. But in most cases, the cost of evaluation is low, and the ease of use benefits are considerable.
Building documents
But that's only half the equation. The other part of the process is building new documents.
Let's work from an example. Imagine we have a Person service that returns basic personal information for someone - their name, for example. And a PersonAddress service that provides the address history for a person.
Now, some UI needs us to provide a service that gives us a person's current name and address.
Let's leave aside the optimisations we can apply in terms of caching, denormalization, projections, or the complexities of sharing identity between services, hypermedia, security and all that important stuff.
Let's pretend we are going to implement this by providing an API that returns something of this shape:
{
  "type": "object",
  "properties": {
    "primaryName": { "$ref": "#/$defs/personName" },
    "address": { "$ref": "#/$defs/address" }
  },
  "additionalProperties": false,
  "$defs": {
    "personName": {
      "type": "object",
      "properties": {
        "firstName": { "type": "string" },
        "lastName": { "type": "string" }
      },
      "additionalProperties": false
    },
    "address": {
      "type": "object",
      "properties": {
        "line1": { "type": "string" },
        "line2": { "type": "string" },
        "line3": { "type": "string" },
        "line4": { "type": "string" },
        "postalCode": { "type": "string" },
        "country": { "type": "string" }
      },
      "additionalProperties": false
    }
  }
}
So, the Person API we are using returns us entities that look like this:
{
  "type": "object",
  "properties": {
    "primaryName": { "$ref": "#/$defs/personName" }
  },
  "additionalProperties": false,
  "$defs": {
    "personName": {
      "type": "object",
      "required": ["lastName"],
      "properties": {
        "firstName": { "type": "string" },
        "middleName": { "type": "string" },
        "lastName": { "type": "string" }
      },
      "additionalProperties": false
    }
  }
}
And the Address service returns us entities that look like this:
{
  "type": "object",
  "properties": {
    "addressHistory": {
      "type": "array",
      "items": { "$ref": "#/$defs/address" }
    }
  },
  "additionalProperties": false,
  "$defs": {
    "address": {
      "type": "object",
      "properties": {
        "line1": { "type": "string" },
        "line2": { "type": "string" },
        "townOrCity": { "type": "string" },
        "region": { "type": "string" },
        "postalCode": { "type": "string" }
      },
      "additionalProperties": false
    }
  }
}
To generate our output, we need to smoosh together the person information with the address information, to generate our result. This is pretty typical of real-world services.
How do we do this? More unions!
The types we generate are actually backed by either a JsonElement or a dotnet type capable of representing the JSON primitive(s) that underlie it. Let's look at a stripped down version of a type which can be represented as an object, for example.
public readonly struct SomeJsonObject
{
    private readonly JsonElement jsonElement;
    private readonly ImmutableDictionary<string, JsonAny>? properties;

    public SomeJsonObject(JsonElement jsonElement)
    {
        this.jsonElement = jsonElement;
        this.properties = default;
    }

    public SomeJsonObject(ImmutableDictionary<string, JsonAny> properties)
    {
        this.jsonElement = default;
        this.properties = properties;
    }

    // ...
}
You can see that this can be backed by either a JsonElement or an immutable dictionary of string property names mapped to a type called JsonAny (which is a type of the kind we've been describing, capable of representing any JSON primitive).
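A property accessor on such a type then just needs to check which backing is in play. Here is a hypothetical sketch of a member inside SomeJsonObject (the method name and the JsonAny constructor are assumptions for illustration):

// Sketch: an accessor that works over either backing.
public JsonAny GetProperty(string name)
{
    // Object-backed: built in memory from an immutable dictionary.
    if (this.properties is ImmutableDictionary<string, JsonAny> props)
    {
        return props.TryGetValue(name, out JsonAny value) ? value : default;
    }

    // Element-backed: a view over the original UTF8 payload.
    if (this.jsonElement.TryGetProperty(name, out JsonElement element))
    {
        return new JsonAny(element);
    }

    return default; // undefined
}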
We also generate a handy factory method for an object so you can build it from its properties.
Here's an example of the Create() factory method we generate from the schema for address in the Address API above.
public static AddressValue Create(
    Menes.Json.JsonString? line1 = null,
    Menes.Json.JsonString? line2 = null,
    Menes.Json.JsonString? townOrCity = null,
    Menes.Json.JsonString? region = null,
    Menes.Json.JsonString? postalCode = null)
{
    var builder = ImmutableDictionary.CreateBuilder<string, JsonAny>();

    if (line1 is Menes.Json.JsonString line1__)
    {
        builder.Add(Line1JsonPropertyName, line1__);
    }

    if (line2 is Menes.Json.JsonString line2__)
    {
        builder.Add(Line2JsonPropertyName, line2__);
    }

    if (townOrCity is Menes.Json.JsonString townOrCity__)
    {
        builder.Add(TownOrCityJsonPropertyName, townOrCity__);
    }

    if (region is Menes.Json.JsonString region__)
    {
        builder.Add(RegionJsonPropertyName, region__);
    }

    if (postalCode is Menes.Json.JsonString postalCode__)
    {
        builder.Add(PostalCodeJsonPropertyName, postalCode__);
    }

    return builder.ToImmutable();
}
Notice how the properties are all optional, so we provide default nulls, and only add the properties if present. (Note that if the property schema allowed the null type, then you could pass an instance of a type called JsonNull to explicitly set a null value - it allows us to maintain that distinction between "not present" and "present but null" that we talked about earlier.)
But we also support the required constraint, in which case we generate a Create() method like this one (from the Person API's personName schema).
public static PersonNameValue Create(
    Menes.Json.JsonString lastName,
    Menes.Json.JsonString? firstName = null,
    Menes.Json.JsonString? middleName = null)
{
    var builder = ImmutableDictionary.CreateBuilder<string, JsonAny>();
    builder.Add(LastNameJsonPropertyName, lastName);

    if (firstName is Menes.Json.JsonString firstName__)
    {
        builder.Add(FirstNameJsonPropertyName, firstName__);
    }

    if (middleName is Menes.Json.JsonString middleName__)
    {
        builder.Add(MiddleNameJsonPropertyName, middleName__);
    }

    return builder.ToImmutable();
}
This helps us fall into a pit-of-success. We're much more likely to create valid documents using these helpers.
So - let's call the Person API and get a person back.
{
  "primaryName": {
    "firstName": "Jonathan",
    "lastName": "Small"
  }
}
And call the PersonAddress API and get an address history back
{
  "addressHistory": [
    {
      "line1": "32 Andaman Street",
      "townOrCity": "London",
      "postalCode": "SE1 3JS"
    },
    {
      "line1": "Wisteria Lodge",
      "line2": "32, Norwood Street",
      "townOrCity": "London",
      "postalCode": "SE3 5JB"
    }
  ]
}
We have various helpers to let us convert to our types from Utf8JsonReader, strings, sequences or buffers of bytes, etc. For demonstration purposes, we'll just parse strings.
PersonEntity person = JsonAny.Parse(@"{
    ""primaryName"": {
        ""firstName"": ""Jonathan"",
        ""lastName"": ""Small""
    }
}");

AddressHistoryEntity addressHistory = JsonAny.Parse(@"{
    ""addressHistory"": [
        {
            ""line1"": ""32 Andaman Street"",
            ""townOrCity"": ""London"",
            ""postalCode"": ""SE1 3JS""
        },
        {
            ""line1"": ""Wisteria Lodge"",
            ""line2"": ""32, Norwood Street"",
            ""townOrCity"": ""London"",
            ""postalCode"": ""SE3 5JB""
        }
    ]
}");
So, now we have two entities that are backed by JsonElements.
For our response, we want to smoosh those together - pick out the relevant address and name, and construct the result.
AddressHistoryEntity.AddressValue address = addressHistory.AddressHistory.EnumerateItems().FirstOrDefault();

var personDetails = PersonDetailsEntity.Create(
    primaryName: person.PrimaryName.As<PersonDetailsEntity.PersonNameValue>(),
    address: address.IsNotNullOrUndefined() ?
        PersonDetailsEntity.AddressValue.Create(
            line1: address.Line1.AsOptional(),
            line2: address.Line2.AsOptional(),
            line3: address.TownOrCity.AsOptional(),
            line4: address.Region.AsOptional(),
            postalCode: address.PostalCode.AsOptional()) : null);
Notice that we can just use our cast method As<T>() to convert the name value from one type to the other, because we know the schema are compatible. And we map the address using the Create() function.
But what has this done under the covers?
First, we can see that our PersonDetails object is using the object backing, not the JsonElement.
And if we look at the address element, that is also using the object backing.
But those individual address properties are all using the original JsonElement backings - we have avoided making copies of those values.
And the primaryName element as a whole is using the JsonElement backing - we have completely avoided creating a copy of that.
This is exactly what we were looking for - interoperability between types generated and used in entirely different schema, with a minimum of copying of the underlying data.
Performance vs. Usability
Now, clearly this approach is a trade-off between performance and usability. We seek the usability of "serialization to and from C# types" with something close to the performance of working directly over the underlying data.
Most often, you will read the entire payload into a JsonDocument in memory, and then start operating over the in-memory copy, just as with serialization.
But in the typical case, we avoid making copies of the vast majority of that data, and end up streaming it straight back out on the output side. Even in this tiny, rather contrived example, we allocate very few new instances, and those we do create are stack allocated.
With a little more work, you can do even better.
If, for example, you had a large result set of comparatively small objects represented as an array (another fairly common scenario), you could read each entity in the array in turn directly from the UTF8 reader, and construct one of these strongly typed entities over that portion of the buffer, avoiding holding the whole (perhaps enormous...perhaps unending...) array in memory.
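As a very rough sketch of that streaming shape, using only System.Text.Json primitives (stream buffering details are elided, and the entity type and its JsonElement constructor are assumptions for illustration):

using System;
using System.Text;
using System.Text.Json;

// Sketch: walk a large JSON array and materialise one element at a time.
ReadOnlySpan<byte> utf8Payload = Encoding.UTF8.GetBytes(@"[{""line1"":""a""},{""line1"":""b""}]");
var reader = new Utf8JsonReader(utf8Payload);

reader.Read(); // StartArray
while (reader.Read() && reader.TokenType != JsonTokenType.EndArray)
{
    // Parse just this element into its own JsonDocument...
    using JsonDocument element = JsonDocument.ParseValue(ref reader);

    // ...and wrap it in a strongly typed view, without holding the whole array in memory.
    var address = new AddressValue(element.RootElement);
    // ... process address ...
}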
We also give you utility functions for setting and clearing properties, adding and removing items from collections, etc (which return modified copies of these immutable types).
Again, these trade a little bit of performance for more familiar usage patterns, but you only pay the performance hit if you choose to use them.
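For instance, a hypothetical SetProperty helper would hand you back a new, object-backed instance rather than mutating in place (the helper name and string conversion here are illustrative, not the exact API):

// Sketch: "setting" a property returns a modified copy of the immutable value.
PersonNameValue name = PersonNameValue.Create(lastName: "Small");
PersonNameValue updated = name.SetProperty("firstName", "Jonathan");

// 'name' is untouched; only 'updated' carries the new property.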
Compatibility and Benchmarking
Our validation passes all the 2,500-and-odd tests in the JSON Schema Test Suite for draft2019-09 and draft2020-12, including most of the optional tests.
We also generate micro-benchmarks for all of those tests, comparing against Newtonsoft schema validation, and we are comparable (or better!) in almost all cases, though we haven't started aggressive micro-optimisation work as yet. It is notable that we become more performant as the scenario grows more complex. More significantly, our approach typically allocates less than a tenth of the memory that the Newtonsoft code does. These are fairly typical results from the overall result set:
Benchmark Group | Benchmark | Validator | Time | GC | Allocated |
---|---|---|---|---|---|
UniqueItemsValidation | Benchmark0 | ValidateMenes | 1,492.1 ns | 0.0095 | 208 B |
UniqueItemsValidation | Benchmark0 | ValidateNewtonsoft | 1,380.4 ns | 0.2117 | 3984 B |
UniqueItemsValidation | Benchmark1 | ValidateMenes | 1,389.9 ns | 0.0095 | 208 B |
UniqueItemsValidation | Benchmark1 | ValidateNewtonsoft | 1,677.2 ns | 0.2289 | 4312 B |
UniqueItemsValidation | Benchmark10 | ValidateMenes | 2,088.4 ns | 0.0153 | 320 B |
UniqueItemsValidation | Benchmark10 | ValidateNewtonsoft | 2,597.0 ns | 0.2861 | 5368 B |
UniqueItemsValidation | Benchmark11 | ValidateMenes | 1,221.1 ns | 0.0095 | 208 B |
UniqueItemsValidation | Benchmark11 | ValidateNewtonsoft | 1,317.8 ns | 0.2117 | 3984 B |
UniqueItemsValidation | Benchmark12 | ValidateMenes | 1,218.5 ns | 0.0095 | 208 B |
UniqueItemsValidation | Benchmark12 | ValidateNewtonsoft | 1,317.5 ns | 0.2117 | 3984 B |
UniqueItemsValidation | Benchmark13 | ValidateMenes | 2,059.7 ns | 0.0114 | 272 B |
UniqueItemsValidation | Benchmark13 | ValidateNewtonsoft | 2,744.0 ns | 0.2708 | 5072 B |
UniqueItemsValidation | Benchmark14 | ValidateMenes | 1,994.5 ns | 0.0114 | 272 B |
UniqueItemsValidation | Benchmark14 | ValidateNewtonsoft | 2,480.1 ns | 0.2708 | 5072 B |
UniqueItemsValidation | Benchmark15 | ValidateMenes | 2,949.0 ns | 0.0191 | 416 B |
UniqueItemsValidation | Benchmark15 | ValidateNewtonsoft | 4,820.4 ns | 0.3433 | 6528 B |
UniqueItemsValidation | Benchmark16 | ValidateMenes | 3,043.5 ns | 0.0191 | 416 B |
UniqueItemsValidation | Benchmark16 | ValidateNewtonsoft | 4,375.0 ns | 0.3433 | 6528 B |
UniqueItemsValidation | Benchmark17 | ValidateMenes | 4,180.4 ns | 0.0153 | 416 B |
UniqueItemsValidation | Benchmark17 | ValidateNewtonsoft | 3,630.7 ns | 0.3357 | 6280 B |
UniqueItemsValidation | Benchmark18 | ValidateMenes | 2,827.1 ns | 0.0191 | 416 B |
UniqueItemsValidation | Benchmark18 | ValidateNewtonsoft | 3,974.1 ns | 0.3586 | 6800 B |
You can examine the data in more detail in an example benchmark run in Excel format.
Feel free to have a look at the prototype code which is sitting on a branch in github. We'd love to get some feedback on where to go from here. (Clue: we're going to implement OpenApi service generation.)
As an aside - one nice feature of our implementation is that it is self-bootstrapping. We use our code generation to generate the Menes.Json.Schema types from the JSON metaschema itself. I find this to be excessively satisfying.