By Ian Griffiths, Technical Fellow I
C# 11.0 preview: parameter null checking

It's a little early to be stating which features will be shipping in C# 11.0. I'm still working on the finishing touches to my Programming C# 10.0 book; Microsoft hasn't even released a preview SDK offering C# 11.0 yet. However, there's one feature that looks likely to make it in. Microsoft just merged a hefty PR using the new !! operator.

The response to this on Twitter has been fascinating, because it has revealed that a lot of people have some quite odd ideas about nullable reference types. In this post I'm going to describe what this new language feature does, and how it relates to and is distinct from nullable reference types. This will reveal how some of the opinions being voiced are based on misunderstandings.

Parameter null checking

Firstly, what is this new language feature? It is new syntax for throwing ArgumentNullException. That is all.

This does not add any new capabilities to C#. On the contrary, the whole point of this feature is that it's for something we already do a lot. Something we do so much that having a more concise way of doing it would be handy. (Something the .NET runtime source code was apparently doing in at least 17,000 places!)

Here's the old-school way of rejecting null arguments:

public static void Greet(string name)
{
    if (name is null)
    {
        throw new ArgumentNullException(nameof(name));
    }

    Console.WriteLine("Hello, " + name);
}

or if you're able to target .NET 6.0 or later, you might write this slightly more succinct version:

public static void Greet(string name)
{
    ArgumentNullException.ThrowIfNull(name);

    Console.WriteLine("Hello, " + name);
}

I recently wrote a blog post about this new ThrowIfNull helper in which I explain how it exploits the CallerArgumentExpression feature added to C# 10.0.
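
In case you haven't read that post, here's a minimal sketch of the basic idea. (This NullGuard helper is hypothetical, written purely for illustration; the real ArgumentNullException.ThrowIfNull added in .NET 6.0 is more carefully tuned, but the principle is the same.)

using System;
using System.Runtime.CompilerServices;

public static class NullGuard
{
    // The compiler fills in paramName with the source text of whatever expression
    // was passed as 'argument' at the call site, e.g. "name".
    public static void ThrowIfNull(
        object? argument,
        [CallerArgumentExpression("argument")] string? paramName = null)
    {
        if (argument is null)
        {
            throw new ArgumentNullException(paramName);
        }
    }
}

So NullGuard.ThrowIfNull(name); throws an ArgumentNullException whose ParamName is "name", without the caller having to write nameof(name) at every call site.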

C# 11.0 adds an even more succinct way to express this:

public static void Greet(string name!!)
{
    Console.WriteLine("Hello, " + name);
}

And that's pretty much all there is to this new language feature. The Parameter Null Checking proposal goes into a lot of detail, as language feature specifications generally have to, but the equivalence of these snippets above essentially tells you everything you need to know.
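
Since the check is nothing more than compiler-generated code at the top of the member, it composes with other member kinds too. For example (a sketch based on the proposal as it currently stands, so the details could change before anything ships), you can apply it to a constructor parameter, where a hand-written check is particularly awkward to fit into an expression-bodied member:

using System;

public class Greeter
{
    private readonly string _name;

    // The generated check runs before the constructor body executes,
    // so _name can never end up being assigned null here.
    public Greeter(string name!!) => _name = name;

    public void Greet() => Console.WriteLine("Hello, " + _name);
}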

What the !! does this have to do with nullable reference types?

Not much.

First of all, you don't need to enable nullable reference types (NRTs) to use this. This makes sense, given that developers had been writing C# code that throws ArgumentNullException for nearly 20 years before NRTs came along. If you've got some C# code that doesn't use the NRT feature, and which throws ArgumentNullException in lots of places, you might well want to use this new syntax without being forced to adopt NRTs.

There are a couple of ways in which !! does interact with nullable reference types, but neither is especially profound. One is mentioned in the feature proposal I linked to above: it says that the nullability analysis will consider a !! parameter to be not null at the start of the method (even if the parameter type is nullable). This is entirely consistent with what would happen if you wrote the equivalent code without the feature, e.g.:

public static void Greet(string? name)
{
    if (name is null)
    {
        throw new ArgumentNullException(nameof(name));
    }

    // The C# compiler will have determined that name is not null here.
    Console.WriteLine("Hello, " + name);
}

So this isn't some special interaction between !! and NRTs. It's just an upshot of the fact that !! tells the compiler to emit something that has the same effect as the code shown above.

The other way in which !! and nullable reference types have a connection is that the compiler will warn you if you do something daft, such as this:

public static void Greet(string? name!!)
{
    Console.WriteLine("Hello, " + name);
}

That code will produce this compiler warning:

warning CS8995: Nullable type 'string?' is null-checked and will throw if null.

It's pointing out that this is a combination that doesn't seem to make sense. The method signature declares that null is an acceptable input, and yet you've instructed the compiler to generate code that will throw an exception if null is passed. So the implementation will behave in a way that is at odds with what the method signature appears to promise.

Again, this isn't particularly profound. The compiler doesn't currently produce a warning for the preceding example that also takes a string? and throws the exception in the conventional way. But you could imagine a code analyzer that did produce exactly such a warning, because both methods do the exact same strange thing. They just use different syntax to achieve it.

So why bring up nullable reference types?

Because people seem to be confused.

There is, I suppose, a third way in which !! and nullable reference types could be said to be related: they are both syntaxes in which we add punctuation marks to our code to say things relating to null. I'd call that sloppy thinking, and I don't believe analysis that shallow tends to produce useful insights. On the contrary, it seems to have produced some wrong-headed arguments.

Microsoft's David Fowler posted a link to the aforementioned !! PR on Twitter, and the resulting thread caused a bit of a stir. There were various objections, some aesthetic—some people find !! ugly—and some a bit deeper. In this post I'm going to focus on comments that fell into one of three categories, because they seem to be based on misunderstandings.

First, numerous people were complaining about exactly where this new syntax appears in the code. These complainants argue that instead of, say, void Greet(string message!!), the syntax should be something more like void Greet(string!! message) or perhaps void Greet(string! message) because, they (wrongly) argue, this feature makes a statement about the type.

Second, some people are arguing that this is only necessary because the whole nullable reference types feature has been incorrectly implemented.

Third, some people seem to think that !! forms part of the argument name.

Why argument 1 is wrong: this isn't about the type

Here are some examples of the first kind of argument:

Also, why would it be on the argument name and not on the type?

we have: method(Foo? foo)

and now with this: method(Foo foo!!)

Why?

and

We should not be adding language statements to variable names. The name ought not to indicate anything about the type, the type should.

and

this syntax leaves a lot to be desired - why is "!!" on the name and not the type?

These people seem to be under the impression that the effect of this new !! annotation is to say something about the parameter type. They are mistaken. !! doesn't make a statement about type, it's an instruction to the compiler to insert code that throws an exception if the argument is null. This syntax is no more a statement about type than manually writing the equivalent code (like we've been doing for the last two decades) is making a statement about the type. This is purely an implementation matter. So it would be actively wrong to put this annotation on the type.


The !! annotation has zero impact on the signature of the method you apply it to. We can prove this by looking at what the compiler produces. This class shows all the combinations of nullability and parameter null checking:

public static class VariousNullHandling
{
    public static void NonNullableTypeNullRuntimeCheck(string message!!)
    {
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NonNullableTypeNullNoRuntimeCheck(string message)
    {
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullableTypeNullRuntimeCheck(string? message!!)
    {
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullableTypeNullNoRuntimeCheck(string? message)
    {
        Console.WriteLine(message ?? "null was passed");
    }

#nullable disable annotations
    public static void NullObliviousTypeNullRuntimeCheck(string message!!)
    {
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullObliviousTypeNoNullRuntimeCheck(string message)
    {
        Console.WriteLine(message ?? "null was passed");
    }
#nullable restore annotations
}

These all have the same body, which, as it happens, is perfectly able to cope with null. We then have 6 variants, 2 for each of the three flavours of nullability that C# recognizes. Those flavours are non-nullable, nullable, and null-oblivious. (And these are three different types.) For each of these I've written a pair of methods, one with, and one without the !!.

Here's the method signature as it appears in the IL for the first one (non-nullable, with !!):

.method public hidebysig static void  NonNullableTypeNullRuntimeCheck(string message) cil managed
{
  .custom instance void System.Runtime.CompilerServices.NullableContextAttribute::.ctor(uint8) = ( 01 00 01 00 00 )

And here's the method signature as it appears in the IL for the second one (non-nullable, without !!):

.method public hidebysig static void  NonNullableTypeNullNoRuntimeCheck(string message) cil managed
{
  .custom instance void System.Runtime.CompilerServices.NullableContextAttribute::.ctor(uint8) = ( 01 00 01 00 00 )

You'll notice that the only difference is the method name. The NullableContextAttribute is there because both methods were defined in a nullable annotation context, and, as I may have mentioned, this is unrelated to the !! syntax. What this demonstrates is that from the perspective of code consuming these methods, there is no difference in method signature. These methods have exactly the same parameter type. Not only is there no difference at the CLR type system level, there is no difference in the annotations that the C# compiler has added to support the nullability aspects of C#'s type system that aren't part of the CLR's type system.

Here's what the nullable ones look like:

.method public hidebysig static void  NullableTypeNullRuntimeCheck(string message) cil managed
{
  .custom instance void System.Runtime.CompilerServices.NullableContextAttribute::.ctor(uint8) = ( 01 00 02 00 00 )

and

.method public hidebysig static void  NullableTypeNullNoRuntimeCheck(string message) cil managed
{
  .custom instance void System.Runtime.CompilerServices.NullableContextAttribute::.ctor(uint8) = ( 01 00 02 00 00 )

Again, only the name is different. And finally, because none of this has anything to do with nullable reference types, we can also use !! in null-oblivious code, as the VariousNullHandling class shows. Here are the corresponding two method signatures:

.method public hidebysig static void  NullObliviousTypeNullRuntimeCheck(string message) cil managed

and

.method public hidebysig static void  NullObliviousTypeNoNullRuntimeCheck(string message) cil managed

Obviously those last two don't have the NullableContextAttribute, because they are null-oblivious. And once again, they are identical apart from the method names.

Obviously the actual IL is different for each of these pairs of methods. And that's the point: the !! is all about the implementation. To clarify that, consider the fact that we have always been able to write code that behaves just like my VariousNullHandling class above; it was just more verbose. Here's how you could write code with the same behaviour today in C# 10.0:

public static class VariousNullHandlingOldStyle
{
    public static void NonNullableTypeNullRuntimeCheck(string message)
    {
        ArgumentNullException.ThrowIfNull(message);
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NonNullableTypeNullNoRuntimeCheck(string message)
    {
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullableTypeNullRuntimeCheck(string? message)
    {
        ArgumentNullException.ThrowIfNull(message);
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullableTypeNullNoRuntimeCheck(string? message)
    {
        Console.WriteLine(message ?? "null was passed");
    }

#nullable disable annotations
    public static void NullObliviousTypeNullRuntimeCheck(string message)
    {
        ArgumentNullException.ThrowIfNull(message);
        Console.WriteLine(message ?? "null was passed");
    }

    public static void NullObliviousTypeNoNullRuntimeCheck(string message)
    {
        Console.WriteLine(message ?? "null was passed");
    }
#nullable restore annotations
}

This shows the practical effect of the !! keyword. So now let's revisit those complaints from the Twitter thread. This C# 10.0 example is exactly equivalent to the C# 11.0 version—its behaviour is the same, and so is the signature of each of the methods. So any complaints about method signatures and types that are applicable to the C# 11.0 version should also apply to the C# 10.0 version. Let's see how those complaints sound when applied to this example, with the parts that referred to !! expanded into their C# 10.0 equivalents:

Also, why would it be inside the method and not on the type?

we have: method(Foo? foo)

and now with this: method(Foo foo) { ArgumentNullException.ThrowIfNull(foo); ...

Why?

and

We should not be adding language statements inside methods. The name ought not to indicate anything about internal implementation details, the type should.

and

this syntax leaves a lot to be desired - why is ArgumentNullException.ThrowIfNull inside the method and not the type?

Bear in mind that these reworded tweets mean exactly the same thing as the originals (although almost certainly not what the authors thought they were saying). All I've done is expand out the bits that were talking about !! to clarify what that actually means.

With this clarification, I think it's now obvious that these statements mostly sound peculiar. I say mostly because the middle one does still make one interesting point. I think it's obvious that "the type should" is a mistake: no, the type shouldn't say anything about an arbitrary implementation detail choice that has no impact on the caller. However, if you delete the first sentence and the last three words from that middle one, you do end up with a more reasonable complaint:

The name ought not to indicate anything about internal implementation details

That's still not quite right. The name hasn't actually changed—in void Greet(string name!!), the parameter name is still name, not name!!. But if I just make one more tweak:

The method declaration ought not to indicate anything about internal implementation details

I think this gets to the heart of a genuine problem that has confused a lot of people. These !! annotations appear in the method declaration even though they do not change the method signature. As this Twitter thread has demonstrated, someone who wasn't paying close attention could easily misunderstand what the syntax actually meant. Moreover, an unconfused person can reasonably complain that there's something unsatisfactory about intermingling implementation details with the method signature.


Then again, C# does have form here. The async keyword has this exact same problem. It appears in the method signature, but it's actually a statement about an implementation detail. A library exposing a method that returns a Task might have a hand-coded asynchronous implementation (of the kind we had to write back in the days of C# 4.0) in one version, and then in a newer version it could be rewritten using async and await without changing the signature of the method in any way. It's just a shift in implementation strategy. (As it happens, you can tell whether a method was annotated with async by looking for the presence of the AsyncStateMachineAttribute. But critically, that attribute's presence has no bearing on either the CLR type system or C#'s type system. That makes it different from NullableContextAttribute—that also has no bearing on the CLR's type system, but its presence does change the target's type in the C# type system.) The new !! syntax looks similar to async: an annotation that can appear in the method declaration but which only affects the implementation, not the signature.
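
To make that async parallel concrete, here's an illustrative sketch (these stream-reading methods are mine, not anything from the PR). Both methods have exactly the same signature, and a caller has no way of knowing which implementation strategy was used:

using System.IO;
using System.Threading.Tasks;

public static class StreamHelpers
{
    // Hand-rolled asynchrony, roughly the style we used before async/await existed.
    public static Task<int> ReadFirstByteManual(Stream input)
    {
        byte[] buffer = new byte[1];
        return input.ReadAsync(buffer, 0, 1)
                    .ContinueWith(t => t.Result == 0 ? -1 : buffer[0]);
    }

    // Same signature, essentially the same behaviour, but the compiler generates
    // the state machine for us. The async keyword is an implementation detail.
    public static async Task<int> ReadFirstByteCompilerGenerated(Stream input)
    {
        byte[] buffer = new byte[1];
        int read = await input.ReadAsync(buffer, 0, 1);
        return read == 0 ? -1 : buffer[0];
    }
}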

Nullability, and type systems

Before I move on to the second kind of misguided complaint, I want to talk briefly about the two type systems. You might have been surprised earlier when I talked about the C# type system and the CTS (the Common Type System—the CLR's type system). For a large part of .NET's history, these were effectively the same thing—the point of C# was that it was a language designed from the start to be native to the CLR. Its type system was the CLR's type system.

That's no longer true. (Arguably it was never completely true, but for the first decade or so the differences were mostly down in the weeds.) One example of divergence is that C#'s type system now includes tuples, and although there is an isomorphism between C#'s model for tuples and the ValueTuple family of types, it's not completely straightforward. (Only this week, I've had to fix a bug where some of my code was working with tuples through the reflection API, which gives you a CLR's-eye view, and I hadn't quite correctly accounted for how that differs from C#'s perspective.) And of more immediate relevance, the nullable reference type system makes a distinction between string? and string, a distinction that the CLR simply doesn't recognize.
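
Here's a small sketch of that tuple divergence (an illustration, not the bug I mentioned). Reflection gives you the CLR's view, in which the element names that C# shows you survive only as an attribute:

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

public static class TupleDemo
{
    // C# sees the return type as (int Id, string Name); the CLR sees ValueTuple<int, string>.
    public static (int Id, string Name) GetUser() => (42, "Ada");

    public static void Main()
    {
        MethodInfo method = typeof(TupleDemo).GetMethod(nameof(GetUser))!;

        // Prints System.ValueTuple`2[System.Int32,System.String] - no element names here.
        Console.WriteLine(method.ReturnType);

        // The names Id and Name exist only as a C#-level annotation on the return value.
        var names = method.ReturnParameter.GetCustomAttribute<TupleElementNamesAttribute>();
        Console.WriteLine(string.Join(", ", names!.TransformNames)); // Id, Name
    }
}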

This is an important point. C# 8.0 extended the C# type system by defining multiple notions of nullability, with the goal of reducing the incidence of bugs resulting in a NullReferenceException. If we could be in a world where this new, extended type system were the only type system in existence, then conceivably, this new feature might have gone further: perhaps it could have eliminated bugs of this type. This might have required not just language support, but also runtime support, to deal with tricky cases such as arrays, but as the argument goes: Microsoft writes both the C# compiler and the runtime, so they could have made the necessary changes to both. However, this much more complex approach wouldn't have helped, because it would still have run into a problem: C# programs typically don't get to exist in such a world where all the code is nullable-aware. Code built against the new C# 8.0+ model typically has to share the runtime with code that does not recognize these new extensions.

What's the most common kind of .NET code that doesn't recognize these C# extensions to the type system? It is, ironically, C# code.

Nullable reference types weren't available on a version of .NET that enjoyed LTS (Long Term Support) status until the very end of 2019, and in practice it wasn't until mid-2020 that you could depend on widespread support in the .NET ecosystem. (Even some of Microsoft's own Azure services such as Azure Functions took a long time to catch up with .NET Core 3.1.) So in practice this has been available for less than 2 years at this point. It takes a lot of work to update an old codebase to take advantage of nullable reference types, and for some projects, it might require breaking changes to do it properly. There won't always be a good justification for expending that effort, so there are a lot of libraries out there that are null-oblivious, and which are likely to stay that way for a long time, possibly indefinitely.

So the reality is that most C# code that uses nullable reference types has to coexist with code written against a type system that doesn't understand what NRTs are. This makes it impractical for nullable reference types to be watertight. The only way to guarantee that a variable declared as non-nullable will never contain null would be to maintain complete separation from the old type system—you'd never be able to use null-oblivious code directly without adding endless checks for nullness, most of which would be unnecessary noise. For the reasons just discussed, that would make a great many libraries unusable, and would in practice render NRTs unusable for many projects. Given this choice between being perfect and being useful, NRTs chose the latter. The practical effect of this is that sometimes, variables or parameters declared as non-nullable will contain null. It's important to remember that when evaluating the second type of complaint that I'm looking at.
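
To make that concrete before moving on, here's a sketch of how a null sneaks past the annotations in practice. (LegacyLibrary is hypothetical; it stands in for any null-oblivious package you might depend on.)

#nullable disable
// Imagine this type lives in an older, null-oblivious library.
public static class LegacyLibrary
{
    // Returns null when the key isn't found - and its signature has no way to say so.
    public static string LookUpDisplayName(int userId) =>
        userId == 0 ? "admin" : null;
}
#nullable restore

public static class Consumer
{
    public static void Run()
    {
        // No warning here: the null-oblivious return type means the compiler
        // can't tell whether this might be null, so it lets the assignment through.
        string displayName = LegacyLibrary.LookUpDisplayName(123);

        // displayName is declared non-nullable, yet here it is, null at runtime.
        System.Console.WriteLine(displayName.Length); // NullReferenceException
    }
}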

Why argument 2 is wrong: nullable reference types can't be both perfect and useful

Arguments of the second kind against !! that I want to talk about are the ones that essentially say: this shouldn't be necessary. The argument, sometimes explicit and sometimes implicit, is that nullable reference types should have been perfect, at which point you would never have any need for !! because declaring a parameter as non-nullable would be enough to guarantee that it couldn't possibly be null (meaning a check would be redundant).

Here's an explicit example of that argument:

I'm saying it's badly implemented.

If non-nullable reftypes were actually non-nullable, then this thing wouldn't be needed.

You wouldn't need to throw argument null exception if you are guaranteed that a non nullable type isn't null.. But here we are...

This is a slightly different form. This argues that we should have used a completely different solution to the problem, but amounts to the same thing—if only we had used a perfect solution we wouldn't need !!:

All of this monkey business instead of just adopting Option<T>. Still it is an improvement.

That "just" is doing an awful lot of work here. Let's "just" introduce a fundamental shift in how we handle something as basic as references to objects two decades after the runtime first shipped, something that is entirely different from how every existing library currently deals with object references. This commenter is right that the only way to do a watertight job of this is to tackle it at a fundamental level, but there is no "just" about that. (To be fair, he also recognizes that given where we are, !! has its merits.)

The point of C# nullable reference types is that they offer a pragmatic approach. It will never be as comprehensive as a purist solution of the kind proposed by the preceding comment, but it has the considerable benefit of being a viable option for C# developers working in real applications today. So all the arguments of this form carry no weight because they either presume the availability of a solution that is both perfect and pragmatic (while offering no evidence that such a thing is possible) or they discount pragmatism.

If you're prepared to discount pragmatism, there's actually nothing stopping you from using a solution like Option<T> in C#. You will run into the same problem that NRTs do: existing libraries are built in the old null-oblivious style. Your solution will deal with this less well than NRTs do. But if you value the purity of your approach highly enough, you're free to try and live without all those libraries. Good luck with that.
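
If you did want to try, here's roughly what the purist route looks like in miniature: a deliberately minimal, hypothetical Option<T>, plus the kind of wrapper you end up writing at every boundary where a null-returning, null-oblivious API gets involved:

// A deliberately minimal Option<T> - hypothetical, not a library type.
public readonly struct Option<T>
{
    private readonly T _value;
    public bool HasValue { get; }

    private Option(T value) { _value = value; HasValue = true; }

    public static Option<T> Some(T value) => new(value);
    public static Option<T> None => default;

    public T ValueOr(T fallback) => HasValue ? _value : fallback;
}

public static class OptionInterop
{
    // Every call into a null-returning API needs a wrapper like this at the
    // boundary - and nothing in the language forces anyone to write or use it.
    public static Option<string> FindSetting(string key)
    {
        string? raw = System.Environment.GetEnvironmentVariable(key);
        return raw is null ? Option<string>.None : Option<string>.Some(raw);
    }
}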

Why argument 3 is wrong: the !! isn't part of the name

There's another common misunderstanding, based on an apparent perception that !! becomes part of the name. For example:

I don’t think this is a good feature. Names should just be names and types should be types. Why would we want to mix type information into the name? Like saying all Integer names must begin with int

Or this:

https://github.com/dotnet/csharplang/discussions/5735#discussioncomment-2141754

We don't make variable names have special meaning anywhere else, and the industry knowledge about the failure modes that Hungarian notation set up for doing this by convention suggests this does not work out well.

These are somewhat baffling. I can only suppose that the people making these comments haven't actually tried the feature. (To be fair, it does currently require you to build your own copy of the C# compiler. Then again, if you're going to go out of your way to criticise something, consider finding out whether it actually works the way you think it does. "It's stupid that it works this way" isn't a great argument if it turns out not to work that way.)

If you do try this for yourself, you will find that when a parameter is declared as, say, string foo!!, it's called foo. The annotation does not become part of the name—it's not called foo!!. This shows that the comparison with Hungarian notation gets things completely inverted. With Hungarian notation, the annotations have no significance to the compiler, and are purely conventions expressed through variable names. By contrast, this new language feature is an annotation that is expressly recognized by the compiler and that does not in any way affect the name of the parameter.
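
If you want to see it for yourself (bearing in mind that, at the time of writing, this needs a build of the compiler with the feature included, since no shipped SDK supports !! yet), a sketch like this shows that the parameter is plain name everywhere it can be observed, including in the ParamName of the exception that the generated check throws:

using System;

public static class NameCheck
{
    public static void Greet(string name!!)
    {
        // The parameter is called name; nameof(name) compiles, nameof(name!!) would not.
        Console.WriteLine($"Hello, {name} (parameter: {nameof(name)})");
    }

    public static void Main()
    {
        try
        {
            Greet(null!);
        }
        catch (ArgumentNullException ex)
        {
            // The generated check reports the plain parameter name.
            Console.WriteLine(ex.ParamName); // name
        }
    }
}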

This misunderstanding highlights an interesting facet of the discussion. A lot of the commenters evidently see this annotation as providing information for the consumer of the method: they regard it as a kind of embedded warning to say "don't be passing null now, will you?" And if you do look at it that way, then it would indeed be a bit baffling to have both this and the distinction between string and string? which also provides (or expressly omits) an equivalent warning. But where this argument goes wrong is that once again, it ignores the fact that the !! is purely an implementation detail. If you're using a NuGet package, you will not know whether it uses !!. That information is erased at compile time: if you're working with a compiled library, there's no detectable difference between using !! and explicitly writing the equivalent code to throw the exception. So the !! is not a signal for you, the caller of that method; it is an instruction from the author of that method to the compiler.

So at component boundaries, these discussions of what the !! signifies are irrelevant because you won't be able to see it. Things are slightly different when the annotated method is in the same solution as the code calling it because you will then be able to see the !! if you go and look at the actual source for the method.

So is anything wrong?

There is a valid objection: this syntax is not remotely self-explanatory and it's plain weird that something that's fundamentally an implementation detail gives the (false) impression of being part of the method's signature because of where it appears. For me, this means that the question is: are there better alternatives? One is not to have the feature—we've survived as far as C# 10.0 without it, and C# 10.0 even introduced a feature that makes this particular case considerably less onerous. Another is to complain that this is a narrow solution to a narrow problem, and it would have been better to expend effort on something more general, like a mechanism for contracts. Reasonable people can disagree about this, and since the goal of this blog was to address arguments that were plain wrong I won't argue this point either way.

Ian Griffiths

Technical Fellow I

Ian has worked in various aspects of computing, including computer networking, embedded real-time systems, broadcast television systems, medical imaging, and all forms of cloud computing. Ian is a Technical Fellow at endjin, and a Microsoft MVP in Developer Technologies. He is the author of O'Reilly's Programming C# 10.0, and has written Pluralsight courses on WPF and the TPL. He's a maintainer of Reactive Extensions for .NET, Reaqtor, and endjin's 50+ open source projects. Technology brings him joy.