The following blog post is adapted from an internal presentation I gave at work a couple weeks ago.

Overview

In the last few years, Microsoft has shifted its platform strategy drastically. Instead of being all about Windows, Microsoft has started to embrace the rest of the world. For example:

  • Microsoft SQL Server vNext (2017 or 2018) runs on Linux, and in Docker on macOS and other platforms such as my Synology NAS.
  • Microsoft support Remote Desktop for Linux on Azure, and have Remote Desktop clients on macOS, iOS and Android.
  • Microsoft Office runs not only on Windows and macOS, but also on iOS, Android and Windows Mobile.
  • Windows PowerShell became just PowerShell, and now also runs on macOS and Linux.

So it's little surprise that .NET now runs everywhere. Through a mixture of .NET Framework, Mono, Xamarin, Unity and .NET Core, .NET can now run on:

  • Windows
  • Windows Nano Server
  • macOS
  • Linux
  • Raspberry Pi
  • Windows 10 Internet of Things Core
  • iOS
  • Android
  • tvOS
  • watchOS
  • Tizen
  • Xbox One
  • Wii U
  • PlayStation Vita
  • PlayStation 4

Slowly but quietly, .NET has grown from a very Microsoft-centred platform into one of the broadest platforms around, reaching well beyond Microsoft itself.

But before we get too carried away with all of this, what exactly is .NET? How does it work?

What is .NET?

For most developers who target .NET, it's a big box of magic.

C#, F# and Visual Basic developers will typically write some code, and then this magical thing called '.NET' takes over and compiles and executes the code.

If you dig into how it all actually works, there are three major components that hold a .NET app together:

Application Code

This is the code that you actually write to power your application. You can write some code in C#, F#, VB, Managed C++ (if you really want) or any other language that compiles to IL[^1] (also known as CIL or MSIL).

Base Class Libraries

Most of your application code, though, isn't actually code you've written. You're mostly re-using code from someone or somewhere else, and saying "fill in this stub later". For example, to parse a string into an int, you don't actually write parsing logic. You just use int.Parse or int.TryParse. All of this code is located inside the Base Class Libraries.
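
To make that concrete, here's a minimal sketch of leaning on the BCL instead of hand-rolling parsing logic:

    using System;

    class ParseExample
    {
        static void Main()
        {
            // int.TryParse lives in the BCL - no parsing logic of our own required.
            if (int.TryParse("42", out int value))
            {
                Console.WriteLine(value * 2); // 84
            }
            else
            {
                Console.WriteLine("Not a number");
            }
        }
    }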

This typically forms part of the runtime. For example, the BCL code shipped with .NET Framework 4.5 is different to the BCL code shipped with .NET Framework 4.6, and some functions will behave slightly differently, or may have received performance optimizations and run faster.

Microsoft publish most of their own BCL code as Reference Source[^2].

Common Language Runtime

The runtime is what actually executes your code. CPUs don't understand IL, so this contains a Just-In-Time compiler which compiles the IL into machine instructions. The CLR also handles loading assemblies (DLLs/EXEs), Platform Invoke (P/Invoke, or invoking native code from .NET), and other similar low-level responsibilities.
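
As a rough illustration of the P/Invoke side of that, here's a minimal sketch that calls the Win32 GetTickCount64 function from C# - the CLR resolves kernel32.dll and marshals the call for us (Windows-only, naturally):

    using System;
    using System.Runtime.InteropServices;

    class NativeExample
    {
        // Declare the native entry point; the CLR locates and invokes it at runtime.
        [DllImport("kernel32.dll")]
        static extern ulong GetTickCount64();

        static void Main()
        {
            Console.WriteLine($"Milliseconds since boot: {GetTickCount64()}");
        }
    }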

Quiet Proliferation

However, the .NET Framework isn't the only BCL/CLR out there. In 2004, the Mono Project came along with an open-source reimplementation that today runs on Windows, macOS and Linux. It also forms the basis of the scripting engine in the Unity game engine.

In 2012, Xamarin used Mono as the foundation for a new .NET runtime, Xamarin.Mac, which adds APIs for developers to build Mac apps in .NET. In 2013, they expanded into Xamarin.iOS and Xamarin.Android, and have since added Apple watchOS and tvOS into the lineup.

And then, with the release of Windows 10 in 2015, Microsoft created the Universal Windows Platform which runs atop a technology named ".NET Core 5".

Project K

Around the same time, Microsoft were working on ASP.NET 5 / MVC 6, the successor to their ASP.NET 4 / MVC 5 platform for web development. Codenamed "Project K", this platform brought some major innovations that were not immediately backwards compatible:

  • You could host an ASP.NET application in an isolated process, without being integrated into Microsoft's web server, IIS.
  • As part of decoupling from IIS, the entire System.Web assembly (and I believe namespace) was unavailable.
  • It was cross-platform and ran on macOS and Linux - either inside Mono, or a new smaller runtime named "CoreCLR".
  • If you used "CoreCLR", you could deploy the entire application including the runtime as an isolated instance, so multiple applications on the same server could each have individual runtimes and not conflict with each other.

Eventually, after a bit of soul-searching and realizing that - in the great Microsoft tradition - their naming sucked, Microsoft rebranded the lot:

  • ASP.NET 5 / MVC 6 => ASP.NET Core 1.0
  • Entity Framework 7 => Entity Framework Core 1.0
  • CoreCLR => .NET Core
  • Developer tools (kre/kvm/kpm) => Developer tools (dotnet).

.NET Core

Now that we've arrived at .NET Core, let's talk a little bit about it specifically.

.NET Core is a new .NET runtime. It is not perfectly compatible with .NET Framework, and does not intend to be. It has new things that .NET Framework does not have, and Microsoft have intentionally removed features that they deemed to be "problematic":

  • .NET Core does not have AppDomains. Instead, use process isolation (see the sketch after this list).
  • .NET Core does not have sandboxing. Instead, use operating system restrictions.
  • .NET Core does not have .NET Remoting. Instead, use sockets, HTTP, or another standard communications/RPC protocol.
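
As a rough sketch of the AppDomain replacement, the idea is simply to run crash-prone or untrusted work in a separate process. The worker.dll name and arguments below are hypothetical - substitute whatever your published worker actually looks like:

    using System;
    using System.Diagnostics;

    class Host
    {
        static void Main()
        {
            // Launch the work in its own process rather than a second AppDomain.
            var startInfo = new ProcessStartInfo
            {
                FileName = "dotnet",
                Arguments = "worker.dll --job 42",   // hypothetical worker and arguments
                RedirectStandardOutput = true,
                UseShellExecute = false
            };

            using (var worker = Process.Start(startInfo))
            {
                Console.WriteLine(worker.StandardOutput.ReadToEnd());
                worker.WaitForExit();
                // If the worker crashes, only its process dies; the host keeps running,
                // which is the isolation AppDomains used to provide.
            }
        }
    }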

Microsoft have stated that .NET Core will serve as "the foundation of all future .NET platforms." These are big words from a big company.

The pros of .NET Core are that:

  • .NET Core gets new technology and APIs first. Microsoft can ship this faster as it is a standalone product, unlike .NET Framework which is integrated into Windows, and has to go through full Windows validation and testing for even minor releases.
  • .NET Core is cross-platform and runs just about anywhere.
  • .NET Core is faster than .NET Framework, for the time being. Many if not all of these performance improvements will eventually find their way into .NET Framework, but for now it's significantly faster.
  • You can bundle .NET Core applications with the BCL and runtime to create a native application for supported platforms with a disk footprint of less than 40MB.
  • .NET Core is fully open-source.
  • .NET Core is unit-tested, and the unit tests are open-source and easy to run. I ran the full suite on my MacBook Pro and found three failures, two of which led to bug fixes.

On the flip side:

  • Many traditional APIs are not available in .NET Core. You can sometimes get polyfills to bridge the gap, but not always.
  • There is no official support for FxCop. It appears to work if you manually modify your project file, but if you use FxCop you'll need to migrate to Roslyn Analyzers.
  • There is limited support for runtime tooling such as profilers. JetBrains appear to have support already, but I haven't tried it.
  • There are no UI frameworks for desktop applications. You have two choices - console applications, and console applications that listen on port 80/443.

.NET Standard

As we've seen above, there are now a grand total of nine .NET runtimes, including the different variants of Xamarin. If you want your code to run in as many places as possible, how do you do so? Do you have to write your code nine times? Do you have to compile it nine times?

Fortunately for most code, the answer is no. Microsoft have introduced something they confusingly named .NET Standard. In Microsoft's words:

.NET Standard is a specification that represents a set of APIs that all .NET platforms have to implement. This unifies the .NET platforms and prevents future fragmentation. Think of .NET Standard as POSIX for .NET.

In less confusing terms, .NET Standard is simply a set of APIs that runtime vendors can implement, and say "yes we support this version of .NET Standard".

For example, .NET Framework 4.5 implements .NET Standard 1.0 and 1.1 only. A .NET Standard 1.3 assembly is not guaranteed to run on .NET Framework 4.5, as it may use APIs that .NET Framework does not have. On the other hand, .NET Framework 4.6 implements the full set of APIs defined by .NET Standard 1.3, so you can safely run it there.

New SDK

Project Structure

To facilitate all these drastic changes comes a new SDK - actually, a second new SDK. Microsoft rebuilt the SDK around JSON-based projects, then gave up on it and reverted to MSBuild. The new SDK has much simpler projects than the old SDK, though.

With the old SDK, a clean project for a command-line app was 52 lines of XML. With the new SDK, it's just:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp2.0</TargetFramework>
    </PropertyGroup>
</Project>

That's it. All of the default junk that came with the old SDK - debug symbol types, default references, file alignment, optimization settings and so on - is now defaulted. You can still specify these settings to override the defaults, but for 99.99% of applications you won't need to, and probably never did.

Multi-Targeting

The new SDK also brings multi-targeting, i.e. one project file can produce multiple executables for multiple platforms. To do this, edit the project file and rename TargetFramework to TargetFrameworks. If you missed the exact change there: you're pluralizing it.

Once pluralized, you can fit multiple values in this field using standard semicolon-separated MSBuild syntax, e.g.:

        <TargetFrameworks>net461;netcoreapp2.0</TargetFrameworks>

When multi-targeting, you have access to a bunch of conditional compiler directives. These come in three formats:

  • For .NET Framework targets, you get NET000, e.g. NET45 for .NET Framework 4.5 or NET462 for .NET Framework 4.6.2.
  • For .NET Standard targets, you get NETSTANDARD0_0, e.g. NETSTANDARD1_3 for .NET Standard 1.3.
  • For .NET Core targets, you get NETCOREAPP0_0, e.g. NETCOREAPP2_0 for .NET Core 2.0.

You can use these in [Conditional("...")] attributes or #if/#elif directives.
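
For example, here's a minimal sketch of branching per target when multi-targeting net461 and netcoreapp2.0 (the class and method names are just for illustration):

    static class PlatformInfo
    {
        public static string Describe()
        {
    #if NET461
            return "Running on .NET Framework 4.6.1";
    #elif NETCOREAPP2_0
            return "Running on .NET Core 2.0";
    #else
            return "Running on some other target";
    #endif
        }
    }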

Dependency Management

With .NET Core and .NET Standard, everything is available exclusively on NuGet, the now-official .NET package manager.

Traditionally, this has been painful, but with the new SDK, NuGet has also gotten smarter.

NuGet references in projects now refer to the primary package or metapackage only. Your project no longer also contains a reference to every single transitive dependency in your dependency tree.

For me, this is the most significant change. I can now upgrade or uninstall a package without wondering what I'm leaving behind that I no longer need a reference to.

If you avoid using NuGet directly and use another tool instead, check to see if it supports the new SDK. At my work we use Paket[^3] extensively, and it has great support for the new SDK, target monikers and so forth.

Preparing for the Future

If your code targets .NET Framework now, you'd be pretty stupid not to keep .NET Core or .NET Standard as open possibilities for the future. To do so, there are a few things to keep in mind:

  • Avoid features exclusive to .NET Framework and Mono, such as AppDomains.
  • Be wary of calls to native code such as P/Invokes to the Win32 API. Have an alternative up your sleeve for different platforms, or make your dependency on that API call optional.
  • Be wary of third-party dependencies with native components, especially if they only provide native binaries for Windows.
  • Avoid APIs that were removed in .NET Core, such as Assembly.GetExecutingAssembly.
  • Use TypeInfo for your reflection needs. Microsoft have been pushing this since .NET Framework 4.5 was released. Instead of reflecting on typeof(Foo), reflect on typeof(Foo).GetTypeInfo() - see the sketch after this list.
  • Prefer modern, well-maintained third-party libraries over outdated, untouched ones.
  • Prefer third-party libraries which already provide .NET Standard implementations.
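
Here's a minimal sketch of the TypeInfo style of reflection, using a made-up Foo class:

    using System;
    using System.Reflection;

    class Foo
    {
        public int Bar { get; set; }
    }

    class ReflectionExample
    {
        static void Main()
        {
            // Reflect over the TypeInfo rather than the Type directly.
            TypeInfo info = typeof(Foo).GetTypeInfo();

            foreach (PropertyInfo property in info.DeclaredProperties)
            {
                Console.WriteLine(property.Name);   // prints "Bar"
            }
        }
    }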

If you follow these simple steps, you should be in a pretty good place when the time eventually comes to port pieces of your codebase to .NET Standard, .NET Core, or whatever .NET Core-based platforms spring up in future.