x86 vs x64 in .NET

9/30/2022
9 minute read

This article will explore the difference between x86 and x64 when it comes to .NET. I will spare you most of the details about the architectures themselves (maybe a small history lesson, but that is all). The motivation is simple: in times of cloud servers and IaaS you can often choose between x86 and x64. Side note: an Azure Web App is by default x86, aka 32-bit, and not x64. So we want to see what the implications are and whether there are scenarios where you want to force one or the other.

Naming

Often you see x86, which somehow represents 32-bit, and x64, which represents 64-bit. The name x86 comes from Intel's old 8086 processor family; later members like the 80386 extended it to 32 bits, and today x86 commonly refers to that 32-bit instruction set. There are many other 32-bit architectures, like armv7l, which have a different instruction set. An instruction set is basically all the operations your CPU understands. The bitness describes how wide a register in said CPU is (with other implications like pointer length and so on). Back to topic: x86, thanks to the Intel processor family, is the name of a 32-bit instruction set. On the other hand we have x64, which represents the 64-bit architecture. If you want to be super correct, there are two variations: x86_64 (which indicates that it supports all the x86 instructions plus the 64-bit extensions) and amd64 (named after AMD, who designed those extensions). For virtually all cases amd64 and x86_64 are the same, just different companies giving different names.

AnyCPU

Enough with the history and back to .NET. What does that mean for your everyday .NET application? First: nothing. Why? Well, if you check the default when you build something, it is AnyCPU, indicating that your .NET code runs on, drum roll, any ... CPU. Well, shocking, ain't it?

Why is that? The reason is simple. I will reference my own blog post here: What is the difference between C#, .NET, IL and JIT?.

IL

You see, when you compile your C# code, it compiles to CIL/MSIL (the intermediate language), and that intermediate language has no concept of bitness. That is the whole point: it abstracts away instruction sets and bitness for you. Where does bitness come into play, then? That is where the Just-In-Time compiler enters. The JIT translates the IL code into native instructions for the given architecture and operating system.

By default AnyCPU is the best you can choose, because it will take the native bitness and architecture of the hosting environment. So if you are using Windows x64, it uses the x64 instruction set. There are cases where all of that matters: native interop. Before I come to that, I want to show you some real implications of the JIT when it boils down to bitness.
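By the way, you can ask the runtime at any point which bitness it actually picked for an AnyCPU build. A small sketch:

using System;

class BitnessCheck
{
    static void Main()
    {
        // OS bitness and process bitness can differ: a 32-bit process
        // can happily run on a 64-bit OS (via the WOW64 layer).
        Console.WriteLine($"64-bit OS:      {Environment.Is64BitOperatingSystem}");
        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
    }
}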

x86 vs x64 natively

There are noticeable differences between those two; I mean, there have to be. I hope you have heard of pointers. Pointers are basically addresses where your variables are stored. Those pointers are 32 bit or 64 bit wide depending on whether you have a 32-bit or a 64-bit execution. Remember that part, as we will pick it up later. Again, this boils down to the registers of the CPU: they are 32 or 64 bit wide, and so is a pointer. Super important to understand is that data types don't necessarily follow that! In C/C++ an int on x64 does not have to be 8 byte or 64 bit wide!

There are two main things deriving from that pointer width:

  1. If you have an application with native interop (ha! we picked it up again!) it is important to have the same bitness.
  2. You want to allocate more than 4 GB of RAM in your application.

Native interop

Let's go native here! Let's say you have a C/C++ interface, which accepts a struct like this:

struct SomeData
{
    int* someIntPointer;
    int integerInstance;
};

Assume now we have a 32-bit application. Then our struct is 8 bytes wide (4 bytes for the pointer and 4 bytes for the integer instance). Interop relies heavily on the fact that you just map the memory one to one. There is no real type checking involved in C/C++. If you pass 8 bytes, then it maps them to that struct, done! And here is the dangerous part. Let's assume we have a 64-bit C# application. Then the first pointer someIntPointer alone is 8 bytes wide. You see the problem? Cluster-fuck!
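To see the mismatch from the managed side, here is a small sketch of how you could mirror that struct in C# (using IntPtr for the pointer field is my choice here; it makes the field follow the process bitness just like the native pointer does):

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct SomeData
{
    public IntPtr someIntPointer; // 4 bytes on x86, 8 bytes on x64
    public int integerInstance;   // always 4 bytes
}

class InteropSizeDemo
{
    static void Main()
    {
        // Prints 8 in a 32-bit process, but 16 in a 64-bit one
        // (8 for the pointer, 4 for the int, 4 bytes of alignment padding).
        Console.WriteLine(Marshal.SizeOf<SomeData>());
    }
}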

4GB

That one is simple: if you have a 32-bit pointer which reflects an address in memory, then you can only address 2^32 bytes = 4 GB. If you want more, you are out of luck! Yes, there are means to get around that, but they come with bigger penalties.
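A minimal sketch to see that wall in action (careful: in a 64-bit process this will happily eat all available memory before it gives up):

using System;
using System.Collections.Generic;

class AddressSpaceDemo
{
    static void Main()
    {
        // Allocate 100 MB blocks until the runtime refuses. A 32-bit
        // process throws OutOfMemoryException long before 4 GB because
        // the runtime and fragmentation eat into the address space.
        var blocks = new List<byte[]>();
        try
        {
            while (true)
                blocks.Add(new byte[100 * 1024 * 1024]);
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine($"Gave up after roughly {blocks.Count * 100} MB");
        }
    }
}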

To be clear, take this:

int a = 0; // How wide is it now? 32-bit or 64-bit?
int max = int.MaxValue;
int min = int.MinValue;

Are they different on different architectures? Short answer: no. As said earlier, your C# code is mostly free from bitness. int is an alias for Int32, telling you that it is always 32-bit. There are data types called nint and nuint which are native-sized integers. So on a 32-bit platform they are 32 bits wide, and on a 64-bit platform 64 bits wide. They are used to write interop code where you cover both cases: 32-bit as well as 64-bit. Before nint and nuint you had to deal with IntPtr or raw pointers to pull that off. So thank you, Microsoft!
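A quick sketch to make that concrete:

using System;

class NativeIntDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(int)); // always 4, on every platform
        Console.WriteLine(IntPtr.Size); // 4 in a 32-bit process, 8 in a 64-bit one

        nint offset = 42;               // nint is exactly as wide as IntPtr
        Console.WriteLine(offset);
    }
}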

.NET

As initially said, there are differences where it matters. My example from the beginning is Azure: in Azure you can tell which bitness you want. Furthermore, did you ever use dotnet publish? There is a toggle which allows you to create a self-contained executable:

dotnet publish -r RID

RID means Runtime Identifier. Basically, dotnet allows you to create a self-contained executable of your application. Self-contained means that it ships the .NET runtime alongside your application (at least the parts you need). If you have a look at those identifiers, you can see stuff like this:

  • win-x64
  • win-x86
  • win-arm
  • win-arm64

The main reason why you have to choose the instruction set is, as said earlier, that the JIT is now part of your bundle. So you have to tell it which instruction set it has to produce when transforming the IL code into native machine code.
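For example, a publish for 64-bit Windows might look like this (I am spelling out --self-contained explicitly; depending on your SDK version, -r alone may warn or behave differently):

dotnet publish -c Release -r win-x64 --self-contained true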

I hope by now you understand why you might select a specific "bitness". Again, if you are solely in the .NET world where everything is under your control, choose AnyCPU and don't worry about it. If you need more than 4 GB of virtual space, you are more or less forced to use x64. Besides that, there are other implications I want to highlight:

Garbage Collector

The Garbage Collector (GC for short) is a nice tool. If you are not aware how that thing works, I would suggest you go and check out my article The garbage collector in .NET and come back afterwards, as it is vital to understand how it works. Ideally read both parts to get the whole picture.

If you have a 32-bit application, which effectively only has 2 GB of RAM available in .NET, and you do some work, your GC will run more often. Basically the GC gets triggered if it thinks you might run into trouble. With a smaller virtual space you statistically run into trouble more often, so it runs more often. So there could be a performance gain just by running 64-bit.

Another effect I already explained in the second part, The garbage collector in .NET - Part 2: Compacting. Thanks to the part of the heap beyond gen2, aka the Large Object Heap, there are holes inside your address space which have to be compacted together so you can allocate bigger blocks of memory. Again, if the address space is bigger, there is less chance that those holes make a difference. Just imagine the worst case: you have 2 GB of virtual space and every other byte is allocated. Now you want to allocate an array of 2 bytes. Technically you have that memory, but practically all your free memory is always only 1 byte wide. Arrays have to be contiguous by definition! So you would get an OutOfMemoryException. Well, you wouldn't, because the GC would compact in that case, but that comes with a cost. The same example on a 2^64 byte address space would work out of the box because you can just allocate after the first 2 GB, where everything is free.
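If LOH fragmentation really bites you, .NET even lets you request a one-time compaction on the next blocking collection. A minimal sketch:

using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Normally the Large Object Heap is only swept, never compacted.
        // This asks the GC to compact it once on the next full collection;
        // the setting resets itself afterwards.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
    }
}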

Caching

I will quote the official Microsoft site here:

When an application can run fine either in 32-bit or 64-bit mode, the 32-bit mode tends to be a little faster. Larger pointers means more memory and cache consumption, and the number of bytes of CPU cache available is the same for both 32-bit and 64-bit processes. Of course the WOW layer does add some overhead, but the performance numbers I've seen indicate that in most real-world scenarios running in the WOW is faster than running as a native.

The first part still holds true: 64-bit needs more space, and 32-bit applications seem to be more cache friendly. But the article is from 2009, so CPUs back then were not the same as today when it comes to caching etc. Most probably this is more or less negligible. Always measure before you make decisions.

Other effects

As said earlier, with higher bitness comes higher power... no, sorry, with higher bitness comes more memory. So if you compile your .NET executables natively for x64 you will also notice that your executables are bigger.

Update 8th October: Originally I wrote assemblies instead of executables. This generalization is not true. Purely managed libraries will see no effect (hence "purely managed"), but executables will. Thanks to Carl Walsh for the input.

Every variable address is now 8 bytes instead of 4. Every function jump address is also twice as long! Yes, every one of your functions sits behind a pointer too.

Also, if you have compute-heavy programs which, for example, rely on int or even float/double, you can see some gains. The reason is simple: an int is always 4 bytes or 32 bits wide, but your register is 8 bytes or 64 bits wide. There are operations where you can load two integers into one register and then perform operations on them, cutting the time of that computation in half. A bit SIMD-like! A double, for example, is 64 bits by definition. On a 32-bit architecture it does not fit natively into one general-purpose register, so you need some coordination between two registers to do arithmetic on double types, which comes with a performance penalty. Again, it depends on your use case. Always measure that stuff first before you make decisions. See also the contra points above.
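One simple way to observe this class of effect is plain 64-bit integer arithmetic: a long addition is a single register operation on x64, while a 32-bit process has to pair up two 32-bit operations. A rough sketch (the loop is illustrative only; measure properly, for example with BenchmarkDotNet, before drawing conclusions):

using System;
using System.Diagnostics;

class LongMathDemo
{
    static void Main()
    {
        // Summing 64-bit integers: one add per iteration on x64,
        // two chained 32-bit adds (with carry) in a 32-bit process.
        var sw = Stopwatch.StartNew();
        long sum = 0;
        for (long i = 0; i < 500_000_000; i++)
            sum += i;
        sw.Stop();

        Console.WriteLine($"64-bit process: {Environment.Is64BitProcess}");
        Console.WriteLine($"Sum: {sum}, elapsed: {sw.ElapsedMilliseconds} ms");
    }
}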

Conclusion

Phew, that was a lot of information, and I hope I could give you a better picture of what the implications for .NET are. Take AnyCPU as often as possible so you don't have to deal with any of this. Only when it comes to native interop do you have to know the differences very well. Most of the time your code will run with 64-bit instructions, as this is most probably the default nowadays (even for a lot of ARM processors).
