In recent months and years, there has been a certain hype around benchmarking, also in the .NET community.
This blog post is meant to ground the topic a bit and put benchmarks into relation to other factors, so you can better judge whether the effort is worth it.
Disclaimer
I am 100% aware that a part of my blog is dedicated to benchmarking certain things - and as professionals, we are often obsessed with performance metrics and benchmarks. So even though I do this myself and will continue doing it, you always have to relate the results to your specific context!
A benchmark
Look at the following benchmark:
using System.Runtime.InteropServices;
using BenchmarkDotNet.Attributes;

public class Benchmark
{
    [Params(10, 100, 1000)]
    public int ArraySize { get; set; }

    private List<int> numbers;

    [GlobalSetup]
    public void Setup()
    {
        numbers = Enumerable.Repeat(0, ArraySize).ToList();
    }

    [Benchmark(Baseline = true)]
    public void ForEach()
    {
        foreach (var num in numbers)
        {
        }
    }

    [Benchmark]
    public void For()
    {
        for (var i = 0; i < ArraySize; i++)
        {
            _ = numbers[i];
        }
    }

    [Benchmark]
    public void Span()
    {
        // Reinterprets the List<int>'s backing array as a Span<int> (no copy)
        var collectionAsSpan = CollectionsMarshal.AsSpan(numbers);
        for (var i = 0; i < ArraySize; i++)
        {
            _ = collectionAsSpan[i];
        }
    }
}
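To run it, a minimal entry point is enough (this snippet is not from the benchmark itself; it assumes the BenchmarkDotNet NuGet package is referenced):

using BenchmarkDotNet.Running;

// Discovers the [Benchmark] methods above and runs them for every [Params] value
BenchmarkRunner.Run<Benchmark>();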
And its result:
| Method | ArraySize | Mean | Error | StdDev | Ratio | RatioSD |
|-------- |---------- |-------------:|-----------:|-----------:|------:|--------:|
| ForEach | 10 | 14.558 ns | 0.3226 ns | 0.8328 ns | 1.00 | 0.00 |
| For | 10 | 6.053 ns | 0.1391 ns | 0.2166 ns | 0.41 | 0.03 |
| Span | 10 | 3.906 ns | 0.0988 ns | 0.0924 ns | 0.27 | 0.01 |
| | | | | | | |
| ForEach | 100 | 161.745 ns | 2.8664 ns | 3.4123 ns | 1.00 | 0.00 |
| For | 100 | 67.268 ns | 1.1961 ns | 1.1188 ns | 0.41 | 0.01 |
| Span | 100 | 50.935 ns | 1.0039 ns | 1.1158 ns | 0.31 | 0.01 |
| | | | | | | |
| ForEach | 1000 | 1,508.880 ns | 29.7007 ns | 36.4751 ns | 1.00 | 0.00 |
| For | 1000 | 636.547 ns | 12.6593 ns | 25.8595 ns | 0.42 | 0.02 |
| Span | 1000 | 337.738 ns | 6.7882 ns | 16.6517 ns | 0.22 | 0.01 |
Obviously, some approaches are faster than others - even significantly so. Or are they?
Well, this is where the journey I want to discuss starts. We can see that `foreach` is roughly four to five times slower than the `Span` version. But we are talking about nanoseconds here! That is what I mean by perspective.
A database or Web API
Benchmarks - especially the one I showed a second ago - show you results in isolation. But your code does not run in isolation, so context matters. You might have seen on platforms like LinkedIn or Twitter that you should always use `StringBuilder` for concatenating because ... well, performance!!!

Back to context: Do you have a call to a database or an external API over the wire? If yes, those calls are most often 2 to 3 orders of magnitude slower than your `string.Format` calls. We are drawn to benchmarks because it is easy to relate those numbers in isolation, and sometimes they seem impressive, but more often than not (>80% of cases), they don't matter in the grand scheme of things. Imagine you refactor your code now to use `Span` to loop over a `List`. Sure, you saved some nanoseconds, but you might introduce nasty bugs and reduce maintainability and readability.
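To make that proportion tangible, here is a deliberately crude sketch - not a proper benchmark, and the URL is just a placeholder - timing a thousand `string.Format` calls against a single HTTP round trip:

using System.Diagnostics;

var httpClient = new HttpClient();

var sw = Stopwatch.StartNew();
for (var i = 0; i < 1_000; i++)
{
    _ = string.Format("Order {0} for customer {1}", i, "Jane");
}
sw.Stop();
Console.WriteLine($"1,000x string.Format: {sw.Elapsed.TotalMilliseconds} ms");

sw.Restart();
_ = await httpClient.GetStringAsync("https://example.com/"); // one network round trip
sw.Stop();
Console.WriteLine($"1 HTTP call:          {sw.Elapsed.TotalMilliseconds} ms");

Even this rough comparison will typically show the single network call dwarfing the thousand formatting calls.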
My message here is: If you see an issue, always measure first (not in isolation - it has to be your specific scenario). Once you have identified the issue, you can make a plan, adapt, and measure afterward! And big surprise: most of the time, `string.Format` vs `string.Concat` is not what will help you.
Sure, there are certain hot paths where you have already done everything else and need to squeeze out the last nanosecond and remove every little allocation, but how often does that really happen? Again, hot paths are one way through your application, not the application as a whole.
Performance Improvements
Again, there are much better strategies you can resort to before you reach for those micro-optimizations (in no particular order):
- Remove code. Yes, that sounds stupid, but code that doesn't run doesn't need time or allocations.
- Eliminate roundtrips. If you have calls to a web API or a database, it makes sense to reduce the number of calls you make, as you are limited by physics and by how long a packet takes to travel from you to the server and back (a sketch follows below the list).
- Data structures. Yes, the running joke of software engineering. But certain data structures are better suited to certain scenarios. Take a huge list that is more or less constant and that you only need to look values up in: a `HashSet` can do lookups way faster than your average `List`, but falls off if you need to add items frequently (also sketched below the list).
- Caching. Now that is a tricky one. You might have heard the famous quote from Phil Karlton: "There are only two hard things in Computer Science: cache invalidation and naming things." It often starts easy and can get tricky pretty quickly.
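For the roundtrip point, here is a hypothetical client sketch - the `/orders` endpoint, the `Order` record, and `OrdersClient` are made up for illustration - comparing one request per id with a single batched request:

using System.Net.Http.Json;

public record Order(int Id, decimal Total);

public class OrdersClient(HttpClient client)
{
    // One HTTP round trip per id: N calls, N times the network latency.
    public async Task<List<Order>> GetOneByOne(IEnumerable<int> ids)
    {
        var orders = new List<Order>();
        foreach (var id in ids)
        {
            var order = await client.GetFromJsonAsync<Order>($"/orders/{id}");
            if (order is not null) orders.Add(order);
        }
        return orders;
    }

    // A single batched round trip: one call, one latency hit.
    public async Task<List<Order>> GetBatched(IEnumerable<int> ids)
        => await client.GetFromJsonAsync<List<Order>>($"/orders?ids={string.Join(',', ids)}") ?? new();
}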
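And for the data structure point, a hypothetical lookup benchmark in the same BenchmarkDotNet style as above (the sizes and lookup value are arbitrary) could look like this:

using BenchmarkDotNet.Attributes;

public class LookupBenchmark
{
    private List<int> list;
    private HashSet<int> set;

    [GlobalSetup]
    public void Setup()
    {
        list = Enumerable.Range(0, 10_000).ToList();
        set = list.ToHashSet();
    }

    [Benchmark(Baseline = true)]
    public bool ListContains() => list.Contains(9_999);    // O(n): walks the list

    [Benchmark]
    public bool HashSetContains() => set.Contains(9_999);  // O(1): hash lookup
}

But remember the message from above: measure whether those lookups actually matter in your scenario before swapping the data structure.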
Conclusion
To summarize: always check your scenario in its context and not in isolation. Then you can make a plan and benchmark that scenario with the appropriate fix. Micro-benchmarking is fun, but it can distract from the things that really matter.