Write your own AutoMapper in C#


Sometimes you have to map an object to another representation in C#. And you think: why isn't C# capable of duck typing?

You might have heard of libraries like AutoMapper that do the tedious work of mapping one object to another with the same structure. This blog post will give a super simple introduction to how those libraries work internally.

At the end, I'll share a somewhat subjective take on whether or not I would use such libraries.

The problem

Imagine you have the following two types:

public class BlogPost
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateOnly PublishedDate { get; set; }
}

public class BlogPostDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    public DateOnly PublishedDate { get; set; }
}

Structurally they are identical. In a duck-typed language like Go or TypeScript you can use those types interchangeably. But in C#, you cannot. So if you have to map those types from one representation to another, you have to do the tedious work of newing up a new instance and mapping the properties one by one. That is where libraries like AutoMapper come into play. Another famous example is Entity Framework, as it also has to map columns to the properties of your entity. But how do they work?
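To make the tedium concrete, here is what the hand-written mapping looks like for the two types above:

```csharp
var blogPost = new BlogPost { Id = 1, Title = "Steven", PublishedDate = new DateOnly(2023, 3, 18) };

// New up the target type and copy every property by hand
var dto = new BlogPostDto
{
    Id = blogPost.Id,
    Title = blogPost.Title,
    PublishedDate = blogPost.PublishedDate
};
```

Three properties are fine, but with dozens of types and dozens of properties each, this is exactly the boilerplate that mapping libraries promise to remove.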

Two ways to achieve this are via reflection or via source generators. The latter has some advantages, like better performance and easier debugging, but it is less widespread. For now, I will explain the more traditional way: reflection-based.

Reflection based

We want an easy API for using such a tool, so given our examples above with the blog post, I want the following usage:

var blogPost = new BlogPost { Id = 1, Title = "Steven", PublishedDate = new DateOnly(2023, 3, 18) };

var dto = Mapper.Map<BlogPost, BlogPostDto>(blogPost);

How AutoMapper and friends work is that they collect all properties via reflection and assign the values to the target object. That is it - well, at least at its core. And this is very simple to do:

public static class Mapper
{
    public static TResult Map<TIn, TResult>(TIn obj) where TResult : new()
    {
        var result = new TResult();

        var inputProperties = typeof(TIn).GetProperties();
        var resultProperties = typeof(TResult).GetProperties();

        foreach (var inputProperty in inputProperties)
        {
            // Find the property that has the same name and type
            var resultProperty = resultProperties.FirstOrDefault(prop =>
                prop.Name == inputProperty.Name && prop.PropertyType == inputProperty.PropertyType);

            // If it isn't writeable, don't try to write the value
            if (resultProperty != null && resultProperty.CanWrite)
            {
                resultProperty.SetValue(result, inputProperty.GetValue(obj));
            }
        }

        return result;
    }
}

And that is it. We are just matching the properties of both types by name and type and checking whether they are the same. If so, we just copy the value. That approach is called convention-based. The convention here is that input and output properties must have the same name and type to be discovered.
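A consequence of that convention: any property that does not match by both name and type is silently skipped. The `BlogPostSummary` type below is a made-up example to illustrate this, using the `BlogPost` type and `Mapper` from above:

```csharp
public class BlogPostSummary
{
    public int Id { get; set; }
    public string Heading { get; set; } = "";  // Name differs from "Title" on BlogPost
}

var blogPost = new BlogPost { Id = 1, Title = "Steven", PublishedDate = new DateOnly(2023, 3, 18) };
var summary = Mapper.Map<BlogPost, BlogPostSummary>(blogPost);
// summary.Id is 1, but summary.Heading remains "" because
// BlogPost has no property named "Heading" - no error, no warning.
```

This silent behavior is convenient when the convention holds and dangerous when it doesn't, for example after a rename on only one side of the mapping.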

A grain of salt

That looks pretty convenient. Well, yes and no. And here comes a very subjective part, so feel free to discuss this with me in the comment section. As initially said, mapping is oftentimes tedious and most often considered boilerplate code. And that is a good thing! It means it is well-known and easy to understand. If you use libraries like AutoMapper, you still have this boilerplate... but now somewhere else, with more complexity.

It always starts easy, with direct one-to-one mappings. But as the product grows, exceptions appear. Oftentimes I have seen people then stick to the library and start configuring it. So now, instead of having obvious mapping code, you have the mapper and, in some other place, some configuration, and often that configuration is more complex than doing everything by hand.

Also consider that you have introduced a new third-party library into your project that you will have to support and maintain. You might also worry about performance regressions. But to be honest, while the overhead is real, it often doesn't matter at all. Yes, using reflection is slower than mapping those simple properties by hand, but one call to a database or a Web API is two to three orders of magnitude slower, so it doesn't count in most cases. All in all, the steadily increasing complexity of such approaches is the culprit for me. If you have a super simple mapping that doesn't require configuring the library, why not do it on your own? And if the mapping is complex, such libraries are usually more complex than doing it by hand.
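If the reflection overhead ever does matter, the usual first mitigation is to do the property discovery once per type pair and cache the result. Here is a hedged sketch of that idea (this is not how AutoMapper actually works internally - real libraries go further, e.g. compiling expression trees - it is just the simplest caching variant of the `Mapper` shown above):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Reflection;

public static class CachedMapper
{
    // Cache the matched property pairs per (source, target) combination,
    // so the reflection-based discovery only runs on the first call for each pair.
    private static readonly ConcurrentDictionary<(Type, Type), (PropertyInfo In, PropertyInfo Out)[]> Cache = new();

    public static TResult Map<TIn, TResult>(TIn obj) where TResult : new()
    {
        var pairs = Cache.GetOrAdd((typeof(TIn), typeof(TResult)), key =>
            key.Item1.GetProperties()
                .Select(p => (In: p, Out: key.Item2.GetProperty(p.Name)))
                .Where(t => t.Out != null && t.Out.CanWrite && t.Out.PropertyType == t.In.PropertyType)
                .Select(t => (t.In, t.Out!))
                .ToArray());

        var result = new TResult();
        foreach (var (input, output) in pairs)
        {
            output.SetValue(result, input.GetValue(obj));
        }

        return result;
    }
}
```

The per-property `GetValue`/`SetValue` calls still go through reflection, but the lookup and matching work is paid only once per type pair instead of on every call.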

AutoMapper is just an example here. And please feel free to use it - I, personally, am just not a big fan of it.


I hope I could give you a small introduction to how those libraries work and also give you a good overview of the critical points to consider. As always, the source code is attached in the resources section.


  • Source code to this blog post: here
  • All my sample code is hosted in this repository: here
