Conceptual view

Why is using AutoMapper for DTO-to-Entity mapping a bad idea? AutoMapper does its job through reflection. When you use reflection you divert from the normal access paths and manipulate data you potentially shouldn't touch. In other words: we make our domain model mutable from everywhere. This makes it very hard to track down from where in the code a certain property can be modified. It also raises hard questions: how will you write tests if anything, from anywhere, can modify properties? Which part of your application will be responsible for enforcing the business rules? How can we ensure that we are in a consistent state?

Inconsistent state

If every part of your application can change properties directly in your domain model, you lose the ability to validate domain logic. Let's say we are working with triangles. When we receive an instance of a TriangleDto class we want to be sure that the values are consistent. We know that our triangle is valid if the sum of all the angles is 180°. How can we be 100% sure that the sum will always be 180? We can't, because we allowed AutoMapper to make changes in a way that is invisible to the rest of the code.
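A constructor that validates its own invariant rules this out. A minimal sketch, assuming an illustrative Triangle class (the names are not from any particular codebase):

```csharp
using System;

public sealed class Triangle
{
    public double AngleA { get; }
    public double AngleB { get; }
    public double AngleC { get; }

    public Triangle(double angleA, double angleB, double angleC)
    {
        // The invariant is checked once, here, and nowhere else.
        if (Math.Abs(angleA + angleB + angleC - 180.0) > 0.0001)
            throw new ArgumentException("The angles of a triangle must sum to 180°.");

        AngleA = angleA;
        AngleB = angleB;
        AngleC = angleC;
    }
}
```

Because the properties are get-only, every Triangle that exists was created through this constructor, so the invariant holds for every instance.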

Another example is conditional logic. When you set the value of a property, the values of the other properties are not available inside that property's setter. You therefore cannot enforce a rule such as: when property X has value A, property Y must (or must not) have value B.
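An intention-revealing method avoids this, because it changes both properties in one place. A sketch, with a hypothetical Shipment type and rule:

```csharp
using System;

public sealed class Shipment
{
    public string Status { get; private set; } = "Pending";
    public DateTime? DeliveredOn { get; private set; }

    public void Cancel() => Status = "Cancelled";

    // Both properties change together, so the cross-property rule
    // "a delivered shipment must have a delivery date, and a cancelled
    // shipment can never be delivered" is enforced in one place.
    public void MarkDelivered(DateTime deliveredOn)
    {
        if (Status == "Cancelled")
            throw new InvalidOperationException("A cancelled shipment cannot be delivered.");

        Status = "Delivered";
        DeliveredOn = deliveredOn;
    }
}
```

A mapper that writes Status and DeliveredOn independently bypasses this method and can produce a "Delivered" shipment with no delivery date.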

Thirdly, when you want to reuse domain logic inside a class (e.g. a value object), you can never be sure that the logic in the class was actually used and not bypassed by AutoMapper (by directly setting the value of a property, even though the property has a private setter). For example, when you place an order you want to be sure that the amount is a positive number.
If you use a value object with validation, then you are always sure that when you get an instance of that class, the values are correct. E.g. the amount is positive, the ISBN number is well formed, the VIN number is valid. See: ValueObjects as a security solution
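A sketch of such a value object, assuming an illustrative Amount type:

```csharp
using System;

public sealed class Amount
{
    public decimal Value { get; }

    public Amount(decimal value)
    {
        // Validation lives in the value object itself, so an invalid
        // Amount simply cannot exist anywhere in the application.
        if (value <= 0)
            throw new ArgumentOutOfRangeException(nameof(value), "An order amount must be positive.");

        Value = value;
    }
}
```

Any code that receives an Amount can rely on it being valid without re-checking, which is the whole point: the check cannot be forgotten or bypassed.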

Fourth, if you use constructors or factory methods (a static method on the class itself that calls a private constructor), all your access paths are visible, which also makes testing easier: you can trace down exactly from where the domain was called. It all comes down to one thing: preventing the domain from becoming inconsistent.
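The factory-method shape looks like this. A sketch with hypothetical Order/Place names:

```csharp
using System;

public sealed class Order
{
    public decimal Total { get; }

    // Private constructor: no code outside this class can bypass Place().
    private Order(decimal total) => Total = total;

    // The factory method is the single, searchable entry point.
    // "Find All References" on Place() shows every code path that
    // can possibly create an Order.
    public static Order Place(decimal total)
    {
        if (total <= 0)
            throw new ArgumentException("An order total must be positive.", nameof(total));

        return new Order(total);
    }
}
```

In tests you call the same factory method the production code uses, so a test that constructs an Order exercises exactly the same rules.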

Technical view

If you use an ORM like Entity Framework, you will lose change tracking. Instead of modifying a tracked instance, AutoMapper forcefully creates a new instance via reflection. This becomes a real issue when you have relationships between entities: the ORM will recreate all the navigation properties, and the primary keys will collide (if you are lucky, because then your problem at least becomes visible). Instead of updating an existing child, it will always add a new one.
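The alternative is to apply the DTO to the instances the context is already tracking, matching children by key. A sketch with illustrative OrderLine/OrderLineDto types (plain collections stand in for the ORM's tracked navigation property):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed class OrderLineDto
{
    public int Id;
    public int Quantity;
}

public sealed class OrderLine
{
    public int Id { get; }
    public int Quantity { get; private set; }

    public OrderLine(int id, int quantity) { Id = id; Quantity = quantity; }

    public void ChangeQuantity(int quantity) => Quantity = quantity;
}

public static class OrderUpdater
{
    // Mutate the instances the ORM is already tracking, keyed by Id,
    // so the change tracker sees an UPDATE on the existing child row
    // instead of an INSERT of a duplicate one.
    public static void Apply(List<OrderLine> tracked, IEnumerable<OrderLineDto> dtos)
    {
        foreach (var dto in dtos)
        {
            var line = tracked.FirstOrDefault(l => l.Id == dto.Id);
            if (line != null)
                line.ChangeQuantity(dto.Quantity); // update in place
            else
                tracked.Add(new OrderLine(dto.Id, dto.Quantity)); // genuinely new child
        }
    }
}
```

Mapping the DTO onto a freshly constructed entity graph does the opposite: every child arrives as a new, untracked instance, which is exactly the duplicated-children problem described above.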

Author's opinion
What is the opinion of Jimmy Bogard (the author of AutoMapper)?

There is no two-way mapping because we never need two-way mapping. There was a point very early on where we were at a critical junction, and could decide to do two-way mapping. But we didn’t. Why? Because then our mapping layer would influence our domain model. I strongly believe in POCOs, and a very writeable domain model meant that POCOs were out. What exactly would two-way mapping do to our domain layer?

  • Force mutable, public collections, like “public EntitySet Categories { get; }” <- NO.
  • Make testing much, much harder, as we only ever wanted to update a portion of a domain model
  • Force our domain model to be mutable everywhere

So my question to those wanting two-way mapping:
  • What scenarios are you looking at doing two-way mapping?
  • What impact would two-way mapping have on the originating source type?
  • How would you test two-way mappings?