I should start by stating that IoC and DI are not unique to ASP.NET MVC. They have been around for a long time; it's just that MVC supports these notions particularly well. IoC is a principle, or an architectural pattern. There are quite a few definitions of IoC, but the idea behind it is always the same - it helps you towards a loosely coupled architecture. According to one school of thought, DI is one way in which IoC can be applied; according to another, DI is IoC. So far, you may be none the wiser, so I will use a bit of code based on the Contact Manager ASP.NET MVC tutorial to help illustrate the basic problem that they are intended to solve:
public class ContactController : Controller
{
    private ContactManagerModelDataContext entities = new ContactManagerModelDataContext();

    public ActionResult Index()
    {
        return View(entities.Contacts.ToList());
    }
}

public class ContactController : Controller
{
    private ContactManagerEntities entities = new ContactManagerEntities();

    public ActionResult Index()
    {
        return View(entities.ContactSet.ToList());
    }
}
Just to clear up any initial confusion, yes - the example above is more or less the same code twice. The first version uses LINQ to SQL and the second uses the Entity Framework for data access. Both versions have a dependency on the data access service or layer, in that neither controller can do its thing without it. However, in both versions, the specifics of the dependency are hardcoded into the Controller. That's called "Tight Coupling". In the first version, a LINQ to SQL DataContext is instantiated and referenced, and a System.Data.Linq.Table<T> object is passed to the View. In the second, an Entity Framework ObjectContext is referenced, and a System.Data.Objects.ObjectQuery<T> is passed to the View.
There is no separation of concerns here in that the data access layer is embedded in the Controller. So what's the problem with that? Well, imagine that the two versions are an example of "before" and "after". In other words, you began by using LINQ to SQL, and then were required to change the data access to the Entity Framework (or ADO.NET, nHibernate, Typed DataSets or any number of other methods). OK, you think - it's just a couple of lines that were changed. Now imagine that these Controllers had 20 or 30 Actions, and that there are 20 or 30 Controllers. Now you begin to see the problem...
"Oh, but that will never happen to me", you say. And I can to an extent sympathise with that: the only data access changes I have ever made to an application were through my own choice. Until a couple of weeks ago. We weren't expecting it at all. But it happened nevertheless and was forced on us.
When you view Data Access as a "component" of your application, you should also start to see that there are other areas in an application which could be viewed as components. An emailing service, a logging service, caching, validation etc. All of these could be implemented as components. You may already be using ready-built components such as those found within the Enterprise Library. They could all potentially become dependencies of Controllers or even of each other. And all could be subject to change. You might decide to experiment with one, and then after some work, decide it doesn't suit you, so you need to exchange it for another. That happens to me quite often.
Another problem with the tightly coupled examples is with Unit Testing. If you are taking the trouble to read this article and haven't got round to a structured testing regime for your applications yet, chances are that you will. At the moment, if you write a test for the Index Action of the ContactController, you will cause a call to the database. If you take a Test Driven Development approach, you probably won't even have a database at this point, let alone any useful data in it. And if you have created a database, you need to stuff it with dummy data so that it can return something tangible. And testing against databases (when you have hundreds of automated tests to run) is slow. So you create a Mock. A mock in this case is a service that simulates or mimics the database. It will generate collections for you, but you need to replace the data access code in the Controller so that the Controller is dependent on the mock object instead of connecting to and querying the database. And you need to do that every time you run the tests, and then undo it again, which is time-consuming and messy.
Dependency Injection
Inversion of Control really describes what happens when you move away from a procedural programming style, where each line of code is run in turn, to a more event-driven style (for example), where code is fired as a result of external factors. In other words, in procedural programming, control of what happens and when is entirely in the hands of the programmer. In an event-driven application, the control over what happens and when is inverted, or placed in the hands of the user. One of the benefits of moving away from a procedural approach to a more object-oriented style of application development is that you need a lot less boilerplate or repetitive code. In a procedural application, if there was a need to make 4 calls to a database within a function, you would have to write code to connect to the database and interact with it 4 times. When you run the programme, it runs from top to bottom. The code is entirely in control of that. Taking a more abstract approach allows you to write the database code once (as a service or component), and to provide a means of supplying it to the function that needs it at the point it is needed (if at all). In other words, the dependency (the data service) is injected into the calling function.
It's a bit like the difference between having to walk around with this lot in your toolbag:
and this:
Imagine that the main body of the tool in the second image is a Controller. Each of the attachments that plug into it are different data access components. One is for LINQ to SQL, one for the Entity Framework and the last one uses nHibernate internally. The reason that they can so easily be swapped in and out is that they all share a common interface with the main body of the tool. The main body has been designed to expect a component that has been tooled to fit. So long as additional attachments implement the correct interface, there is no reason why you can't build a lot more - a toothbrush, for example, or an egg whisk - and be sure that they will all fit. Ok - this example isn't perfect: the attachment or component is not providing a service to the body of the tool so the analogy falls down a little, but the principle behind common interfaces should be clear.
This brings us onto a key Design Principle - Code to an Interface, not an Implementation.
Right. First, we need an interface, which will define the methods that the data access component will implement. In this example, there is only one method so far:
public interface IContactRepository
{
    IList<Contact> GetContacts();
}
Now we move the actual data access out of the Controller into a separate class that implements the IContactRepository interface:
public class ContactManagerLinqRepository : IContactRepository
{
    public IList<Contact> GetContacts()
    {
        ContactManagerModelDataContext entities = new ContactManagerModelDataContext();
        return entities.Contacts.ToList();
    }
}
This particular example uses LINQ to SQL, but the Entity Framework version is very similar:
public class ContactManagerEntityRepository : IContactRepository
{
    public IList<Contact> GetContacts()
    {
        ContactManagerEntities entities = new ContactManagerEntities();
        return entities.ContactSet.ToList();
    }
}
If we go back to the Controller, the change required to make use of the Entity Framework version is as follows:
public class ContactController : Controller
{
    private readonly IContactRepository iRepository;

    public ContactController() : this(new ContactManagerEntityRepository())
    {
    }

    public ContactController(IContactRepository repository)
    {
        iRepository = repository;
    }

    public ActionResult Index()
    {
        return View(iRepository.GetContacts());
    }
}
When the parameterless constructor is called, it automatically calls the second constructor that has the IContactRepository parameter. That's what the : this(new ContactManagerEntityRepository()) does. It's called constructor chaining. And it is at that point that the dependency (the ContactManagerEntityRepository) gets injected into the Controller. But now, if you look at the Index Action, you can see that the GetContacts() method call is made against the IContactRepository interface instead of an instance of a concrete implementation of that interface (which is what the ContactManagerEntityRepository is).
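Incidentally, that second constructor is also what makes the Unit Testing problem described earlier go away. As a rough sketch (using MSTest here, and a made-up FakeContactRepository class - both are illustrative assumptions, not part of the Contact Manager tutorial), a test can now hand the Controller an in-memory repository and never touch the database:

using System.Collections.Generic;
using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A throwaway in-memory implementation of IContactRepository, purely for testing.
public class FakeContactRepository : IContactRepository
{
    public IList<Contact> GetContacts()
    {
        return new List<Contact> { new Contact(), new Contact() };
    }
}

[TestClass]
public class ContactControllerTests
{
    [TestMethod]
    public void Index_Passes_The_Contacts_To_The_View()
    {
        // Inject the fake through the second constructor - no database is involved.
        var controller = new ContactController(new FakeContactRepository());

        var result = controller.Index() as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual(2, ((IList<Contact>)result.ViewData.Model).Count);
    }
}

The test runs entirely in memory, so it is fast and repeatable, and there is no database full of dummy data to set up or tear down.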
If you wanted to swap the EF based data access component out and replace it with the LINQ to SQL version, you would simply have to make one change to the Controller. You would no longer pass in an instance of the ContactManagerEntityRepository class into the chained constructor. You would make that ContactManagerLinqRepository instead:
public class ContactController : Controller
{
    private readonly IContactRepository iRepository;

    public ContactController() : this(new ContactManagerLinqRepository())
    {
    }

    public ContactController(IContactRepository repository)
    {
        iRepository = repository;
    }

    public ActionResult Index()
    {
        return View(iRepository.GetContacts());
    }
}
This is a lot better. The Controller is a lot more loosely coupled to the nature of the component. And this is how both the Contact Manager and the Nerd Dinner samples implement DI. Changes can be made quite simply, although if you have a lot of dependencies things can be a little more complicated. You could use Find and Replace to update multiple controllers, so long as you can be sure you don't accidentally alter other files where the Replace operation breaks things. Hmmm... Do you hear a faint creaking sound in the far off distance? Maybe this approach isn't that rock solid after all, and perhaps that's why the approach outlined above is also known rather derogatorily as "Poor Man's Dependency Injection".
IoC Containers
Enter the Inversion of Control Container. There is a massive problem with these: Their name. I could go on a rant here about the plethora of simple concepts being totally obscured by the mangling of the English language, but that's for another day. An IoC Container's job is quite simple. You use one to create a mapping between interfaces and concrete types. The containers you tend to see talked about most often for .NET are:
Unity
Castle Windsor
StructureMap
Spring.NET
Ninject
Autofac
At runtime, they take care of the responsibility of passing in the correct type. There are lots of IoC containers available. Many of them are open source or freely available. Most of them share the same features, in that they will resolve dependencies at constructor level (as is required in the above examples), or at property or method level. Generally, you would use an XML file to create the mappings - perhaps the web.config file or the container's own configuration file. Some of them also provide an API for configuration in code. On the plus side, XML configuration doesn't require you to recompile when you make changes. On the downside, I dislike XML config files.
Using StructureMap
StructureMap, at a basic level, is simple to use. It only takes a couple of steps. The first is to change the Controller code so that the interface is passed in through the constructor:
public class ContactController : Controller
{
    private readonly IContactRepository iRepository;

    public ContactController(IContactRepository repository)
    {
        iRepository = repository;
    }

    public ActionResult Index()
    {
        return View(iRepository.GetContacts());
    }
}
You see now that the original empty constructor with its chain has gone. The thing is that the default ControllerFactory used by ASP.NET MVC expects controllers to have parameterless constructors, so we need to create our own ControllerFactory and get it to pass requests for controllers through StructureMap. StructureMap will examine each one as it comes, look for dependencies in the constructors, and then attempt to resolve those dependencies with concrete types that have been registered with it. So a new ControllerFactory class that inherits DefaultControllerFactory is created:
using System;
using System.Web.Mvc;
using StructureMap;

namespace ContactManager.IoC
{
    public class DependencyControllerFactory : DefaultControllerFactory
    {
        protected override IController GetControllerInstance(Type controllerType)
        {
            return ObjectFactory.GetInstance(controllerType) as Controller;
        }
    }
}
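One small caveat, and an assumption on my part rather than part of the original tutorial: if a request arrives that doesn't map to any controller, controllerType will be null and ObjectFactory.GetInstance will throw. A slightly more defensive sketch of the same method might hand that case back to the base class, which responds with the usual 404:

protected override IController GetControllerInstance(Type controllerType)
{
    // Let the default factory deal with requests that don't resolve to a controller type.
    if (controllerType == null)
    {
        return base.GetControllerInstance(controllerType);
    }

    // Ask StructureMap to build the controller, resolving any constructor dependencies.
    return ObjectFactory.GetInstance(controllerType) as Controller;
}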
The final thing that needs to be done is to register the new ControllerFactory in the Global.asax.cs, along with setting up StructureMap's mappings between interfaces and concrete types. Within the Application_Start() method in Global.asax.cs we add:
ControllerBuilder.Current.SetControllerFactory(new DependencyControllerFactory());
to sort out the first bit, and
ObjectFactory.Initialize(registry => registry
    .ForRequestedType<IContactRepository>()
    .TheDefaultIsConcreteType<ContactManagerLinqRepository>());
to fulfil the second bit. And that's it. How easy was that? OK - a lot of credit goes to Jeremy Miller, who is the driving force behind StructureMap itself, but behind that is the fact that IoC containers are simpler than most people who haven't used them realise. There is obviously a whole lot more that StructureMap and other containers can manage for you, but they are not difficult to get to grips with. If you find the documentation for one of them too obscure or lacking, try a different one.
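To recap, here is a sketch of how Application_Start might end up looking with both pieces in place (the RegisterRoutes call is the one generated by the standard MVC project template, and is an assumption here rather than something covered above):

protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);

    // Map the IContactRepository interface to the concrete type StructureMap should supply.
    ObjectFactory.Initialize(registry => registry
        .ForRequestedType<IContactRepository>()
        .TheDefaultIsConcreteType<ContactManagerLinqRepository>());

    // Route all controller creation through the StructureMap-aware factory.
    ControllerBuilder.Current.SetControllerFactory(new DependencyControllerFactory());
}

And here is the payoff: switching the whole application over to the Entity Framework version is now a single change, from TheDefaultIsConcreteType<ContactManagerLinqRepository>() to TheDefaultIsConcreteType<ContactManagerEntityRepository>(). No Controllers need to be touched at all.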
Summary
Loosely coupling dependencies is pretty much always a good idea. Poor Man's Dependency Injection can be enough for smaller applications that you know won't grow too much, but you can now see that IoC containers are also not that mysterious after all.