Sunday, May 10, 2015

SOLID – Single Responsibility Principle

“Every class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.”

Our Goal

We have a few major goals when writing software:
  1. Easily maintainable and modifiable: Software should be easily worked on by competent software engineers.
  2. Understandable: A competent software engineer should be able to grok your code with little effort.
  3. Work correctly

Followed well, points 1 and 2 lead to 3 remaining true decades from now. Even if you’re indifferent to fellow employees’ struggles with terrible code, revisiting your own poorly structured code from 6 months ago can be an arduous task.

Classes

This is where the Single Responsibility Principle comes in. We want to create small classes that do one thing well. This lightens the cognitive load when trying to understand how the parts of a system work together.

Let’s take a look at an example:

public class ClaimsHandler
{
    public void AddClaim(Claim claim)
    {
        if (claim.UserId == null)
            throw new Exception("userId is invalid.");
        if (claim.DateOfIncident > DateTime.Now)
            throw new Exception("This feels like a scam.");
 
        using (var sqlConnection = new SqlConnection(connectionString))
        using (var sqlCommand =
            new SqlCommand("dbo.CreateClaim", sqlConnection)
            { 
                CommandType = CommandType.StoredProcedure 
            }) {

            sqlCommand.Parameters.Add("@UserId", SqlDbType.UniqueIdentifier)
                .Value = claim.UserId;
            sqlCommand.Parameters.Add("@DateOfIncident", SqlDbType.DateTime2)
                .Value = claim.DateOfIncident;

            sqlConnection.Open();
            sqlCommand.ExecuteNonQuery();
            sqlConnection.Close();
        }
    }
}

Our ClaimsHandler class is responsible for validating a claim and saving the data. Even though this example is fewer than 30 lines long, it needs to be broken up. Specifically, the validation and save logic need to go into their own classes. Why? Both will change and grow more complex as time goes on.

Let’s fast-forward a year:

public class ClaimsHandler
{
    public void AddClaim(Claim claim)
    {
        if (claim.UserId == null)
            throw new Exception("userId is invalid.");
        if (claim.Amount <= 0)
            throw new Exception("amount must be greater than $0.");
        if (claim.DateOfIncident > DateTime.Now)
            throw new Exception("This feels like a scam.");
        if (claim.User.Status == "Regular" &&
            claim.Amount > 3500)
            throw new Exception(
                "Regular accounts are not allowed to claim over $3500.");
        if (claim.User.MaxClaimAllowed < claim.Amount)
            throw new Exception(string.Format("User max claim is {0}.",
                claim.User.MaxClaimAllowed));
        if (claim.ClaimType == "Rental" && claim.User.Status != "Gold")
            throw new Exception(
                "Rental claims are only available for Gold members.");
 
        using (var sqlConnection = new SqlConnection(connectionString))
        using (var sqlCommand =
            new SqlCommand("dbo.CreateClaim", sqlConnection)
            { 
                CommandType = CommandType.StoredProcedure 
            }) {

            sqlCommand.Parameters.Add("@UserId", SqlDbType.UniqueIdentifier)
                .Value = claim.UserId;
            sqlCommand.Parameters.Add("@DateOfIncident", SqlDbType.DateTime2)
                .Value = claim.DateOfIncident;
            sqlCommand.Parameters.Add("@Amount", SqlDbType.Money)
                .Value = claim.Amount;
            sqlCommand.Parameters.Add("@ClaimType", SqlDbType.NVarChar)
               .Value = claim.ClaimType;

            sqlConnection.Open();
            sqlCommand.ExecuteNonQuery();
            sqlConnection.Close();
        }
    }
}

Both the amount of validation and the number of items sent to the stored procedure have increased. The class has also become more difficult to read.

What if the save or validation logic is used somewhere else? Keeping the changes consistent across all of the variations is difficult.

Let’s break this class up:

public class ClaimsHandler
{
    private readonly ClaimsRepository _claimsRepository =
        new ClaimsRepository();
    private readonly ClaimsValidation _claimsValidation =
        new ClaimsValidation();
 
    public void AddClaim(Claim claim)
    {
        _claimsValidation.Validate(claim);
        _claimsRepository.SaveClaim(claim);
    }
}
 
public class ClaimsValidation
{
    public void Validate(Claim claim)
    {
        if (claim.UserId == null)
            throw new Exception("userId is invalid.");
        if (claim.Amount <= 0)
            throw new Exception("amount must be greater than $0.");
        if (claim.DateOfIncident > DateTime.Now)
            throw new Exception("This feels like a scam.");
        if (claim.User.Status == "Regular" &&
            claim.Amount > 3500)
            throw new Exception(
                "Regular accounts are not allowed to claim over $3500.");
        if (claim.User.MaxClaimAllowed < claim.Amount)
            throw new Exception(string.Format("User max claim is {0}.",
                claim.User.MaxClaimAllowed));
        if (claim.ClaimType == "Rental" && claim.User.Status != "Gold")
            throw new Exception(
                "Rental claims are only available for Gold members.");
    }
}

public class ClaimsRepository
{
    public void SaveClaim(Claim claim)
    {
        using (var sqlConnection = new SqlConnection(connectionString))
        using (var sqlCommand =
            new SqlCommand("dbo.CreateClaim", sqlConnection)
            { 
                CommandType = CommandType.StoredProcedure 
            }) {

            sqlCommand.Parameters.Add("@UserId", SqlDbType.UniqueIdentifier)
                .Value = claim.UserId;
            sqlCommand.Parameters.Add("@DateOfIncident", SqlDbType.DateTime2)
                .Value = claim.DateOfIncident;
            sqlCommand.Parameters.Add("@Amount", SqlDbType.Money)
                .Value = claim.Amount;
            sqlCommand.Parameters.Add("@ClaimType", SqlDbType.NVarChar)
                .Value = claim.ClaimType;

            sqlConnection.Open();
            sqlCommand.ExecuteNonQuery();
            sqlConnection.Close();
        }
    }
}

We are now free to modify ClaimsValidation, ClaimsRepository, and ClaimsHandler separately. If either the validation or the save logic changes, we don’t have to modify ClaimsHandler.
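
Taking this a step further, ClaimsHandler could depend on interfaces instead of creating its collaborators itself. A minimal sketch, with hypothetical IClaimsValidation and IClaimsRepository interfaces injected through the constructor:

public interface IClaimsValidation { void Validate(Claim claim); }
public interface IClaimsRepository { void SaveClaim(Claim claim); }

public class ClaimsHandler
{
    private readonly IClaimsValidation _claimsValidation;
    private readonly IClaimsRepository _claimsRepository;

    // The dependencies are injected, so ClaimsHandler never needs to
    // change when the concrete validation or persistence classes do.
    public ClaimsHandler(IClaimsValidation claimsValidation,
        IClaimsRepository claimsRepository)
    {
        _claimsValidation = claimsValidation;
        _claimsRepository = claimsRepository;
    }

    public void AddClaim(Claim claim)
    {
        _claimsValidation.Validate(claim);
        _claimsRepository.SaveClaim(claim);
    }
}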

Your class should only have one reason to change.

Systems

The Single Responsibility Principle applies not only to your classes but extends down into your methods and variables. More importantly, it extends up to individual systems and the company as a whole.

Let’s say we have a Calculate Service (one application) that contacts some other services and calculates some data before returning a result.


Our Calculate Service is already doing too much. It’s responsible for:

  • Taking input from the Client Service
  • Requesting data from Service A
  • Requesting data from Service B
  • Calculating a result
  • Returning the result to the Client Service


Our application is brittle because it's doing too much. Making a change to one responsibility has the potential to impact all of the others. What happens when Data Service A’s API changes? What about when the calculations change? Each of the responsibilities listed above should be its own application.

Let’s break this up:


This system looks more complex, but all of this complexity was already present in the single Calculate Service. Moreover, it comes with the benefit of long-term maintainability. If any responsibility changes, we only need to modify that application and deploy it alone.

Another benefit of this new system is that it scales well. Before breaking up the system, if we needed to process more calculations we had to deploy the whole application to more servers. Now, if the Calculator Application is the bottleneck, we can deploy just that one application to another server.

The Single Responsibility Principle is the most fundamental and important of the SOLID principles. Apply it liberally, everywhere.

Monday, April 6, 2015

Reactive Systems

Manifestos

From time to time a group will try to succinctly dictate a series of guiding principles. A great example is the Agile Manifesto, which taught software developers and their leadership to, essentially, adapt more than plan.

Until 2012 there was no succinct engineering explanation of the desired properties of a system built to take advantage of the cloud. The cloud used to be the territory of startups who couldn't afford to run their own servers, or web-scale companies who couldn't afford not to manage massive data centers around the world.

More recently, companies of every size have seen the need for scalable systems. Due to this massive increase in interest, we are starting to see a consensus form on the best practices for services in cloud environments. Enter the Reactive Manifesto.

Reactive Manifesto

The Reactive Manifesto was released in June 2012 by the good folks at Typesafe and gurus like Erik Meijer.

The word "Reactive" is used because it's advocating systems that "reacts" to changes:

  • React to Messages (Be Message Driven)
  • React to Load (Be Elastic)
  • React to Failures (Be Resilient)
  • React to Users (Be Responsive)

Isolated Components

Before diving into the properties of a reactive system I think it's helpful to understand what an isolated component is.

An isolated component is a self-contained, encapsulated, and isolated process. Your component could be a Windows service, a Java daemon, an Erlang module, a RESTful API endpoint, or any number of other technologies. It just has to isolate its work from the other components inside your system.



A and B are separate components. A and B do different work. They are isolated components.

Message-Driven

Having a Message-Driven system means relying on asynchronous messages passed between isolated components. We want to send a message to a different component in our system and have it read the message and do some work.

Let's take a look at an example:


  1. Component A sends a message to component B.
  2. B reads the message and does some work.

While this is a simple concept, it's also powerful (see the sketch after this list) because:
  • A and B are Loosely Coupled:
    • A and B can be changed separately with little fear of one component's changes affecting the other.
    • A and B do not need to be on the same piece of hardware or even in the same building. A can place a message on a Message Queue and it will eventually make it to B.
    • B does not need to work on the message as soon as A is done. B can work on the message when it has the capacity to.
  • B is Non-Blocking to A:
    • A doing work is not tied to B being able to do work. After A sends the message to B, A is free to work on its next task.
  • B can give Back-Pressure to A:
    • If B is falling behind on its work, it can tell A to slow down or stop sending messages.
  • Because we are using a Message Queue we achieve Location Transparency:
    • A does not have to know where B is in physical space. A only needs to know the name of B's queue. The message will eventually make it to B.
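
To make this concrete, here is a minimal sketch using an in-process BlockingCollection as a stand-in for a real message queue. The bounded capacity gives us a crude form of back-pressure (A blocks when B falls too far behind), and A otherwise fires and forgets:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class MessageDrivenSketch
{
    static void Main()
    {
        // Bounded queue: if B falls 100 messages behind, Add blocks,
        // giving A a crude form of back-pressure.
        var queue = new BlockingCollection<string>(boundedCapacity: 100);

        // Component A: sends messages and moves on. A's work is not
        // tied to B finishing each message (non-blocking).
        var componentA = Task.Run(() =>
        {
            for (var i = 0; i < 1000; i++)
                queue.Add("message " + i);
            queue.CompleteAdding();
        });

        // Component B: reads messages and does the work when it has
        // the capacity to.
        var componentB = Task.Run(() =>
        {
            foreach (var message in queue.GetConsumingEnumerable())
                Console.WriteLine("B processed " + message);
        });

        Task.WaitAll(componentA, componentB);
    }
}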

Elastic

Your system needs to be elastic. Not only must your system be designed so that adding capacity is easy, it also has to add or remove that capacity automatically (a toy sketch follows the lists below). When the Reactive Manifesto was first introduced this property was named "Scalable." However, the authors didn't feel that name emphasized the automatic handling of capacity, so it was changed to "Elastic."

A system has to be able to:
  • Scale up
  • Scale down
  • Detect when a component has failed and relaunch it

This ensures the following features:
  • Resilience:
    • If a component dies it will be relaunched.
  • Efficient use of resources:
    • If the system is on a public cloud, it is only charged for the resources it actually needs.
    • If it's in a private cloud, lower-priority systems gain access to resources as they become available.
  • Responsiveness:
    • If the system comes under heavy load we can scale to meet demand.
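
As a toy illustration (with in-process tasks standing in for servers), an elastic component might watch its queue depth and grow or shrink its own worker pool. The thresholds here are arbitrary:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class ElasticSketch
{
    private static readonly ConcurrentQueue<string> Queue =
        new ConcurrentQueue<string>();
    private static int _workerCount;

    static void Main()
    {
        // Control loop: react to load instead of provisioning for peak.
        while (true)
        {
            // Scale up: the queue is deep and we are under our cap.
            if (Queue.Count > 100 && _workerCount < 10)
                StartWorker();

            Thread.Sleep(1000);
        }
    }

    private static void StartWorker()
    {
        Interlocked.Increment(ref _workerCount);
        Task.Run(() =>
        {
            string message;
            // Scale down: a worker that runs out of work exits.
            while (Queue.TryDequeue(out message))
                Console.WriteLine("processed " + message);
            Interlocked.Decrement(ref _workerCount);
        });
    }
}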

Resilience

Resilience is a measure of how quickly your system can recover from:
  • Software Failures
  • Hardware Failures
  • Connection Failures
Because each component encapsulates its own work, failures do not propagate to other components. Moreover, if a server dies or a component crashes, our system will relaunch the component somewhere else.

Responsive

The system should provide rapid and consistent response times even:
  • Under heavy load
  • When failures are encountered
The system responding in an untimely manner is not just poor performance; it's considered a failure and should be dealt with automatically. A minimal timeout sketch follows.
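
One common way to enforce this in code is a deadline: if the work does not complete in time, treat it as a failure instead of waiting. A minimal sketch (the names are hypothetical):

using System;
using System.Threading.Tasks;

public static class Deadline
{
    // Treat a slow response as a failure rather than waiting forever.
    public static async Task<string> WithDeadlineAsync(
        Task<string> work, TimeSpan deadline)
    {
        var finishedFirst = await Task.WhenAny(work, Task.Delay(deadline));
        if (finishedFirst != work)
            throw new TimeoutException("Deadline exceeded; treat as a failure.");
        return await work;
    }
}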

Four Principles Interacting

At this point it should be obvious that the principles overlap. Having a responsive system also means having a resilient system, for the same basic reasons. This is usually demonstrated with the following diagram:


Message-Driven supports all of the other properties of Reactive Systems. Elastic and Resilient support each other and Responsive.

An N-Tier Application

Let's try to apply these principles. We want to build a system that:
  • Takes a request
  • Contacts an external resource for some data
  • Does a calculation
  • Stores the result

An N-Tier system would look like:



This design is fine for a small system that does not experience a lot of traffic. Limited to thousands of requests a day, this is a perfectly appropriate solution. However, if it gets hundreds of thousands or millions of requests, we need to start breaking it apart into isolated components.

This design also suffers from a number of problems:
  • All work for each request is done all at once:
    • If any part of the system fails, including the External Resource (which our system has no control over), the whole request fails and must be retried.
    • Parts of our system sit idle waiting for other parts to finish. The Business Layer cannot do calculations until a result is returned from the External Resource.
  • In order to scale this application we have to deploy the whole system multiple times.
  • If we need to update one piece of the system, there is a higher probability of affecting functionality in a different layer.

A Reactive Implementation

Let's take a look at a reactive implementation.



While this system is more complicated it provides a number of benefits:
  • Independent Scalability:
    • If Calculator takes more time to do work the system can spin up more instances of it independent of the rest of the system.
  • Isolated Components:
    • If Calculator fails or is slow it will not stop Data Retriever or DB updater from working.
    • The External Resource's impact to our system is isolated to the Data Retriever component.
    • I can make changes to any component with little fear of affecting the other components.
  • These components can be spread across the globe and I don't have to know their locations.

It's All Been Done Before

The principles outlined in the Reactive Manifesto are not new; Erlang was applying them in the '80s. There's also significant overlap with Service-Oriented Architecture and microservices. However, having a succinct and accessible way to communicate a complex topic like cloud architecture is a great way to get discussions going inside companies.

Sunday, March 29, 2015

Dapper - A Simple ORM

Entity Framework

Entity Framework is a full-featured ORM. This is nice when performance is not critical but rapid development is. Writing LINQ statements instead of stored procedures can speed implementation dramatically.

EF's ease of use has trade-offs. All the behind-the-scenes work means slower performance and frustrating gotchas. Occasionally EF's generated SQL differs from the desired result, which inevitably necessitates a greater understanding of EF's inner workings. At that point it's often easier to write your own SQL.

Dapper

Dapper is a micro-ORM. It focuses on simplifying database interactions with extension methods for query and execute commands, and it auto-maps results to POCOs.

That's more or less it. Because Dapper isn't managing relationships and tracking data in memory, its performance is an order of magnitude better than a full ORM's.

According to the "Performance of SELECT mapping over 500 iterations - POCO serialization" section of Dapper's GitHub page:
  • Hand Coded: 47ms
  • Dapper: 49ms
  • Entity Framework: 631ms

There is almost no performance hit over hand-coded ADO.NET because Dapper does so little.

Stored Procedure Example

Let's write code to return a list of Diamond Club Members from a Stored Procedure.

Here's our POCO:

public class Member
{
    public string Guid { get; set; }
    public string Name { get; set; }
    public string MemberType { get; set; }
}

Let's look at an ADO.NET example:

private IEnumerable<Member> GetAllDiamondClubMembers()
{
    using (var sqlConnection = new SqlConnection(connectionString))
    using (var sqlCommand = new SqlCommand(
        "dbo.GetAllMembers", sqlConnection){
             CommandType = CommandType.StoredProcedure }) {

        sqlCommand.Parameters.Add("@MemberType", SqlDbType.VarChar)
            .Value = "DiamondClub";

        sqlConnection.Open();

        var members = new List<Member>();

        using (var sqlDataReader = sqlCommand.ExecuteReader())
        {
            while (sqlDataReader.Read())
            {
                members.Add(new Member
                {
                    Guid = sqlDataReader["Guid"].ToString(),
                    Name = sqlDataReader["Name"].ToString(),
                    MemberType = sqlDataReader["MemberType"].ToString()
                });
            }
        }

        sqlConnection.Close();

        return members;
    }
}

In the above example we are:
  1. Creating a SQL Connection
  2. Creating a SQL Command
  3. Loading the SQL Command
  4. Opening the SQL Connection
  5. Executing the Stored Procedure
  6. Iterating through the returned values
  7. Mapping each individual value
  8. Closing the connection
  9. Returning the results

Dapper dramatically simplifies this:

private IEnumerable<Member> GetAllDiamondClubMembers()
{
    IEnumerable<Member> results;

    using (var sqlConnection = new SqlConnection(connectionString))
    {
        sqlConnection.Open();
        results = sqlConnection.Query<Member>("dbo.GetAllMembers",
            new { MemberType = "DiamondClub" },
            commandType: CommandType.StoredProcedure);
    }

    return results;
}

Now we:
  1. Create the SQL Connection
  2. Open the SQL Connection
  3. Run the Query
  4. Return the results

Dapper has taken care of the rudimentary details for us. This is a healthy compromise between Entity Framework's performance hit and ADO.NET's boilerplate.
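
For writes, Dapper offers an Execute extension method with the same shape; it runs a command and returns the number of rows affected. A minimal sketch against a hypothetical dbo.AddMember stored procedure:

private void AddMember(Member member)
{
    using (var sqlConnection = new SqlConnection(connectionString))
    {
        sqlConnection.Open();
        // Execute maps the anonymous object's properties to the
        // stored procedure's parameters.
        sqlConnection.Execute("dbo.AddMember",
            new { Name = member.Name, MemberType = member.MemberType },
            commandType: CommandType.StoredProcedure);
    }
}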

Saturday, March 28, 2015

Microsoft Unity IoC Container

SOLID

Because we’re following SOLID code design we should have:
  • Small classes
  • Concrete classes that implement interfaces (with very few exceptions)
  • Any interactions with a concrete class should be done using the interface
  • Classes that other classes depend on should be injected into them during instantiation (preferably using the constructor)

A natural outgrowth of these rules is IoC.

IoC

IoC stands for Inversion of Control. It’s a byproduct of the Dependency Inversion principle. Instead of classes higher in the dependency chain determining which concretions of their dependencies to use, a separate context determines which concretions get used.

This is confusing, so let’s take a look at some sample code.

public class A
{
    public void LogIt()
    {
        var log = new Log();
        log.LogMe("I'm not testable!");
    }
}

public class Log
{
    public void LogMe(string message)
    {
        Console.WriteLine(message);
    }
}

In this example A creates an instance of Log and calls its LogMe method. A is higher in the dependency chain and determines which class to use to do the logging.

This is bad for a few reasons:
  • I can’t test A separately from Log. For larger classes with more dependencies this makes unit testing impossible. As the different possibilities (code paths or combinations) pile up, the testing burden grows exponentially.
  • If I want to change A's use of Log but preserve Log in its current form (because other classes use Log), I am forced to modify A as well. This is especially frustrating if I still want to call LogMe.

Let’s take a look at a more maintainable code base:

public class A : IA
{
    private readonly ILog _log;
 
    public A(ILog log)
    {
        _log = log;
    }

    public void LogIt()
    {
        _log.LogMe("A is now testable!");
    }
}

public interface IA
{
    void LogIt();
}

public class Log : ILog
{
    public void LogMe(string message)
    {
        Console.WriteLine(message);
    }
}
 
public interface ILog
{
    void LogMe(string message);
}

The above example's code has grown a bit because we now inject a concrete instance of Log, as the interface ILog, into A. A is no longer in charge of determining which concrete implementation of ILog is used. We've also created interfaces for both A and Log.

This is:
  • Testable: I can test A and Log separately.
  • Swappable: If I need to I can specify a different concretion of ILog with no modification to A.

Notice that the concrete implementation of ILog is injected into A’s constructor and held in a private field inside A. This is called Constructor Injection.

There’s also a concept called Property Injection, where a class exposes its dependencies as public properties. This is usually a bad idea because other classes can change your dependencies out from under you, as the sketch below shows.
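
Here is a sketch of the same class rewritten to use property injection; any code holding a reference to the instance can swap its logger mid-flight (PropertyInjectedA and SomeOtherLog are hypothetical):

public class PropertyInjectedA : IA
{
    // The dependency is exposed as a public, settable property.
    public ILog Log { get; set; }

    public void LogIt()
    {
        Log.LogMe("Anyone with a reference to me can replace my logger.");
    }
}

// Elsewhere in the code base:
// a.Log = new SomeOtherLog(); // A's behavior just silently changed.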

Enter MS Unity IoC Container

This leaves us with a problem: how do we organize and assemble all of these classes? In particular, if we have lots of small classes that call each other, there’s going to be a lot of wiring work up front.

You can manually assemble the classes when your application first starts, but that quickly becomes chaotic and hard to read.

How do we avoid the mess that manually assembling dependencies together creates? Inversion of Control (IoC) Containers to the rescue.

IoC Containers provide a framework for organizing and managing dependencies inside your application. We will be using Microsoft Unity IoC Container in this example to assemble our dependency chain.

Let’s take a look at some code:

public static void AssembleDependencies()
{
    var unityContainer = new UnityContainer();

    unityContainer.RegisterInstance<ILog>(new Log());
    
    unityContainer.RegisterType<IA, A>(
        new InjectionConstructor(
            unityContainer.Resolve<ILog>()
        )
    );
}

RegisterType tells Unity to create a new instance of the concrete class any time one is needed. In our example, if A gets used multiple times, Unity will create a new instance each time.

RegisterInstance tells Unity to create and use only one instance for a specific lifetime. More often than not you will only need one instance of a logger (Log) for your application, so here RegisterInstance only creates one.

This is kind of like the difference between using a Factory design pattern and a Singleton design pattern:
  • Factory is a creational pattern which uses factory methods to deal with the problem of creating objects without specifying the exact class of object that will be created.
  • Singleton is a design pattern that restricts the instantiation of a class to one object.

RegisterType is a kind of Factory pattern with more capability. You use it when a new instance of a class is needed each time that class is used.

RegisterInstance is a kind of Singleton pattern with more capability. You use it when you only want one instance of a class in the whole application. However, you can also tell Unity when to expire the instance and create a new one.

unityContainer.RegisterInstance<ILog>(new Log());

I used RegisterInstance for Log. In most applications you will only need one instance of a logger. In this case we create the instance of Log and associate it with ILog. Now, any time Unity is asked to resolve ILog, it will return that same Log instance as ILog.

unityContainer.RegisterType<IA, A>(
    new InjectionConstructor(
        unityContainer.Resolve<ILog>()
    )
);

I used RegisterType for A. Since A will be created more than once, we want to give Unity instructions about what to inject rather than create an instance now. InjectionConstructor keeps track of which dependencies should be injected into A's constructor.

Note that we can now inject A into another class by resolving IA:

new InjectionConstructor(
    unityContainer.Resolve<IA>()
)
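
Once everything is registered, the application root resolves the top of the dependency chain and Unity builds the rest (this assumes the container is made available outside of AssembleDependencies):

var a = unityContainer.Resolve<IA>();
a.LogIt(); // Writes "A is now testable!" using the single Log instance.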

We've only scratched the surface of Microsoft Unity's IoC capabilities, but I hope I've provided a good start toward developing more manageable code bases.

Thursday, March 26, 2015

Web API 2 Message Handlers and Filters

Message Handlers

Custom Message Handlers


Custom message handlers are used for cross-cutting concerns like logging or authorization.

Let's look at the message handler pipeline we will be building:



HttpServer gets the initial message from the network. FooHandler and LogEverythingHandler are custom handlers created for our cross-cutting concerns. HttpRoutingDispatcher is the last handler in the chain, responsible for sending the message to the HttpControllerDispatcher. The HttpControllerDispatcher then finds the appropriate controller and sends the message to it.

Let’s take a look at some code:

public class FooHandler : DelegatingHandler
{
    private readonly IBarProvider _provider;

    public FooHandler(IBarProvider provider)
    {
        _provider = provider;
    }

    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        System.Threading.CancellationToken cancellationToken)
    {           
        if (_provider.DoesRequestContainBar(request))
        {
            return await base.SendAsync(request, cancellationToken);
        }

        var response = request.CreateErrorResponse(HttpStatusCode.BadRequest, 
            "Request does not contain bar.");

        return response;
    }
}

The following three rules must be followed to create a custom message handler:
  1. Your class must inherit from the DelegatingHandler class.
  2. You must override the SendAsync method.
  3. You must call base.SendAsync to pass the message on to the next handler in the pipeline.

If we wanted to, we could create another class to log every request to our API.
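
A minimal sketch of that handler might look like this, with Debug.WriteLine standing in for a real logger:

public class LogEverythingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        System.Threading.CancellationToken cancellationToken)
    {
        // Log the request on the way in...
        System.Diagnostics.Debug.WriteLine(
            "Request: " + request.Method + " " + request.RequestUri);

        var response = await base.SendAsync(request, cancellationToken);

        // ...and the status code on the way back out.
        System.Diagnostics.Debug.WriteLine(
            "Response: " + (int)response.StatusCode);

        return response;
    }
}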

In order to tell Web API that we want these custom message handlers added to the pipeline, we need to add some code to the WebApiConfig class.

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // FooHandler's IBarProvider dependency has to be supplied here;
        // BarProvider is a stand-in concrete implementation of it.
        config.MessageHandlers.Add(new FooHandler(new BarProvider()));
        config.MessageHandlers.Add(new LogEverythingHandler());
    }
}

Execution Order

With our custom message handlers added to the pipeline the following becomes the order of execution:
  • HttpServer
  • FooHandler
  • LogEverythingHandler
  • HttpRoutingDispatcher
  • HttpControllerDispatcher

This illustrates how custom message handlers give us the ability to apply general rules to all of the messages coming into our APIs.

Filters

This quote beautifully illustrates the difference between Handlers and Filters:

“The major difference between the two is their focus. Message Handlers are applied to all HTTP requests. They perform the function of an HTTP intermediary. Filters apply only to requests that are dispatched to the particular controller/action where the filter is applied.”

Filters get applied to the individual controllers using attribute tags above a class or method.

Global Exception Filters

We can add exception filters to controller methods.

Let’s take a look at the following Filter and corresponding Controller.

public class UnauthorizedFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        if (context.Exception is UnauthorizedException)
        {
            context.Response =
                new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
    }
}

public class ProductsController : ApiController
{
    [UnauthorizedFilterAttribute]
    public Baz GetBaz(int id)
    {
        throw new UnauthorizedException("This method is not authorized.");
    }
}

In the above example, UnauthorizedFilterAttribute decorates ProductsController’s GetBaz method.

This gives us a clean separation between exception handling and controller logic. This also allows us to reuse the exception filter.

However, we can also make our exception filters global.

public sealed class GlobalExceptionFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(
        HttpActionExecutedContext httpActionExecutedContext)
    {
        if (httpActionExecutedContext != null)
        {
            if (httpActionExecutedContext.Exception is FooException)
            {
                var fooException = 
                    httpActionExecutedContext.Exception as FooException;
                httpActionExecutedContext.Response = 
                    new HttpResponseMessage(fooException.Code)
                {
                    Content = new StringContent(fooException.Message)
                };
            }
            else
            {
                httpActionExecutedContext.Response =
                    new HttpResponseMessage(
                        HttpStatusCode.InternalServerError)
                {
                    Content = new StringContent("There was a problem!")
                };
            }
        }
    }
}

Our GlobalExceptionFilterAttribute inherits from ExceptionFilterAttribute and overrides a single method, OnException. This class builds the response that should be sent back to the client.

public sealed class GlobalLogFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(
        HttpActionExecutedContext httpActionExecutedContext)
    {
        if (httpActionExecutedContext != null)
        {
            Logger logger = new Logger();
            logger.Error("There was an exception: {0}",
                httpActionExecutedContext.Exception.Message);
        }
    }
}

The GlobalLogFilterAttribute, in turn, handles logging the errors.

We now have a mechanism to:
  • Log exceptions at the controller level
  • Handle Errors that arise from the controller level

If all of our controller methods have the same logging and exception-response needs, we've saved a lot of coding. We've also ensured that fixes to the logging and exception responses don’t get partially implemented across only some controllers.

How do these “Global” exceptions get applied?

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.MapHttpAttributeRoutes();

        config.Filters.Add(new GlobalLogFilterAttribute());
        config.Filters.Add(new GlobalExceptionFilterAttribute());
    }
}

Inside the WebApiConfig class, HttpConfiguration exposes a global filter list to which custom filters can be added.