Mocking ASP.NET providers

When playing around with ASP.NET membership, I found myself in a situation where I wanted to mock the ASP.NET Providers. This is something the design of providers makes non-trivial. Mark Seemann summarises: “Since a Provider creates instances of interfaces based on XML configuration and Activator.CreateInstance, there’s no way to inject a dynamic mock.” See Provider is not a pattern.

I had a look around to see what others were doing. I found a post, Mocking membership provider, which proposes adding mocked providers to the provider collection dynamically. It seems like an elegant solution, but I couldn’t get it to work for me after a little playing.

In the end, I came up with a solution that is not the most elegant, but is very easy to use and to understand.

I create an implementation of each provider I want to mock. The implementation contains a mock of that provider type. Each method and property of my implementation forwards its calls to the mock. The mock is exposed via static members of the implementation, so that test code can interact with it.

An example implementation:

public class TestRoleProvider : RoleProvider
{
	public static void ResetMock()
	{
		Mock = new Mock<RoleProvider>();
	}

	public static Mock<RoleProvider> Mock { get; private set; }

	#region RoleProvider implementation

	public override void AddUsersToRoles(string[] usernames, string[] roleNames)
	{
		Mock.Object.AddUsersToRoles(usernames, roleNames);
	}

	public override string ApplicationName
	{
		get { throw new NotImplementedException(); }
		set { throw new NotImplementedException(); }
	}

	// Other implementations omitted
}

Note the static members controlling the mock at the top. Note also that I’ve simply implemented all methods and properties of RoleProvider as throwing NotImplementedException using Visual Studio tooling, and then updated individual implementations to forward calls to my mock as I need them.

Wiring up the provider framework to use this implementation is easy. Just add the following config to the app.config of your unit test project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
	<system.web>
		<roleManager defaultProvider="TestRoleProvider" enabled="true">
			<providers>
				<add name="TestRoleProvider"
					 type="TestProjectAssemblyName.TestRoleProvider, TestProjectAssemblyName" />
			</providers>
		</roleManager>
	</system.web>
</configuration>

Test code utilising this mock looks like the following:

[TestInitialize]
public void TestInitialize() 
{
	TestRoleProvider.ResetMock();
}
		
[TestMethod]
public void ReturnsNothingWhenNoUsersExist()
{
	var roles = new string[] { };
	TestRoleProvider.Mock.
		Setup(m => m.GetAllRoles()).
		Returns(roles);

	var result = new GetAllRolesQuery().Execute();

	Assert.IsTrue(!result.Any());
}

Generating multiple files from one T4 template

In the previous posts about T4 I first drove T4 generation from EF entity definitions, then used this to make EF POCO classes with certain properties implement an interface. Please read those posts before reading this one – in particular, the code in this post refers to code from the previous one.

In this post, I’ll extend what I’ve already built to handle multiple interfaces, and to generate a single file per interface.

For this example, I’m going to use two interfaces.

public interface IUserNameStamped
{
    string UserName { get; set; }
}

public interface ILookup
{
    string ContractorCode { get; set; }

    string Description { get; set; }
}

I want my EF POCOs to implement IUserNameStamped if they have a UserName property, and ILookup if they have ContractorCode and Description properties. I want the IUserNameStamped code in a file IUserNameStamped.cs, and the ILookup code in a file ILookup.cs.

By default, a T4 template will generate a single file with the same name as the template, and the extension defined by the <#@ output #> directive. The EntityFrameworkTemplateFileManager, used by the EF POCO template to generate a file per entity, is the secret to generating multiple files from a single template.
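In outline – this is a trimmed-down fragment of the full template shown at the end of this post – the pattern is:

```
<#
var fileManager = EntityFrameworkTemplateFileManager.Create(this);

fileManager.StartNewFile("ILookup.cs");
// ... write the ILookup interface and its implementing partial classes;
// all output from here goes to ILookup.cs ...

fileManager.StartNewFile("IUserNameStamped.cs");
// ... write IUserNameStamped and its implementing partial classes ...

fileManager.Process(true); // flush the generated files to disk
#>
```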

The other change needed to the T4 code we already have is to break it into reusable methods that can be shared by each interface being generated.

The method I’ve defined to generate a file for a given interface is CreateInterfaceFile, shown here with its supporting methods.

<#+
void CreateInterfaceFile(EntityFrameworkTemplateFileManager fileManager,  
	CodeGenerationTools code,
	EdmItemCollection itemCollection,
	string namespaceName, 
	Action interfaceWriter, 
	string interfaceName, 
	params string[] requiredProperties)
{
    fileManager.StartNewFile(interfaceName + ".cs");
	BeginNamespace(namespaceName, code);
	interfaceWriter();
	var entities = GetEntitiesWithPropertyOrRelationship(itemCollection,
		requiredProperties);
	foreach (EntityType entity in entities.OrderBy(e => e.Name))
	{
		WriteInterfaceImplementation(entity.Name, interfaceName);
	}
	EndNamespace(namespaceName);
}
#>
<#+
void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
	EdmItemCollection itemCollection, 
	params string[] requiredProperties)
{
	return itemCollection.GetItems<EntityType>().Where(entity => 
		EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship
	(EntityType entity, params string[] requiredProperties)
{
	return requiredProperties.All(
		requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
		|| entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteInterfaceImplementation(string entityName, string interfaceName)
{
#>

public partial class <#=entityName#> : <#=interfaceName#>
{
}
<#+
}

The parameters of CreateInterfaceFile:

  • The first three parameters are T4 and EF classes instantiated at the top of the template and passed in.
  • namespaceName is the namespace the interface and generated classes will belong to, obtained at the top of the template via code.VsNamespaceSuggestion().
  • interfaceWriter is an action that writes out the definition of the interface itself.
  • interfaceName is the name of the interface.
  • requiredProperties is an array of all the properties a class must have to be considered to implement the interface.

The logic is very simple:

  • The EntityFrameworkTemplateFileManager is used to start a file for the interface – all output now goes to this file until the next time StartNewFile is called.
  • The namespace is written.
  • The declaration of the interface is written.
  • Entities matching this interface are found using GetEntitiesWithPropertyOrRelationship (as explained in the previous blog post).
  • A partial class for each matching entity is written, with no content, simply stating that the class implements the interface in question.
  • The namespace is closed.

That’s about all there is to it. Once again, an extension to this code to match entity properties by type as well as name is left as an exercise for the reader.

Here is full source code:

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#><#@
 output extension=".cs"#><#

string inputFile = @"OticrsEntities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
	CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

EntityFrameworkTemplateFileManager fileManager = 
	EntityFrameworkTemplateFileManager.Create(this);
WriteHeader(fileManager);

#>
// Default file generated by T4. Generation cannot be prevented. Please ignore.
<#

CreateInterfaceFile(fileManager, 
	code,
	itemCollection, 
	namespaceName,
	WriteILookupInterface,
	"ILookup",
	"ContractorCode", "Description");

CreateInterfaceFile(fileManager, 
	code,
	itemCollection, 
	namespaceName,
	WriteIUserNameStampedInterface,
	"IUserNameStamped",
	"UserName");
	
fileManager.Process(true);

#>
<#+
void CreateInterfaceFile(EntityFrameworkTemplateFileManager fileManager,  
	CodeGenerationTools code,
	EdmItemCollection itemCollection,
	string namespaceName, 
	Action interfaceWriter, 
	string interfaceName, 
	params string[] requiredProperties)
{
    fileManager.StartNewFile(interfaceName + ".cs");
	BeginNamespace(namespaceName, code);
	interfaceWriter();
	var entities = GetEntitiesWithPropertyOrRelationship(itemCollection, 
		requiredProperties);
	foreach (EntityType entity in entities.OrderBy(e => e.Name))
	{
		WriteInterfaceImplementation(entity.Name, interfaceName);
	}
	EndNamespace(namespaceName);
}
#>
<#+
void WriteHeader(EntityFrameworkTemplateFileManager fileManager, 
	params string[] extraUsings)
{
    fileManager.StartHeader();
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System.Collections.Generic;

<#=String.Join(String.Empty, extraUsings.
		Select(u => "using " + u + ";" + Environment.NewLine).
		ToArray())#>
<#+
    fileManager.EndBlock();
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
	EdmItemCollection itemCollection, 
	params string[] requiredProperties)
{
	return itemCollection.GetItems<EntityType>().Where(entity => 
		EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(EntityType entity, 
	params string[] requiredProperties)
{
	return requiredProperties.All(requiredProperty => 
		entity.Properties.Any(property => property.Name == requiredProperty)
		|| entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteInterfaceImplementation(string entityName, string interfaceName)
{
#>

public partial class <#=entityName#> : <#=interfaceName#>
{
}
<#+
}

void WriteILookupInterface()
{
#>
/// <summary>
/// A lookup entity, that can be looked up by a ContractorCode
/// </summary>
public interface ILookup
{
    string ContractorCode { get; set; }
	
	string Description { get; set; }
}
<#+
}

void WriteIUserNameStampedInterface()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}
<#+
}
#>

Duck typing Entity Framework classes using T4 Templates

Duck typing is an interesting concept, and alien to C# generally. But using the techniques of my previous post about T4 and Entity Framework, it is possible to have your entities implement interfaces if they have the required properties, resulting in behaviour similar to duck typing. Please read the previous blog post before reading this one.

The previous blog post gives us code to implement interfaces for each entity in an object model. In order to provide “duck typing”, we will extend this to only implement the interface for an entity if that entity has the properties of the interface.

Fortunately System.Data.Metadata.Edm.EntityType gives us the ability to inspect the properties of an entity. For my purposes, I only check for properties by name, as I control my database and would never have the same column name with two different data types. Extension of this code to check property types as well as names is left as an exercise for the reader.

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
    EdmItemCollection itemCollection, params string[] requiredProperties)
{
    return itemCollection.GetItems<EntityType>().
        Where(entity => EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(
    EntityType entity, params string[] requiredProperties)
{
    return requiredProperties.All(
        requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
        || entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

Pretty simple stuff. EntityHasPropertyOrRelationship checks both the Properties (properties relating to simple database columns) and NavigationProperties (properties relating to foreign key relationships) for properties with the required names. If our entity has all the required properties, it’s a match.

GetEntitiesWithPropertyOrRelationship uses EntityHasPropertyOrRelationship to retrieve all the entities that have the required properties from our itemCollection.

I’ve blogged about further extending the template to handle multiple interfaces, with one file per interface.

Here’s the full code of the example from the last blog post, updated so entities only implement IUserNameStamped if they actually have a column called UserName.

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
    CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>
<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, extraUsings.
    Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
/// <remarks>
/// All OTICRS entities should have a username. If any entity fails to implement
/// this interface, it means the table needs the UserName column added to it.
/// </remarks>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

void WriteEntitiesWithInterface(
    EdmItemCollection itemCollection)
{
    foreach (EntityType entity in 
        GetEntitiesWithPropertyOrRelationship(itemCollection, "UserName").
        OrderBy(e => e.Name))
    {
        WriteEntityWithInterface(entity.Name);
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
    EdmItemCollection itemCollection, params string[] requiredProperties)
{
    return itemCollection.GetItems<EntityType>().Where(
        entity => EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(
    EntityType entity, params string[] requiredProperties)
{
    return requiredProperties.All(
        requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
        || entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Using T4 Templates to extend your Entity Framework classes

A set of entities I’m using with Entity Framework (I’m using EF POCO) have common properties, implying commonality between the entities. I didn’t want to use any form of inheritance within my object model to express this commonality, but I did wish to have the entity classes implement common interfaces. It’s easy to do this because entities are partial classes. Say, for example, all my entities have a string property UserName; I can define an interface to express this, and then have a partial implementation of the class that implements the interface.

public interface IUserNameStamped
{
    string UserName { get; set; }
}
    
public partial class Entity1 : IUserNameStamped
{
}
    
public partial class Entity2 : IUserNameStamped
{
}

So the POCO T4 template generates the “main” class definition for each entity, with all its properties, and then these partial classes extend each class, adding no new properties or methods, just the fact that each class implements the IUserNameStamped interface.

I quickly realised that I could use T4 in a similar manner to the EF POCO T4 template, in order to produce these partial classes automatically.

As I explained in my post about UserName stamping entities as they’re saved, all my entities have a UserName column. So all this template has to do is loop through all the entities in my object model, and write an implementation for each.

The main T4 logic is

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"OticrsEntities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
    CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>

Most of this is cribbed unashamedly from the EF POCO T4 template. Firstly we initialise some variables, the most interesting being itemCollection, which is what allows access to the entity metadata. We then write a header indicating the file is a generated file, start the namespace, write the actual declaration of the IUserNameStamped interface, write a partial class for each entity implementing the interface, and then end the namespace. The specifics of each method are:

<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, 
    extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

I think these three methods are fairly self-explanatory, other than the <# syntax that T4 uses to indicate code and text blocks.

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

Simply generates the interface definition.

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

Iterates through the entities.

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Writes an implementation of the IUserNameStamped interface for each entity.

So you can see it’s fairly simple to use T4 to generate C# code similar to that at the top of this blog post.

I’ve blogged about how I extended this code to make a certain set of entities with common properties implement a common interface.

I’ve also blogged about further extending the template to handle multiple interfaces, with one file per interface.

This is the full code of the T4 template:

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>
<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Automatically username stamping entities as they’re saved

The application I am currently working on has a requirement to audit which application user last created or updated database records. All tables in the database are required to have an nvarchar column UserName.

I didn’t want this concern to leak into my application. After some investigation I discovered that ObjectContext has the SavingChanges event that would be ideal for my purposes.

So the creation of my ObjectContext becomes

var entities = new MyEntities();
entities.SavingChanges += SetUserNameEvent;

I originally thought that SetUserNameEvent would have to use reflection to obtain and set the UserName property. However, I found a way to use T4 to generate code resulting in all entities with the UserName property implementing a common interface (IUserNameStamped). I’ve written a blog post talking about the T4 code.

So with all my entities implementing this common interface, SetUserNameEvent is then

/// <summary>
/// Sets the user name of all added and modified 
/// entities to the username provided by
/// the <see cref="UserNameProvider"/>. 
/// </summary>
private void SetUserNameEvent(object sender, EventArgs e)
{
    Contract.Requires<ArgumentException>(
        sender is ObjectContext, 
        "sender is not an instance of ObjectContext");
    var objectContext = (ObjectContext)sender;
    foreach (ObjectStateEntry entry in 
        objectContext.ObjectStateManager.GetObjectStateEntries(
            EntityState.Added | EntityState.Modified))
    {
        var stamped = entry.Entity as IUserNameStamped;
        Contract.Assert(stamped != null, 
            "Expected all entities implement IUserNameStamped");
        stamped.UserName = UserNameProvider.UserName;
    }
}

So here, we get all added and modified entries from the ObjectStateManager, and use these to obtain the entities and set their UserName. UserNameProvider is an abstraction, used because I have several applications utilising my object context, each with a different way to obtain the current application user. Note that my code is using Code Contracts.
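The post doesn’t show UserNameProvider itself; a plausible shape for the abstraction – the names here are illustrative, not from the original code – is a small interface with one implementation per application:

```csharp
// Hypothetical sketch – the post only says each application obtains
// the current user differently, so an interface fits naturally.
public interface IUserNameProvider
{
    string UserName { get; }
}

// An ASP.NET application might then supply:
public class HttpContextUserNameProvider : IUserNameProvider
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }
}
```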

One complication I’ve found is with child entities. Sometimes, I’ve found I have to add the child entity both to its parent object and to the object context, but sometimes it’s enough to simply add the child entity to its parent object. That is:

var entities = ObjectContextFactory.GetObjectContext();
var childEntity = new ChildEntity();
entities.ParentEntities.First().ChildEntities.Add(childEntity);
// entities.ChildEntities.AddObject(childEntity);
entities.SaveChanges();
// Sometimes UserName will not get set without the commented line above, 
// resulting in a NOT NULL constraint violation

I’ve found no rhyme or reason as to why the addition to the ObjectContext is only sometimes required; I’d love hints as to why this is.

Note I’m actually using the unit of work pattern for my application, and I use a unit of work factory rather than an object context factory, but that’s irrelevant to the use of the SavingChanges event in this fashion.

Extensible processing classes using reflection

I recently wanted to build an extensible set of processing classes. Each class can process certain objects it is provided.

I decided the simplest way to do this was to create a processor interface. The set of processing classes is then all classes that implement this interface. I then use reflection to find all the processors: that is, all implementations of the processor interface.

Assuming all processing is done on the object itself, the processor interface looks like this

public interface IProcessor
{
	bool CanProcess(ITarget target);

	void Process(ITarget target);
}

One of the processor implementations could look something like this

public class UpdateTotalProcessor : IProcessor
{
	public bool CanProcess(ITarget target)
	{
		return target.Items.Any();
	}

	public void Process(ITarget target)
	{
		target.Total = target.Items.Sum(item => item.Value);
	}
}

To utilise the processors, you’d end up with code similar to the following

private static readonly IEnumerable<IProcessor> Processors = InstancesOfMatchingTypes<IProcessor>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public void Process(ITarget target)
{
	foreach(IProcessor processor in Processors.Where(p => p.CanProcess(target)))
		processor.Process(target);
}

Note that with this implementation, multiple processors can potentially match, and therefore process, a target. Also note that there is no ordering; adding ordering would be an easy extension.

With this scheme, adding new processors is dead easy. Simply add a new implementation of IProcessor to the assembly, and it will be automatically picked up and used.
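The code above relies on a TypeInstantiator helper that isn’t shown in the post. A minimal sketch of what its InstancesOfMatchingTypes might look like, assuming every implementation has a parameterless constructor:

```csharp
// Hypothetical sketch – TypeInstantiator is not shown in the post.
// It finds every concrete class in an assembly that implements T,
// and instantiates each via its parameterless constructor.
public sealed class TypeInstantiator
{
	public static readonly TypeInstantiator Instance = new TypeInstantiator();

	public IEnumerable<T> InstancesOfMatchingTypes<T>(Assembly assembly)
	{
		return assembly.GetTypes()
			.Where(type => typeof(T).IsAssignableFrom(type)
				&& type.IsClass
				&& !type.IsAbstract)
			.Select(type => (T)Activator.CreateInstance(type))
			.ToList();
	}
}
```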

Providing classes that derive one type from another is also simple using this scheme.

public interface IDeriver
{
	bool CanDerive(ITarget target);

	IDerived Derive(ITarget target);
}

public class SampleDeriver : IDeriver
{
	public bool CanDerive(ITarget target)
	{
		return true;
	}

	public IDerived Derive(ITarget target)
	{
		return new Derived(target);
	}
}

Obviously, applying more than one deriver makes no sense in this context.

private static readonly IEnumerable<IDeriver> Derivers = InstancesOfMatchingTypes<IDeriver>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public IDerived Derive(ITarget target)
{
	IDeriver deriver = Derivers.FirstOrDefault(d => d.CanDerive(target));
	return deriver == null
		? null
		: deriver.Derive(target);
}

Once again, ordering is undefined, so if you have your derivers defined in such a way that more than one matches, you will end up with unpredictable results.

Updating the registry using .NET and LogParser

I have discovered a need to be able to search and replace registry values. I originally thought about using PowerShell but after reading this blog post about PowerShell performance with the registry, I decided to use .NET. I quickly encountered the idea of using LogParser to read the registry at high speed, and decided this was a fruitful avenue.

The background to this need is that when you use a custom profile location, you can only use Chrome as your default browser by editing the registry. I did this manually once. Then when I found the keys had reset themselves, I decided coding something to update the registry for me would be interesting.

The first stage was to get the LogParser COM interop built. This was pleasantly easy: as simple as running tlbimp "C:\Program Files (x86)\Log Parser 2.2\LogParser.dll" /out:Interop.MSUtil.dll, adding the DLL as a reference to my project, adding using statements to Program.cs, and then writing some code. I started by getting the search going.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using LogQuery = Interop.MSUtil.LogQueryClass;
using RegistryInputFormat = Interop.MSUtil.COMRegistryInputContextClass;
using RegRecordSet = Interop.MSUtil.ILogRecordset;
using System.Diagnostics;
using Microsoft.Win32;
using System.Collections.Generic;

namespace FreeSpiritSoftware.ChromeRegistryCustomProfile
{
	public class Program
	{
		private const string DefaultChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" -- ""%1""";

		public static void Main(string[] args)
		{
			RegRecordSet rs = null;

			Stopwatch stopWatch = new Stopwatch();
			stopWatch.Start(); 
			try
			{
				LogQuery qry = new LogQuery();
				RegistryInputFormat registryFormat = new RegistryInputFormat();
				string query = string.Format(@"SELECT Path, ValueName from \HKCR, \HKCU, \HKLM, \HKCC, \HKU where Value = '{0}'", DefaultChromeCall);
				rs = qry.Execute(query, registryFormat);
				for (; !rs.atEnd(); rs.moveNext())
				{
					string path = rs.getRecord().toNativeString(0);
					string valueName = rs.getRecord().toNativeString(1);
					Console.WriteLine(path);
					Console.WriteLine(valueName);
					Console.WriteLine("--");
				}
			}
			finally
			{
				if (rs != null)
				{
					rs.close();
				}
			}
			stopWatch.Stop();
			Console.WriteLine(stopWatch.Elapsed.TotalSeconds + " seconds");
			Console.ReadKey(false);
		}
	}
}

You’ll see I explicitly reference the five registry root keys in the FROM clause of the query I give LogParser, even though I’m searching the whole registry. This is because when I tried FROM /, I got two results per root key of the registry, one using the abbreviated root key name, one using its full name (e.g. I’d get both HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command and HKCR\ChromeHTML\shell\open\command).

So once I had the code above working, the next step was to actually access and update the keys using Microsoft.Win32.Registry. This proved to be more complex than I had expected, as (a) you have to access the root keys as static properties of Registry, and (b) from a particular key, you can only access its immediate subkeys. I’m sure there are libraries that make matters simpler, but working around these limitations was easy enough.

To deal with the root keys, I created a dictionary to use to look up root key abbreviations from LogParser, and return the root key objects. I created a recursive function to move through subkeys to finally access the subkey referenced by a path.

private static readonly IDictionary<string, RegistryKey> RegistryLookup = new Dictionary<string, RegistryKey>
{
	{ "HKCR", Registry.ClassesRoot },
	{ "HKCU", Registry.CurrentUser },
	{ "HKLM", Registry.LocalMachine },
	{ "HKCC", Registry.CurrentConfig },
	{ "HKU", Registry.Users },
};

private static RegistryKey GetSubKey(IEnumerable<string> splitPath)
{
	RegistryKey rootKey = RegistryLookup[splitPath.First()];
	return GetSubKey(rootKey, splitPath.Skip(1));
}

private static RegistryKey GetSubKey(RegistryKey key, IEnumerable<string> splitPath)
{
	var theRest = splitPath.Skip(1);
	return theRest.Any()
		? GetSubKey(key.OpenSubKey(splitPath.First()), theRest)
		: key.OpenSubKey(splitPath.First(), writable: true);
}

So for HKCR\ChromeHTML\shell\open\command, it’ll split off HKCR and get the root key, call GetSubKey(Registry.ClassesRoot, { "ChromeHTML", "shell", "open", "command" }), which will get the ChromeHTML subkey within HKCR, and call GetSubKey(ChromeHTML, { "shell", "open", "command" }), and so on, until it calls GetSubKey(open, { "command" }), at which point recursion ends, and the “command” key is opened writable and returned.

From this point things were easy. The only other complication was that LogParser represents the name of a key’s default value as "(Default)", whereas Microsoft.Win32.Registry represents it as string.Empty.

The final code looks like this. Parameterisation, tidying, etc. are left as an exercise for the reader.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using LogQuery = Interop.MSUtil.LogQueryClass;
using RegistryInputFormat = Interop.MSUtil.COMRegistryInputContextClass;
using RegRecordSet = Interop.MSUtil.ILogRecordset;
using System.Diagnostics;
using Microsoft.Win32;
using System.Collections.Generic;

namespace FreeSpiritSoftware.ChromeRegistryCustomProfile
{
	public class Program
	{
		private const string DefaultChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" -- ""%1""";
		private const string ReplacementChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" --user-data-dir=""E:\settings\chrome-profiles""  -- ""%1""";
		private static readonly char[] PathSeparator = new[] { '\\' };
		private static readonly IDictionary<string, RegistryKey> RegistryLookup = new Dictionary<string, RegistryKey>
		{
			{ "HKCR", Registry.ClassesRoot },
			{ "HKCU", Registry.CurrentUser },
			{ "HKLM", Registry.LocalMachine },
			{ "HKCC", Registry.CurrentConfig },
			{ "HKU", Registry.Users },
		};

		public static void Main(string[] args)
		{
			RegRecordSet rs = null;

			Stopwatch stopWatch = new Stopwatch();
			stopWatch.Start();
			try
			{
				LogQuery qry = new LogQuery();
				RegistryInputFormat registryFormat = new RegistryInputFormat();
				string query = string.Format(@"SELECT Path, ValueName from \HKCR, \HKCU, \HKLM, \HKCC, \HKU where Value = '{0}'", DefaultChromeCall);
				rs = qry.Execute(query, registryFormat);
				for (; !rs.atEnd(); rs.moveNext())
				{
					string path = rs.getRecord().toNativeString(0);
					string valueName = rs.getRecord().toNativeString(1);
					if (valueName == "(Default)")
						valueName = string.Empty;
					Console.WriteLine(path);
					Console.WriteLine(valueName);
					string[] splitPath = path.Split(PathSeparator);
					RegistryKey key = GetSubKey(splitPath);
					Console.WriteLine(key.GetValue(valueName));
					key.SetValue(valueName, ReplacementChromeCall);
					Console.WriteLine(key.GetValue(valueName));
					Console.WriteLine("--");
				}
			}
			finally
			{
				// Guard against Execute throwing before rs is assigned
				if (rs != null)
					rs.close();
			}
			stopWatch.Stop();
			Console.WriteLine(stopWatch.Elapsed.TotalSeconds + " seconds");
			Console.ReadKey(false);
		}

		private static RegistryKey GetSubKey(IEnumerable<string> splitPath)
		{
			RegistryKey rootKey = RegistryLookup[splitPath.First()];
			return GetSubKey(rootKey, splitPath.Skip(1));
		}

		private static RegistryKey GetSubKey(RegistryKey key, IEnumerable<string> splitPath)
		{
			var theRest = splitPath.Skip(1);
			return theRest.Any()
				? GetSubKey(key.OpenSubKey(splitPath.First()), theRest)
				: key.OpenSubKey(splitPath.First(), writable: true);
		}
	}
}

(Note: I’m aware that there are aliases within the registry, so I was performing duplicate searches; I was happy to just use a brute-force search.)

Tidy IEqualityComparer with GenericEqualityComparer

Whilst looking through a codebase, I saw implementations of IEqualityComparer<>. The need to create an entire implementation of IEqualityComparer<> per use creates quite a bit of boilerplate for a small amount of signal, and I realised that a generic implementation of IEqualityComparer<> that takes its definition of equality in its constructor would be very simple to create.

public class GenericEqualityComparer<T> : IEqualityComparer<T>
{
	private readonly Func<T, T, bool> mEqualsFunc;
	private readonly Func<T, int> mGetHashCodeFunc;

	public GenericEqualityComparer(Func<T, T, bool> equalsFunc, 
		Func<T, int> getHashCodeFunc)
	{
		if (equalsFunc == null)
			throw new ArgumentNullException("equalsFunc");
		if (getHashCodeFunc == null)
			throw new ArgumentNullException("getHashCodeFunc");

		mEqualsFunc = equalsFunc;
		mGetHashCodeFunc = getHashCodeFunc;
	}

	public bool Equals(T x, T y)
	{
		return mEqualsFunc(x, y);
	}

	public int GetHashCode(T obj)
	{
		return mGetHashCodeFunc(obj);
	}
}

Creating and using an instance of this class is as simple as

public class TestClass
{
	private static readonly GenericEqualityComparer<Foo> mFooComparer = 
		new GenericEqualityComparer<Foo>(
			(x, y) => x.Id == y.Id,
			obj => obj.Id);

	public IEnumerable<Foo> GetDistinctFoos(IEnumerable<Foo> foos)
	{
		return foos.Distinct(mFooComparer);
	}
}

However, I was a bit embarrassed when I told my boss, Tony Beveridge, about this great use of generics and Funcs I had thought of, and he told me he had actually implemented exactly the same class some months ago.

It’s worth noting that EqualityComparer<T>.Default provides a default implementation using the Equals() and GetHashCode() methods of T.
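
A small illustration of that default behaviour (nothing here beyond the standard library; the class and method names are mine):

```csharp
using System;
using System.Collections.Generic;

public static class DefaultComparerDemo
{
	// EqualityComparer<T>.Default delegates to T's own
	// Equals() and GetHashCode() implementations.
	public static bool SameAsOwnEquals(string a, string b)
	{
		return EqualityComparer<string>.Default.Equals(a, b) == a.Equals(b);
	}

	public static void Main()
	{
		Console.WriteLine(SameAsOwnEquals("abc", "abc")); // True
		Console.WriteLine(SameAsOwnEquals("abc", "xyz")); // True
	}
}
```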

If you wanted to extend GenericEqualityComparer so you don’t have to provide an implementation for GetHashCode(), you can default mGetHashCodeFunc to always return zero. This will force the Equals function to always be called.
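
A sketch of that variation, assuming the GenericEqualityComparer shown earlier; the defaulted constructor parameter is the only change:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class GenericEqualityComparer<T> : IEqualityComparer<T>
{
	private readonly Func<T, T, bool> mEqualsFunc;
	private readonly Func<T, int> mGetHashCodeFunc;

	// When no hash function is supplied, default to a constant zero.
	// Every item then falls into the same hash bucket, so Equals is
	// always consulted - correct, but hash-based operations degrade
	// from O(1) to O(n) per lookup.
	public GenericEqualityComparer(Func<T, T, bool> equalsFunc,
		Func<T, int> getHashCodeFunc = null)
	{
		if (equalsFunc == null)
			throw new ArgumentNullException("equalsFunc");

		mEqualsFunc = equalsFunc;
		mGetHashCodeFunc = getHashCodeFunc ?? (obj => 0);
	}

	public bool Equals(T x, T y) { return mEqualsFunc(x, y); }
	public int GetHashCode(T obj) { return mGetHashCodeFunc(obj); }
}
```

The zero-hash default trades performance for convenience, so it is best kept to small collections.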

IEnumerable, ReadOnlyCollection, and the missing interface

I’ve been thinking on and off about the appropriate return signature for a method that returns an immutable list of objects, sparked off by reading Eric Lippert’s article, Arrays considered somewhat harmful, and my belief that the value of functional programming and the growth of parallelism mean that immutability is desirable most of the time.

However, once you decide to return an immutable collection, what type do you return?

IEnumerable is not really appropriate. The problem is that an IEnumerable may only be evaluable a single time, or may incur its cost on every evaluation you perform. This means that consumers of your method end up having to use ToList() or ToArray() to flatten the IEnumerable before consuming it, which is wasteful when your method is returning a bounded collection.
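
To make that cost concrete, here’s a contrived sketch (the names are mine): a yield-based sequence re-executes its body on every enumeration, while a flattened copy pays the cost once.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class DeferredCostDemo
{
	public static int Calls;

	// Each enumeration re-executes this body from the start
	private static IEnumerable<int> ExpensiveSequence()
	{
		for (int i = 0; i < 3; i++)
		{
			Calls++;          // stand-in for expensive work per element
			yield return i;
		}
	}

	public static int CallsForTwoSums()
	{
		Calls = 0;
		IEnumerable<int> seq = ExpensiveSequence();
		int unused = seq.Sum() + seq.Sum();   // two full enumerations
		return Calls;                         // 6 - the work ran twice
	}

	public static int CallsForTwoSumsAfterToList()
	{
		List<int> flattened = ExpensiveSequence().ToList();
		Calls = 0;
		int unused = flattened.Sum() + flattened.Sum();
		return Calls;                         // 0 - no re-evaluation
	}

	public static void Main()
	{
		Console.WriteLine(CallsForTwoSums());            // 6
		Console.WriteLine(CallsForTwoSumsAfterToList()); // 0
	}
}
```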

So the only choice you have with .NET is ReadOnlyCollection. Which is okay, but not ideal, I believe.

Firstly, this involves specifying a return signature as a concrete type. I prefer my method signatures to use interfaces when primitives are not involved, so they only specify behaviour. Using the concrete type also means that you can’t return an object that doesn’t use ReadOnlyCollection as a base class.

The second issue is that ReadOnlyCollection implements ICollection and IList. Whilst the implementation of methods such as Add are explicit, the fact ReadOnlyCollection implements interfaces with methods that are invalid for it creates a class of bugs only findable at run time. Have a look at the following code.

public ReadOnlyCollection<object> GetReadOnly()
{
	return new List<object>().AsReadOnly();
}

public void ShowIssue()
{
	ReadOnlyCollection<object> readOnly = GetReadOnly();
	// The next line prevented at compile time
	// readOnly.Add(new object());

	// However this code compiles, unfortunately
	IList<object> iList = GetReadOnly();
	iList.Add(new object()); // Fails with an exception at runtime
}

I think that it would have been sensible for .NET to have had an interface that inherits IEnumerable, that represents a readonly bounded collection, called something like IReadOnlyCollection. It would have a Count and allow read only access to the elements by index. ICollection and IList would both inherit this interface, and ReadOnlyCollection would be the implementation of it.
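
Such an interface might have looked something like this (a sketch of a hypothetical type, not anything in the BCL at the time of writing; .NET 4.5 later introduced IReadOnlyCollection<T> and IReadOnlyList<T> along broadly these lines):

```csharp
using System.Collections;
using System.Collections.Generic;

// Hypothetical: a bounded, read-only collection - countable and
// indexable, but exposing no mutating members for ICollection/IList
// to inherit alongside.
public interface IReadOnlyBoundedCollection<T> : IEnumerable<T>
{
	int Count { get; }
	T this[int index] { get; }
}

// Minimal adapter showing how easily a List<T> satisfies the contract
public class ReadOnlyBoundedList<T> : IReadOnlyBoundedCollection<T>
{
	private readonly List<T> mItems;

	public ReadOnlyBoundedList(IEnumerable<T> items)
	{
		mItems = new List<T>(items);
	}

	public int Count { get { return mItems.Count; } }
	public T this[int index] { get { return mItems[index]; } }
	public IEnumerator<T> GetEnumerator() { return mItems.GetEnumerator(); }
	IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```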

Update: Firstly, this article doesn’t really cover the differences between immutable and read only. ReadOnlyCollection doesn’t provide any methods to change the collection membership. However, ReadOnlyCollection is only a wrapper around the original List, and it does not guarantee that the underlying list is not changed.
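
A quick illustration of that caveat (the class and method names are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

public static class ReadOnlyWrapperDemo
{
	// Returns the wrapper's Count before and after mutating the
	// wrapped list - the wrapper is a live view, not a snapshot.
	public static int[] CountBeforeAndAfterMutation()
	{
		var underlying = new List<int> { 1, 2, 3 };
		ReadOnlyCollection<int> readOnly = underlying.AsReadOnly();

		int before = readOnly.Count;   // 3
		underlying.Add(4);             // mutate the wrapped list
		int after = readOnly.Count;    // 4 - the change shows through

		return new[] { before, after };
	}

	public static void Main()
	{
		int[] counts = CountBeforeAndAfterMutation();
		Console.WriteLine(counts[0] + " -> " + counts[1]); // 3 -> 4
	}
}
```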


Using Policy Injection and Attributes to preempt calls to non-functioning systems

It’s a waste of processor cycles and user time to make web service calls to systems that are not currently functioning. I was involved in building a solution that allows code that depends on non-functioning systems to be skipped entirely. Code simply needs to be attributed with the systems it uses. Then a policy injection handler will throw an exception without even calling that code if a system is known to be unavailable.

I document the parts of the system I built in this article. The moving parts involved in this solution are:

  • Agents – the classes that make web service calls to external systems.
  • FunctionalArea attributes – agent interfaces are marked up with these attributes to indicate dependencies on external systems.
  • FunctionalAreaUnavailableException – thrown to indicate an agent method call has been made involving an unavailable system.
  • SystemStatusAgent – keeps track of the availability of systems, by receiving information from the application when a FunctionalAreaUnavailableException is thrown, and through its own monitoring. I don’t document it in this blog post.
  • The InterceptorBehavior – policy injection causes this to run before each agent method call. It throws a FunctionalAreaUnavailableException when an attributed agent method has a web service exception, or instead of a method call involving a system the SystemStatusAgent considers unavailable.
  • Global exception handling – catches FunctionalAreaUnavailableExceptions, notifies the SystemStatusAgent of them, and shows the user an error indicating the system they are trying to work with is currently unavailable. I don’t document it in this blog post.

I was able to policy inject all Agents as they were constructed in our AgentFactory. I wished to use configuration-based injection, but (if my memory serves me right) with Unity 2.0 policy injection, you can’t use configuration to generate an injected object of a concrete type. I had to specify interception behaviours (the Unity synonym for policy handlers) in code to use the required interceptor, TransparentProxyInterceptor.

PolicyInjectionHelper makes injection simple for us.

public class PolicyInjectionHelper
{
	private readonly ReadOnlyCollection<Type> mInterceptorTypes;

	/// <summary>
	/// Construct a new interception helper that will provide objects with 
	/// policy injection, using interceptors of the given types, in the
	/// given order.
	/// </summary>
	/// <param name="interceptorTypes">
	/// Type of the interceptors - all must derive from IInterceptionBehavior
	/// </param>
	public PolicyInjectionHelper(IEnumerable<Type> interceptorTypes)
	{
		DBC.Assert(
			interceptorTypes.All(typeof(IInterceptionBehavior).IsAssignableFrom),
			"All interceptors must derive from IInterceptionBehavior");
		mInterceptorTypes = interceptorTypes.ToList().AsReadOnly();
	}

	/// <summary>
	/// Derive a policy inject object from the provided object
	/// </summary>
	public T PolicyInject<T>(T obj) where T : class
	{
		T result = obj;
		if (mInterceptorTypes.Any())
			result = Intercept.ThroughProxy<T>(
				result,
				new TransparentProxyInterceptor(),
				ConstructInterceptorInstances());
		return result;
	}

	private IEnumerable<IInterceptionBehavior> 
		ConstructInterceptorInstances()
	{
		return mInterceptorTypes.
			Select(type => Activator.CreateInstance(type) as IInterceptionBehavior);
	}
}

I utilised this in our AgentFactory as this simplified and abbreviated code shows. The PolicyInjectionHelper means that whilst the interception behaviours couldn’t be specified in config, they are easy to see in code (RequiresFunctionalAreaPolicyHandler below).

public class AgentFactory {
	private static readonly PolicyInjectionHelper mPolicyInjectionHelper = 
		mFunctionalAreaPolicyHandlerEnabled
		? new PolicyInjectionHelper(
			new List<Type> { typeof(RequiresFunctionalAreaPolicyHandler) })
		: new PolicyInjectionHelper(Enumerable.Empty<Type>());
				
	// Snipped singleton code

	// Real construction is more complex
	public T ConstructAgent<T>()
		where T : IBaseAgent
	{
		T agent = GetAgent<T>();
		return mPolicyInjectionHelper.PolicyInject(agent);
	}
}

A pair of FunctionalArea attributes allow agent interfaces and methods to be attributed to indicate dependencies.

/// <summary>
/// Decorate agent interfaces and agent interface methods with this
/// attribute to indicate that they require a particular functional area
/// to be available.
/// When decorated methods, or methods within a decorated interface, are
/// called, a FunctionalAreaUnavailableException is thrown if
/// the area is unavailable, or if a SoapException occurs during
/// the method.
[AttributeUsage(AttributeTargets.Interface | AttributeTargets.Method, AllowMultiple = false, Inherited = true)]
public class FunctionalAreaRequiredAttribute : Attribute 
{
	public FunctionalAreaRequiredAttribute(FunctionalArea pArea)
	{
		Area = pArea;
	}

	public FunctionalArea Area { get; private set; }

	public bool Equals(FunctionalAreaRequiredAttribute other)
	{
		return !ReferenceEquals(null, other) && 
			   (ReferenceEquals(this, other) || Equals(other.Area, Area));
	}

	public override bool Equals(object obj)
	{
		return !ReferenceEquals(null, obj) &&
			   (ReferenceEquals(this, obj) || Equals(obj as FunctionalAreaRequiredAttribute));
	}

	public override int GetHashCode()
	{
		return Area.GetHashCode();
	}
}

/// <summary>
/// Decorate methods with this attribute to indicate that the functional
/// area requirements of their containing class should be ignored
/// </summary>
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class IgnoreRequiredFunctionalAreaAttribute : Attribute { }

Here is an example of how the attributes are used

[FunctionalAreaRequired(FunctionalArea.SystemX)]
public interface IAnAgent : IBaseAgent
{
	void AMethod();

	[IgnoreRequiredFunctionalArea]
	void NotThisMethod();
}
	
public interface IAnotherAgent : IBaseAgent
{
	[FunctionalAreaRequired(FunctionalArea.SystemY)]
	void AMethod();

	void NotThisMethod();
}

The FunctionalAreaUnavailableException itself is very simple.

public class FunctionalAreaUnavailableException : ApplicationException
{
	private const string MessageTemplate = 
		"Functional area {0} unavailable";

	public FunctionalAreaUnavailableException(FunctionalArea area, 
		Exception ex = null)
		: base(string.Format(MessageTemplate, area), ex)
	{
		Area = area;
	}

	public FunctionalArea Area { get; private set; }
}

I introduced a BaseInterceptionBehavior as I quickly found all my IInterceptionBehavior implementations had commonalities.

public abstract class BaseInterceptionBehavior : IInterceptionBehavior
{
	#region IInterceptionBehavior members

	/// <summary>
	/// Returns a flag indicating if this behavior will actually do anything when invoked.
	/// </summary>
	public bool WillExecute { get { return true; } }

	/// <summary>
	/// Check if an intercepted method invocation should be processed, and process it if
	/// interception is required.
	/// </summary>
	public IMethodReturn Invoke(IMethodInvocation input, 
		GetNextInterceptionBehaviorDelegate getNext)
	{
		return IsInterceptionRequired(input)
			? ProcessInvocation(input, () => getNext()(input, getNext))
			: getNext()(input, getNext);
	}

	/// <summary>
	/// Returns the interfaces required by the behavior for the objects it intercepts. 
	/// </summary>
	public virtual IEnumerable<Type> GetRequiredInterfaces()
	{
		return Enumerable.Empty<Type>();
	}

	#endregion

	/// <summary>
	/// Should this method invocation be intercepted?
	/// </summary>
	protected virtual bool IsInterceptionRequired(IMethodInvocation input)
	{
		// !input.MethodBase.IsSpecialName means properties 
		// aren't intercepted, and also ToString et. al.
		return input.MethodBase.IsPublic && 
			!input.MethodBase.IsSpecialName;
	}

	/// <summary>
	/// Process the intercepted method invocation
	/// </summary>
	protected abstract IMethodReturn ProcessInvocation(IMethodInvocation input, 
		Func<IMethodReturn> processNext);
}

This is the interception behaviour that throws FunctionalAreaUnavailableException as previously described.

/// <summary>
/// Provides agents with functional area enhancements. 
/// If an agent or method is attributed as requiring a functional area:
/// - Call to methods for which the System Status agent considers a 
///   functional area unavailable will throw a 
///   FunctionalAreaUnavailableException exception without the method 
///   being invoked.
/// - SoapExceptions in methods will be wrapped in a 
///   FunctionalAreaUnavailableException for the attributed functional area
/// </summary>
/// <remarks>
/// An assumption is made only methods should have this behaviour - 
/// it uses the standard BaseInterceptionBehavior.IsInterceptionRequired 
/// criteria
/// </remarks>
public class RequiresFunctionalAreaPolicyHandler 
	: BaseInterceptionBehavior
{
	protected override IMethodReturn ProcessInvocation(
		IMethodInvocation input, Func<IMethodReturn> processNext)
	{
		MethodBase method = input.MethodBase;
		FunctionalArea? area = GetFunctionalAreaRequired(method);
		IMethodReturn result;
		if (area.HasValue && !SystemStatus.IsOperational(area.Value))
		{
			// Don't even call the method if
			// the FunctionalArea is unavailable
			result = input.CreateExceptionMethodReturn(
				new FunctionalAreaUnavailableException(area.Value));
		}
		else
		{
			result = processNext();
			Exception ex = result.Exception;
			Type areaExType = typeof(FunctionalAreaUnavailableException);
			if (area.HasValue &&
				ContainsException(ex, typeof(SoapException)) &&
				!ContainsException(ex, areaExType))
			{
				result = input.CreateExceptionMethodReturn(
					new FunctionalAreaUnavailableException(area.Value, 
						ex));
			}
		}

		return result;
	}

	private static FunctionalArea? GetFunctionalAreaRequired(
		MethodBase pMethod)
	{
		// Cast to FunctionalArea? so SingleOrDefault yields null, rather
		// than the zero enum value, when no attribute is present
		return IgnoreRequiredAttributes(pMethod)
			 ? (FunctionalArea?)null
			 : GetAttrs(pMethod).
				 Select(attribute => (FunctionalArea?)attribute.Area).
				 SingleOrDefault();
	}
	
	private static IEnumerable<FunctionalAreaRequiredAttribute> GetAttrs(
		MethodBase pMethod)
	{
		var methodAttrs = pMethod.
			GetCustomAttributes<FunctionalAreaRequiredAttribute>();
		var typeAttrs = pMethod.DeclaringType.
			GetCustomAttributes<FunctionalAreaRequiredAttribute>(
				true, true);
		return methodAttrs.Union(typeAttrs);
	}

	private static bool IgnoreRequiredAttributes(MethodBase pMethod)
	{
		return pMethod.
			GetCustomAttributes<IgnoreRequiredFunctionalAreaAttribute>().
			FirstOrDefault() != null;
	}

	private static bool ContainsException(
		Exception pException, Type pSearch)
	{
		return pException != null && 
			(pSearch.IsAssignableFrom(pException.GetType()) ||
			ContainsException(pException.InnerException, pSearch));
	}
}

And that’s that.

Policy injection has its costs. Apart from the additional complexity it introduces to your code, the use of a proxy class adds some overhead to every call on a policy-injected object. And you should be sure that a requirement really is a cross-cutting concern – do some reading about aspect-oriented programming for ideas about how policy injection should be used.

System availability is a cross-cutting concern for agents, which are defined as the set of classes that provide access to external systems in the application I’m dealing with. And the cost of accessing agents through a proxy is fractional compared to the cost of the actual web service calls involved.

This work resulted in a better user experience when systems are down, by providing fast failure rather than making users wait for timeouts known to be inevitable. The SystemStatusAgent also provides monitoring of the health of the systems the application depends on. I found Unity policy injection wasn’t entirely intuitive, but I’m pleased with the outcome.