Using T4 Templates to extend your Entity Framework classes

A set of entities I’m using with Entity Framework (I’m using EF POCO) share common properties. I didn’t want to use any form of inheritance within my object model to express this commonality, but I did want the entity classes to implement common interfaces. This is easy to do because entities are partial classes. Say, for example, all my entities have a string property UserName: I can define an interface to express this, and then provide a partial implementation of each class that implements the interface.

public interface IUserNameStamped
{
    string UserName { get; set; }
}
    
public partial class Entity1 : IUserNameStamped
{
}
    
public partial class Entity2 : IUserNameStamped
{
}

So the POCO T4 template generates the “main” class definition for each entity, with all its properties, and these partial classes extend each entity, adding no new properties or methods, just declaring that the class implements the IUserNameStamped interface.

I quickly realised that I could use T4 in a similar manner to the EF POCO T4 template to produce these partial classes automatically.

As I explained in my post about stamping entities with a UserName as they’re saved, all my entities have a UserName column. So all this template has to do is loop through all the entities in my object model and write an implementation for each.

The main T4 logic is

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
    CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>

Most of this is cribbed unashamedly from the EF POCO T4 template. First we initialise some variables, the most interesting being itemCollection, which gives access to the entity metadata. We then write a header indicating that the file is generated, start the namespace, write the actual declaration of the IUserNameStamped interface, write a partial class for each entity implementing the interface, and then end the namespace. The specifics of each method are:

<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, 
    extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

I think these three methods are fairly self-explanatory, other than the <# syntax that T4 uses to delimit code and text blocks.
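For reference, T4 has three kinds of block, all of which appear in this template: <# ... #> holds statements, <#= ... #> holds an expression whose value is written to the output, and <#+ ... #> holds class features such as helper methods. A contrived fragment (not part of the template above) illustrating all three:

```
<# foreach (var name in new[] { "Entity1", "Entity2" }) { #>
// This line is emitted once per entity; the expression block fills in <#=Describe(name)#>.
<# } #>
<#+
// Class feature blocks hold helpers callable from the blocks above.
string Describe(string name) { return "the name " + name; }
#>
```

Everything outside a block is literal text, copied straight to the generated file.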

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

WriteIUserNameStamped simply generates the interface definition.

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

WriteEntitiesWithInterface iterates through the entities in the model, in name order.

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

WriteEntityWithInterface writes an implementation of the IUserNameStamped interface for a single entity.

So you can see it’s fairly simple to use T4 to generate C# code similar to that at the top of this blog post.

I’ve blogged about how I extended this code to make a certain set of entities with common properties implement a common interface.

I’ve also blogged about further extending the template to handle multiple interfaces, with one file per interface.

This is the full code of the T4 template:

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>
<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Automatically username stamping entities as they’re saved

The application I am currently working on has a requirement to audit which application user last created or updated database records. All tables in the database are required to have an nvarchar column UserName.

I didn’t want this concern to leak into my application. After some investigation I discovered that ObjectContext has the SavingChanges event that would be ideal for my purposes.

So the creation of my ObjectContext becomes

var entities = new MyEntities();
entities.SavingChanges += SetUserNameEvent;

I originally thought that SetUserNameEvent would have to use reflection to obtain and set the UserName property. However, I found a way to use T4 to generate code so that all entities with the UserName property implement a common interface (IUserNameStamped). I’ve written a blog post about the T4 code.

So with all my entities implementing this common interface, SetUserNameEvent is then

/// <summary>
/// Sets the user name of all added and modified 
/// entities to the username provided by
/// the <see cref="UserNameProvider"/>. 
/// </summary>
private void SetUserNameEvent(object sender, EventArgs e)
{
    Contract.Requires<ArgumentException>(
        sender is ObjectContext, 
        "sender is not an instance of ObjectContext");
    var objectContext = (ObjectContext)sender;
    foreach (ObjectStateEntry entry in 
        objectContext.ObjectStateManager.GetObjectStateEntries(
            EntityState.Added | EntityState.Modified))
    {
        var stamped = entry.Entity as IUserNameStamped;
        Contract.Assert(stamped != null, 
            "Expected all entities implement IUserNameStamped");
        stamped.UserName = UserNameProvider.UserName;
    }
}

So here, we get all added and modified entries from the ObjectStateManager, and use them to obtain the entities and set their UserName. UserNameProvider is an abstraction, used because I have several applications utilising my object context, each with a different way of obtaining the current application user. Note that my code is using Code Contracts.
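The UserNameProvider abstraction itself isn’t shown in this post. As a minimal sketch of what it might look like (the interface and implementation names here are my illustrations, not the real code base):

```csharp
/// <summary>
/// Hypothetical sketch: abstracts how the current application user's
/// name is obtained, so each application can plug in its own strategy.
/// </summary>
public interface IUserNameProvider
{
    string UserName { get; }
}

/// <summary>
/// Example strategy for a desktop application: use the Windows
/// identity of the current user. A web application might instead
/// read the user from the current HTTP context.
/// </summary>
public class WindowsIdentityUserNameProvider : IUserNameProvider
{
    public string UserName
    {
        get { return System.Security.Principal.WindowsIdentity.GetCurrent().Name; }
    }
}
```

The object context (or unit of work) would then be handed an IUserNameProvider at construction time.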

One complication I’ve found is with child entities. Sometimes I’ve found I have to add the child entity both to its parent object and to the object context, but sometimes it’s enough to simply add the child entity to its parent object. That is:

var entities = ObjectContextFactory.GetObjectContext();
var childEntity = new ChildEntity();
entities.ParentEntities.First().ChildEntities.Add(childEntity);
// entities.ChildEntities.AddObject(childEntity);
entities.SaveChanges();
// Sometimes UserName will not get set without the commented line above, 
// resulting in a NOT NULL constraint violation

I’ve found no rhyme or reason as to why the addition to the ObjectContext is only sometimes required; I’d love hints as to why this is.

Note that I’m actually using the unit of work pattern in my application, with a unit of work factory rather than an object context factory, but that’s irrelevant to using the SavingChanges event in this fashion.

Deploying database contents using Capistrano

I run a pair of Ruby on Rails sites, http://janeallnatt.co.nz and http://postmoderncore.com. I use Capistrano to deploy updates to both of these sites.

These sites are somewhat unusual for Rails sites, in that I consider the database to be part of what I test locally and deploy. No data is captured from the production sites. Once I had built these sites and got Capistrano working, I realised that the database should be deployed as part of the Capistrano deploy.

I decided to simply dump the entire development database and restore it into my production database, tables and all. This subverts the way Rails migrations are normally used, but if I get my migrations wrong, the development database is what I test against, so that’s the state I want my production tables in.

I use mysqldump to get a dump of the data.

mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}

And to load this dump into the production database

mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}

The only other thing I had to work out was how to get the locally dumped file onto my remote server. It proved to be pretty easy:

	filename = "dump.#{Time.now.strftime '%Y%m%dT%H%M%S'}.sql"
	remote_path = "tmp/#{filename}"
	on_rollback { 
		delete remote_path
	}
	dumped_sql = `mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}`
	put dumped_sql, remote_path
	run "mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}"

I hooked this in after the deploy:finalize_update Capistrano event, making for the following additions to my deploy.rb file, including configuration.

#TODO: Should come from database.yml
set :local_database, "postmoderncore"
set :local_database_user, "railsuser"
set :local_database_password, "railsuser"
set :remote_database, "p182r822_pmc"
set :remote_database_user, "p182r822_user"
set :remote_database_password, "T673eoTc4SWb"

after "deploy:finalize_update", :update_database

desc "Upload the database to the server"
task :update_database, :roles => :db, :only => { :primary => true } do
	filename = "dump.#{Time.now.strftime '%Y%m%dT%H%M%S'}.sql"
	remote_path = "tmp/#{filename}"
	on_rollback { 
		delete remote_path
	}
	dumped_sql = `mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}`
	put dumped_sql, remote_path
	run "mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}"
end

There is an issue in that the new codebase points at the old database for a short period of time. For a high-visibility site, I’d extend this approach to use multiple databases, loading a different database for each version of the site. Upgrading the site would then atomically switch from the old codebase pointing at the old database to the new codebase pointing at the new database.
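A hedged sketch of what the versioned-database variant could look like: each deploy derives its own database name from a timestamp, loads the dump into that database, and writes the name into the release’s database.yml before the current symlink is switched. All names below are illustrative, not from my actual deploy.rb.

```ruby
# Hypothetical sketch: one database per release, so codebase and database
# switch over together when the `current` symlink is repointed.
release_stamp   = Time.now.strftime('%Y%m%d%H%M%S')
remote_database = "myapp_#{release_stamp}"

# The deploy task would create remote_database, load the dump into it,
# then upload this database.yml into the new release before symlinking.
database_yml = <<YAML
production:
  adapter: mysql2
  database: #{remote_database}
YAML
```

Old databases would need periodic cleanup, but rollback becomes trivial: the previous release still points at its own, untouched database.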

I’m sure that from this base, extensions for more complex scenarios would be possible. For example, if you wanted some user-generated content, you could restrict the database dump to only dump tables containing non-user-generated data.

Extensible processing classes using reflection

I recently wanted to build an extensible set of processing classes, where each class can process certain objects it is given.

I decided the simplest way to do this was to create a processor interface. The set of processing classes is then all classes that implement this interface. I then use reflection to find all the processors: that is, all implementations of the processor interface.

Assuming all processing is done on the object itself, the processor interface looks like this

public interface IProcessor
{
	bool CanProcess(ITarget target);

	void Process(ITarget target);
}

One of the processor implementations could look something like this

public class UpdateTotalProcessor : IProcessor
{
	public bool CanProcess(ITarget target)
	{
		return target.Items.Any();
	}

	public void Process(ITarget target)
	{
		target.Total = target.Items.Sum(item => item.Value);
	}
}

To utilise the processors, you’d end up with code similar to the following

private static readonly IEnumerable<IProcessor> Processors = InstancesOfMatchingTypes<IProcessor>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public void Process(ITarget target)
{
	foreach(IProcessor processor in Processors.Where(p => p.CanProcess(target)))
		processor.Process(target);
}
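TypeInstantiator itself isn’t shown in this post. A minimal reflection-based sketch of what it might do (my guess at the implementation, assuming every matching type has a parameterless constructor):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

/// <summary>
/// Hypothetical sketch of the TypeInstantiator helper: finds every
/// concrete type in an assembly that implements T, and creates one
/// instance of each via its parameterless constructor.
/// </summary>
public class TypeInstantiator
{
    public static readonly TypeInstantiator Instance = new TypeInstantiator();

    public IEnumerable<T> InstancesOfMatchingTypes<T>(Assembly assembly)
    {
        return assembly.GetTypes()
            .Where(type => typeof(T).IsAssignableFrom(type)
                           && !type.IsAbstract
                           && !type.IsInterface)
            .Select(type => (T)Activator.CreateInstance(type))
            .ToList();
    }
}
```

The ToList() call means reflection runs once, when the static Processors field is initialised, rather than on every Process call.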

Note that with this implementation, multiple processors can potentially match, and therefore process, a target. Also note that there is no ordering; adding ordering would be an easy extension.
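If ordering did matter, one way to add it (a sketch of mine, not the original code) would be an Order property on the interface, with the dispatch loop sorting on it:

```csharp
public interface IProcessor
{
    // Lower values run first; the relative order of processors with
    // equal Order values remains undefined.
    int Order { get; }

    bool CanProcess(ITarget target);

    void Process(ITarget target);
}

// The dispatch loop then becomes:
// foreach (IProcessor processor in Processors
//     .Where(p => p.CanProcess(target))
//     .OrderBy(p => p.Order))
// {
//     processor.Process(target);
// }
```

Each implementation then declares its own position, and new processors slot in without touching the dispatch code.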

With this scheme, adding new processors is dead easy. Simply add a new implementation of IProcessor to the assembly, and it will be automatically picked up and used.

Providing classes that derive one type from another is also simple using this scheme.

public interface IDeriver
{
	bool CanDerive(ITarget target);

	IDerived Derive(ITarget target);
}

public class SampleDeriver : IDeriver
{
	public bool CanDerive(ITarget target)
	{
		return true;
	}

	public IDerived Derive(ITarget target)
	{
		return new Derived(target);
	}
}

Obviously, applying more than one deriver makes no sense in this context, so only the first matching deriver is applied:

private static readonly IEnumerable<IDeriver> Derivers = InstancesOfMatchingTypes<IDeriver>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public IDerived Derive(ITarget target)
{
	IDeriver deriver = Derivers.FirstOrDefault(d => d.CanDerive(target));
	return deriver == null
		? null
		: deriver.Derive(target);
}

Once again, ordering is undefined, so if your derivers are defined in such a way that more than one matches a target, you will end up with unpredictable results.

Text of my Summer of Tech Android talk

Here is the text of my Talk about Android at the Summer of Tech Mobile Developers Panel tonight.

However, I skipped the section telling businesses to think twice before deciding they need a mobile application, as there didn’t appear to be any business people in the audience. I was interested to find that during the panel discussion it was suggested to build mobile websites instead of mobile applications, and in discussion with individuals afterwards the theme came up again. It’s clear that a number of the developers in attendance agreed with me that mobile applications are reached for too readily, ahead of options such as the mobile web or SMS.

Representing Android on SoT Mobile Panel, 8th Feb in Wellington

I’m going to be talking about Android at the Summer of Tech Mobile Developers Panel on the 8th of February. This is going to be my first public speaking engagement, and I’m really looking forward to it. Come along and support me; the discussion should be interesting and informative, and there’s pizza and beer! You need to RSVP on Lil’ Regie. All the Summer of Tech events I’ve attended have been great. Hope to see you there.

I’ll be busy preparing over the next week, so a new blog post will be forthcoming sometime after the panel.