Scripting WordPress upgrades

WordPress has been nagging at me to upgrade. I’m getting more comfortable with shell scripting, and thought that I should script this upgrade process.

I put the script together basically mirroring the manual upgrade instructions given on the WordPress website. I’ll show you the full script, and then walk you through it.

#!/bin/bash
set -e

# Because wget doesn't handle ~ in the prefix
upgradedir=$(readlink -f ~/var/wp-upgrade)
upgradefile="$upgradedir/latest.tar.gz"
rm -f "$upgradefile"
wget --directory-prefix="$upgradedir" http://wordpress.org/latest.tar.gz
gunzip -t "$upgradefile"

cp ~/usr/wp-maintenance/.maintenance ~/blog-home/
echo "* Site is now down for maintenance *"

echo "* Backing up files *"
tar czf ~/var/backup/blog.$(date +%Y%m%d-%H%M%S).tgz ~/blog-home

echo "* Backing up DB *"
mysqldump --defaults-extra-file=~/usr/mysql-blog.cnf blog_database | \
  gzip -c > ~/var/backup/blog.$(date +%Y%m%d-%H%M%S).sql.gz

echo "* Disabling plugins. Unless you roll back, you will need to enable the plugins manually at the end of this process *"
mysql --defaults-extra-file=~/usr/mysql-blog.cnf -e \
  "UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';" \
  blog_database

echo "* The exciting bit - upgrading the installation *"
rm -rf ~/blog-home/wp-includes
rm -rf ~/blog-home/wp-admin
tar -xzf "$upgradefile" -C ~/blog-home/ --strip=1 wordpress/

read -p "* Now re-enable the plugins, and test the site, then press ENTER to exit maintenance mode *"
rm ~/blog-home/.maintenance

echo "* Upgrade complete *"

The walkthrough:

set -e

Stop if any command returns an error. If anything unexpected happens, I want the script to stop, so I can manually assess and resolve the problem.
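A tiny illustration (a hypothetical one-liner, not part of the upgrade script) of what set -e buys:

```shell
# With set -e, the inner script aborts at `false`, so "reached" is never printed.
output=$(bash -c 'set -e; false; echo reached' 2>/dev/null || true)
if [ -z "$output" ]; then
  echo "the script stopped at the failing command"
fi
```

Without the set -e, the inner script would carry on and print "reached" despite the failure.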

# Because wget doesn't handle ~ in the prefix
upgradedir=$(readlink -f ~/var/wp-upgrade)
upgradefile="$upgradedir/latest.tar.gz"
rm -f "$upgradefile"
wget --directory-prefix="$upgradedir" http://wordpress.org/latest.tar.gz
gunzip -t "$upgradefile"

Remove any existing copy of the WordPress archive, download the latest version, and test that it’s a valid gzip. I assume that if it’s a valid gzip, the TAR file inside will also be okay. readlink is used because when I tried using wget --directory-prefix=~/var/wp-upgrade directly, wget created a directory literally named ~ inside the directory it was run from.
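The value of the gunzip -t check can be seen with a throwaway demonstration (hypothetical files under mktemp, not part of the script):

```shell
tmp=$(mktemp -d)

# A valid gzip passes the integrity test...
echo "content" | gzip > "$tmp/good.gz"
gunzip -t "$tmp/good.gz" && echo "good archive passes"

# ...while a truncated or corrupt download fails it, which (thanks to set -e)
# stops the upgrade before anything destructive happens.
echo "not really gzip" > "$tmp/bad.gz"
gunzip -t "$tmp/bad.gz" 2>/dev/null || echo "corrupt archive detected"

rm -rf "$tmp"
```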

cp ~/usr/wp-maintenance/.maintenance ~/blog-home/
echo "* Site is now down for maintenance *"

.maintenance is a PHP file that, given appropriate code, causes a down-for-maintenance page to be shown when non-admin users visit the site. See the blog post series WordPress Maintenance Mode Without a Plugin for details. ~/usr/wp-maintenance/.maintenance contains the .maintenance code from the third part of the series. I also use a custom down-for-maintenance page, as described in the second part of the series.

echo "* Backing up files *"
tar czf ~/var/backup/blog.$(date +%Y%m%d-%H%M%S).tgz ~/blog-home

echo "* Backing up DB *"
mysqldump --defaults-extra-file=~/usr/mysql-blog.cnf blog_database | \
  gzip -c > ~/var/backup/blog.$(date +%Y%m%d-%H%M%S).sql.gz

Create backups of the WordPress directory (~/blog-home) and the WordPress database. $(date +%Y%m%d-%H%M%S) results in the backup files having datetime stamped names.
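For example (the exact value depends on the clock, so the name below is illustrative), the stamp is eight date digits, a dash, and six time digits:

```shell
# Produces names like blog.20120315-142233.tgz, which sort chronologically.
stamp=$(date +%Y%m%d-%H%M%S)
echo "blog.$stamp.tgz"
```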

I’m using a MySQL option file to avoid having to type the database password and also avoid including it in the mysqldump command line. ~/usr/mysql-blog.cnf looks like this.

[client]
host=mysql_host
user=wordpress_database_user
password=wordpress_database_password

[mysqldump]
add-drop-table

Nothing too exciting – the [client] section applies to both MySQL commands, and the add-drop-table option ensures that DROP TABLE statements are included in the MySQL backup.

echo "* Disabling plugins. Unless you roll back, you will need to enable the plugins manually at the end of this process *"
mysql --defaults-extra-file=~/usr/mysql-blog.cnf -e \
  "UPDATE wp_options SET option_value = 'a:0:{}' WHERE option_name = 'active_plugins';" \
  blog_database

This is taking the risk that WordPress could change the way plugin enabled state is stored in the database ('a:0:{}' is a PHP-serialized empty array), but it’s convenient.

echo "* The exciting bit - upgrading the installation *"
rm -rf ~/blog-home/wp-includes
rm -rf ~/blog-home/wp-admin
tar -xzf "$upgradefile" -C ~/blog-home/ --strip=1 wordpress/

Remove the wp-includes and wp-admin directories entirely, and then extract the new version of WordPress over the old version. The one tricky piece here was that the files in the TAR all sit inside a top-level directory wordpress/, meaning that my first attempt resulted in the creation of ~/blog-home/wordpress. A bit of research found me the --strip parameter, which as used strips wordpress/ from the paths in the TAR.
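Here’s a self-contained demonstration of the --strip behaviour (throwaway archive under mktemp, hypothetical file names):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/wordpress"
echo "<?php" > "$tmp/wordpress/index.php"
tar -czf "$tmp/latest.tar.gz" -C "$tmp" wordpress/

mkdir "$tmp/blog-home"
# Without --strip=1 this would create blog-home/wordpress/index.php;
# with it, index.php lands directly in blog-home.
tar -xzf "$tmp/latest.tar.gz" -C "$tmp/blog-home" --strip=1 wordpress/
ls "$tmp/blog-home"

rm -rf "$tmp"
```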

read -p "* Now re-enable the plugins, and test the site, then press ENTER to exit maintenance mode *"
rm ~/blog-home/.maintenance

echo "* Upgrade complete *"

Once the operator has confirmed the site is ready for prime time, delete the .maintenance file and make the site available to users again. Job done.

Mocking ASP.NET providers

When playing around with ASP.NET membership, I found myself in a situation where I wanted to mock the ASP.NET Providers. This is something the design of providers makes non-trivial. Mark Seemann summarises: “Since a Provider creates instances of interfaces based on XML configuration and Activator.CreateInstance, there’s no way to inject a dynamic mock.” See Provider is not a pattern.

I had a look around to see what others were doing. I found a post, Mocking membership provider, which proposes adding mocked providers to the provider collection dynamically. It seems like an elegant solution, but I couldn’t get it to work for me after a little playing.

In the end, I came up with a solution that is not the most elegant, but is very easy to use and to understand.

I create an implementation of each provider I want to mock. The provider implementation contains a mock of that provider type. Each method and property of my provider implementation forwards to the mock. The mock is accessible via a static property of the provider implementation, so that test code can interact with it.

An example implementation:

public class TestRoleProvider : RoleProvider
{
	public static void ResetMock()
	{
		Mock = new Mock<RoleProvider>();
	}

	public static Mock<RoleProvider> Mock { get; private set; }

	#region RoleProvider implementation

	public override void AddUsersToRoles(string[] usernames, string[] roleNames)
	{
		Mock.Object.AddUsersToRoles(usernames, roleNames);
	}

	public override string ApplicationName
	{
		get { throw new NotImplementedException(); }
		set { throw new NotImplementedException(); }
	}

	// Other implementations omitted

	#endregion
}

Note the static members controlling the mock at the top. Note also that I’ve simply implemented all methods and properties of RoleProvider as not implemented using Visual Studio tooling, and then updated the implementations to forward calls to my mock as I need.

Wiring up the provider framework to use this implementation is easy. Just add the following config to the app.config of your unit test project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
	<system.web>
		<roleManager defaultProvider="TestRoleProvider" enabled="true">
			<providers>
				<add name="TestRoleProvider"
					 type="TestProjectAssemblyName.TestRoleProvider, TestProjectAssemblyName" />
			</providers>
		</roleManager>
	</system.web>
</configuration>

Test code utilising this mock looks like the following:

[TestInitialize]
public void TestInitialize() 
{
	TestRoleProvider.ResetMock();
}
		
[TestMethod]
public void ReturnsNothingWhenNoUsersExist()
{
	var roles = new string[] { };
	TestRoleProvider.Mock.
		Setup(m => m.GetAllRoles()).
		Returns(roles);

	var result = new GetAllRolesQuery().Execute();

	Assert.IsTrue(!result.Any());
}

Mass-assignment – don’t control access at the model level

I followed with some interest the debate around the “mass assignment vulnerability” recently reported in Rails. I dislike the way the whole debate is couched in terms that assume access control at the model level. When you state that an attribute cannot be mass-assigned, you are stating that to assign this attribute from the controller, you can’t use the normal update_attributes method, and must update it explicitly.

I don’t like the laziness this implies. I believe we should embrace practices that encourage explicit thought about the attributes a controller method can update – encouraging developers to build in an intentional fashion, and consider exactly how each method they build behaves. The role based security in Rails 3.1 is alright. But I still think the definition of which attributes a controller method updates should be made explicit. It isn’t just a security concern. The attributes a controller method can take as input to update a model should be part of the contract of that method, in my opinion. It aids understanding.

I far prefer the ideas of the View Model and Model In Model Out. The input a controller can receive, and the output that is rendered to the view, are both explicitly modelled (and therefore documented), and independent of the underlying model. A tool such as AutoMapper can reduce the noise that mapping View Model to Model can introduce.
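To make this concrete, here is a hypothetical C# sketch (the class and property names are invented for illustration): the view model lists exactly the attributes the action may update, so a sensitive flag on the model can never be mass-assigned.

```csharp
// Hypothetical view model: the contract of the edit action is explicit.
public class EditProfileViewModel
{
    public string DisplayName { get; set; }
    public string Bio { get; set; }
    // No IsAdmin property here, so it can never arrive as user input.
}

public class User
{
    public string DisplayName { get; set; }
    public string Bio { get; set; }
    public bool IsAdmin { get; set; }
}

public static class EditProfileMapper
{
    // Explicit mapping; a tool like AutoMapper could replace this by convention.
    public static void Apply(EditProfileViewModel input, User user)
    {
        user.DisplayName = input.DisplayName;
        user.Bio = input.Bio;
    }
}
```

However the request is bound, only DisplayName and Bio can ever reach the User instance.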

I don’t know the Rails community well enough to know what people are doing in this space. I saw a stackoverflow post that refers to a Presenter pattern, but can’t say I’ve looked too closely yet.

WordPress.com don’t get the Principle of least privilege

I just went to post a comment on a WordPress.com hosted blog. I was going to use my Twitter account to log in, and in order to authenticate with my Twitter I was asked to allow WordPress to:

  • Read Tweets from your timeline.
  • See who you follow, and follow new people.
  • Update your profile.
  • Post Tweets for you.

This appears to be because the same authentication is used when you are a blog author and wish to allow WordPress to “Tweet your WordPress.com posts”.

Come on WordPress. Surely you’ve got the resources and savvy to provide different levels of authentication for bloggers and for commenters? For a commenter, identity is the only issue, and the authentication process should ask for no rights whatsoever beyond being able to read my email address and name. Cf. the Principle of least privilege. Lame.

Generating multiple files from one T4 template

In the previous posts about T4 I firstly drove T4 generation from EF entity definitions, then used this to make EF POCO classes with certain properties implement an interface. Please read these posts before reading this one – in particular the code in this post refers to code from the previous one.

In this post, I’ll extend what I’ve already built to handle multiple interfaces, and to generate a single file per interface.

For this example, I’m going to use two interfaces.

public interface IUserNameStamped
{
    string UserName { get; set; }
}

public interface ILookup
{
    string ContractorCode { get; set; }

    string Description { get; set; }
}

I want my EF POCOs to implement IUserNameStamped if they have a UserName property, and ILookup if they have a ContractorCode and Description property. I want the IUserNameStamped code in a file IUserNameStamped.cs, and the ILookup code in a file ILookup.cs.

By default, a T4 template will generate a single file with the same name as the template, and the extension defined by the <#@ output #> directive. The EntityFrameworkTemplateFileManager, used by EF to generate a file per entity, is the secret to generating multiple files from a single template.

The other change needed to the T4 code we already have is to break it into reusable methods that can be shared for each entity.

The method I’ve defined to generate a file for a given interface is CreateInterfaceFile, shown here with its supporting methods.

<#+
void CreateInterfaceFile(EntityFrameworkTemplateFileManager fileManager,  
	CodeGenerationTools code,
	EdmItemCollection itemCollection,
	string namespaceName, 
	Action interfaceWriter, 
	string interfaceName, 
	params string[] requiredProperties)
{
    fileManager.StartNewFile(interfaceName + ".cs");
	BeginNamespace(namespaceName, code);
	interfaceWriter();
	var entities = GetEntitiesWithPropertyOrRelationship(itemCollection,
		requiredProperties);
	foreach (EntityType entity in entities.OrderBy(e => e.Name))
	{
		WriteInterfaceImplementation(entity.Name, interfaceName);
	}
	EndNamespace(namespaceName);
}
#>
<#+
void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
	EdmItemCollection itemCollection, 
	params string[] requiredProperties)
{
	return itemCollection.GetItems<EntityType>().Where(entity => 
		EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship
	(EntityType entity, params string[] requiredProperties)
{
	return requiredProperties.All(
		requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
		|| entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteInterfaceImplementation(string entityName, string interfaceName)
{
#>

public partial class <#=entityName#> : <#=interfaceName#>
{
}
<#+
}

The parameters of CreateInterfaceFile:

  • The first three parameters are T4 and EF classes instantiated at the top of the template and passed in.
  • namespaceName is also provided by T4 – the namespace the interface and classes will belong to.
  • interfaceWriter is an action that writes out the definition of the interface itself.
  • interfaceName is the name of the interface.
  • requiredProperties is an array of all the properties a class must have to be considered to implement the interface.

The logic is very simple:

  • The EntityFrameworkTemplateFileManager is used to start a file for the interface – all output now goes to this file until the next time StartNewFile is called.
  • The namespace is written.
  • The declaration of the interface is written.
  • Entities matching this interface are found using GetEntitiesWithPropertyOrRelationship (as explained in the previous blog post).
  • A partial class for each matching entity is written, with no content, simply stating that the class implements the interface in question.
  • The namespace is closed.

That’s about all there is to it. Once again, an extension to this code to match entity properties by type as well as name is left as an exercise to the reader.
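For what it’s worth, a sketch of that type-aware variant might look like the following (an untested sketch, assuming the EdmProperty.TypeUsage metadata API; the method name is my own):

```csharp
// Untested sketch: match entity properties on EDM type name as well as
// property name, e.g. EntityHasTypedProperty(entity, "UserName", "String").
bool EntityHasTypedProperty(EntityType entity, string propertyName, string edmTypeName)
{
    return entity.Properties.Any(property =>
        property.Name == propertyName
        && property.TypeUsage.EdmType.Name == edmTypeName);
}
```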

Here is full source code:

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#><#@
 output extension=".cs"#><#

string inputFile = @"OticrsEntities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
	CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

EntityFrameworkTemplateFileManager fileManager = 
	EntityFrameworkTemplateFileManager.Create(this);
WriteHeader(fileManager);

#>
// Default file generated by T4. Generation cannot be prevented. Please ignore.
<#

CreateInterfaceFile(fileManager, 
	code,
	itemCollection, 
	namespaceName,
	WriteILookupInterface,
	"ILookup",
	"ContractorCode", "Description");

CreateInterfaceFile(fileManager, 
	code,
	itemCollection, 
	namespaceName,
	WriteIUserNameStampedInterface,
	"IUserNameStamped",
	"UserName");
	
fileManager.Process(true);

#>
<#+
void CreateInterfaceFile(EntityFrameworkTemplateFileManager fileManager,  
	CodeGenerationTools code,
	EdmItemCollection itemCollection,
	string namespaceName, 
	Action interfaceWriter, 
	string interfaceName, 
	params string[] requiredProperties)
{
    fileManager.StartNewFile(interfaceName + ".cs");
	BeginNamespace(namespaceName, code);
	interfaceWriter();
	var entities = GetEntitiesWithPropertyOrRelationship(itemCollection, 
		requiredProperties);
	foreach (EntityType entity in entities.OrderBy(e => e.Name))
	{
		WriteInterfaceImplementation(entity.Name, interfaceName);
	}
	EndNamespace(namespaceName);
}
#>
<#+
void WriteHeader(EntityFrameworkTemplateFileManager fileManager, 
	params string[] extraUsings)
{
    fileManager.StartHeader();
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System.Collections.Generic;

<#=String.Join(String.Empty, extraUsings.
		Select(u => "using " + u + ";" + Environment.NewLine).
		ToArray())#>
<#+
    fileManager.EndBlock();
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
	EdmItemCollection itemCollection, 
	params string[] requiredProperties)
{
	return itemCollection.GetItems<EntityType>().Where(entity => 
		EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(EntityType entity, 
	params string[] requiredProperties)
{
	return requiredProperties.All(requiredProperty => 
		entity.Properties.Any(property => property.Name == requiredProperty)
		|| entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteInterfaceImplementation(string entityName, string interfaceName)
{
#>

public partial class <#=entityName#> : <#=interfaceName#>
{
}
<#+
}

void WriteILookupInterface()
{
#>
/// <summary>
/// A lookup entity, that can be looked up by a ContractorCode
/// </summary>
public interface ILookup
{
    string ContractorCode { get; set; }
	
	string Description { get; set; }
}
<#+
}

void WriteIUserNameStampedInterface()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}
<#+
}
#>

Duck typing Entity Framework classes using T4 Templates

Duck typing is an interesting concept, and alien to C# generally. But using the techniques of my previous post about T4 and Entity Framework, it is possible to have your entities implement interfaces if they have the required properties, resulting in behaviour similar to duck typing. Please read the previous blog post before reading this one.

The previous blog post gives us code to implement interfaces for each entity in an object model. In order to provide “duck typing”, we will extend this to only implement the interface for an entity if that entity has the properties of the interface.

Fortunately System.Data.Metadata.Edm.EntityType gives us the ability to inspect the properties of an entity. For my purposes, I only check for properties by name, as I control my database and would never have the same column name with two different data types. Extension of this code to check property types as well as names is left as an exercise for the reader.

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
    EdmItemCollection itemCollection, params string[] requiredProperties)
{
    return itemCollection.GetItems<EntityType>().
        Where(entity => EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(
    EntityType entity, params string[] requiredProperties)
{
    return requiredProperties.All(
        requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
        || entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

Pretty simple stuff. EntityHasPropertyOrRelationship checks both the Properties (properties relating simply to database columns) and NavigationProperties (properties relating to foreign key relationships) for properties with the required names. If our entity has all the required properties, it’s a match.

GetEntitiesWithPropertyOrRelationship uses EntityHasPropertyOrRelationship to retrieve all the entities that have the required properties from our itemCollection.

I’ve blogged about further extending the template to handle multiple interfaces, with one file per interface.

Here’s the full code of the example from the last blog post, updated so entities only implement IUserNameStamped if they actually have a column called UserName.

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
    CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>
<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, extraUsings.
    Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
/// <remarks>
/// All OTICRS entities should have a username. If any entity fails to implement
/// this interface, it means the table needs the UserName column added to it.
/// </remarks>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

void WriteEntitiesWithInterface(
    EdmItemCollection itemCollection)
{
    foreach (EntityType entity in 
        GetEntitiesWithPropertyOrRelationship(itemCollection, "UserName").
        OrderBy(e => e.Name))
    {
        WriteEntityWithInterface(entity.Name);
    }
}

IEnumerable<EntityType> GetEntitiesWithPropertyOrRelationship(
    EdmItemCollection itemCollection, params string[] requiredProperties)
{
    return itemCollection.GetItems<EntityType>().Where(
        entity => EntityHasPropertyOrRelationship(entity, requiredProperties));
}

bool EntityHasPropertyOrRelationship(
    EntityType entity, params string[] requiredProperties)
{
    return requiredProperties.All(
        requiredProperty => entity.Properties.Any(property => property.Name == requiredProperty)
        || entity.NavigationProperties.Any(property => property.Name == requiredProperty));
}

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Using T4 Templates to extend your Entity Framework classes

A set of entities I’m using with Entity Framework (I’m using EF POCO) have common properties implying commonality between the entities. I didn’t want to use any form of inheritance within my object model to express this commonality, but I did wish to have the entity classes implement common interfaces. It’s easy to do this because entities are partial classes. Say, for example, all my entities have a string property UserName; I can define an interface to express this, and then have a partial implementation of each class that implements the interface.

public interface IUserNameStamped
{
    string UserName { get; set; }
}
    
public partial class Entity1 : IUserNameStamped
{
}
    
public partial class Entity2 : IUserNameStamped
{
}

So the POCO T4 template generates the “main” class definition for each entity, with all its properties, and then these partial classes extend each class, not adding any new properties or methods, just adding the fact that each class implements the IUserNameStamped interface.
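The mechanism relies only on the C# partial-class feature. A minimal standalone sketch (the names are illustrative, not taken from the generated code):

```csharp
public interface IUserNameStamped
{
    string UserName { get; set; }
}

// The "main" half, standing in for what the EF POCO template generates.
public partial class Entity1
{
    public string UserName { get; set; }
}

// The generated extension: no members, just the interface declaration.
// The compiler merges both halves into a single Entity1 class,
// so any Entity1 instance can be treated as an IUserNameStamped.
public partial class Entity1 : IUserNameStamped
{
}
```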

I quickly realised that I could use T4 in a similar manner to the EF POCO T4 template, in order to produce these partial classes automatically.

As I explained in my post about UserName stamping entities as they’re saved, all my entities have a UserName column. So all this template has to do is loop through all the entities in my object model, and write an implementation for each.

The main T4 logic is

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"OticrsEntities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).
    CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>

Most of this is cribbed unashamedly from the EF POCO T4 template. Firstly we initialise some variables, the most interesting being itemCollection, which is what allows access to the entity metadata. We then write a header indicating the file is a generated file, start the namespace, write the actual declaration of the IUserNameStamped interface, write a partial class for each entity implementing the interface, and then end the namespace. The specifics of each method are:

<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, 
    extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

I think these three methods are fairly self-explanatory, other than the <# syntax that T4 uses to indicate code and text blocks.

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

Simply generates the interface definition.

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

Iterates through the entities.

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Writes an implementation of the IUserNameStamped interface for each entity.

So you can see it’s fairly simple to use T4 to generate C# code similar to that at the top of this blog post.

I’ve blogged about how I extended this code to make a certain set of entities with common properties implement a common interface.

I’ve also blogged about further extending the template to handle multiple interfaces, with one file per interface.

This is the full code of the T4 template:

<#@ template language="C#" debug="false" hostspecific="true"#>
<#@ include file="EF.Utility.CS.ttinclude"#>
<#@ output extension=".cs"#>
<#
string inputFile = @"Entities.edmx";
EdmItemCollection itemCollection = new MetadataLoader(this).CreateEdmItemCollection(inputFile);

CodeGenerationTools code = new CodeGenerationTools(this);
string namespaceName = code.VsNamespaceSuggestion();

WriteHeader();
BeginNamespace(namespaceName, code);
WriteIUserNameStamped();
WriteEntitiesWithInterface(itemCollection);
EndNamespace(namespaceName);
#>
<#+
void WriteHeader(params string[] extraUsings)
{
#>
//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

<#=String.Join(String.Empty, extraUsings.Select(u => "using " + u + ";" + Environment.NewLine).ToArray())#>
<#+
}

void BeginNamespace(string namespaceName, CodeGenerationTools code)
{
    CodeRegion region = new CodeRegion(this);
    if (!String.IsNullOrEmpty(namespaceName))
    {
#>
namespace <#=code.EscapeNamespace(namespaceName)#>
{
<#+
        PushIndent(CodeRegion.GetIndent(1));
    }
}


void EndNamespace(string namespaceName)
{
    if (!String.IsNullOrEmpty(namespaceName))
    {
        PopIndent();
#>
}
<#+
    }
}

void WriteIUserNameStamped()
{
#>
/// <summary>
/// An entity that is stamped with the Username that created it
/// </summary>
public interface IUserNameStamped
{
    string UserName { get; set; }
}

<#+
}

void WriteEntitiesWithInterface(EdmItemCollection itemCollection)
{
	foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
	{
		WriteEntityWithInterface(entity.Name);
	}
}

void WriteEntityWithInterface(string entityName)
{
#>
public partial class <#=entityName#> : IUserNameStamped
{
}

<#+
}

#>

Automatically username stamping entities as they’re saved

The application I am currently working on has a requirement to audit which application user last created or updated database records. All tables in the database are required to have an nvarchar column UserName.

I didn’t want this concern to leak into my application. After some investigation I discovered that ObjectContext has the SavingChanges event that would be ideal for my purposes.

So the creation of my ObjectContext becomes

var entities = new MyEntities();
entities.SavingChanges += SetUserNameEvent;

I originally thought that SetUserNameEvent would have to use reflection to obtain and set the UserName property. However I found a way to use T4 to generate code resulting in all entities with the UserName property implementing a common interface (IUserNameStamped). I’ve written a blog post talking about the T4 code.

So with all my entities implementing this common interface, SetUserNameEvent is then

/// <summary>
/// Sets the user name of all added and modified 
/// entities to the username provided by
/// the <see cref="UserNameProvider"/>. 
/// </summary>
private void SetUserNameEvent(object sender, EventArgs e)
{
    Contract.Requires<ArgumentException>(
        sender is ObjectContext, 
        "sender is not an instance of ObjectContext");
    var objectContext = (ObjectContext)sender;
    foreach (ObjectStateEntry entry in 
        objectContext.ObjectStateManager.GetObjectStateEntries(
            EntityState.Added | EntityState.Modified))
    {
        var stamped = entry.Entity as IUserNameStamped;
        Contract.Assert(stamped != null, 
            "Expected all entities implement IUserNameStamped");
        stamped.UserName = UserNameProvider.UserName;
    }
}

So here, we get all added and modified entries from the ObjectStateManager, and use these to obtain the entities and set their UserName. UserNameProvider is an abstraction, used because I have several applications utilising my object context, each with a different way to obtain the current application user. Note that my code is using Code Contracts.
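The EntityState argument is a [Flags] enum, which is why EntityState.Added | EntityState.Modified selects both kinds of entry in one call. A minimal stand-alone sketch of the flags pattern (this EntityState is a stand-in with the same underlying values as System.Data.EntityState, not the real type):

```csharp
using System;

// Stand-in for System.Data.EntityState (same underlying values).
[Flags]
enum EntityState
{
    Detached = 1,
    Unchanged = 2,
    Added = 4,
    Deleted = 8,
    Modified = 16
}

static class StateFilter
{
    // True when the entry's state is included in the combined filter.
    public static bool Matches(EntityState state, EntityState filter)
    {
        return (state & filter) != 0;
    }
}
```

So a single value with both bits set is passed in, and each entry’s state is tested against it.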

One complication I’ve found is with child entities. Sometimes I have to add the child entity both to its parent object and to the object context, but sometimes it’s enough to simply add the child entity to its parent object. That is:

var entities = ObjectContextFactory.GetObjectContext();
var childEntity = new ChildEntity();
entities.ParentEntities.First().ChildEntities.Add(childEntity);
// entities.ChildEntities.AddObject(childEntity);
entities.SaveChanges();
// Sometimes UserName will not get set without the commented line above, 
// resulting in a NOT NULL constraint violation

I’ve found no rhyme or reason as to why the addition to the ObjectContext is only sometimes required; I’d love hints as to why this is.

Note I’m actually using the unit of work pattern for my application, and I use a unit of work factory rather than an object context factory, but that’s irrelevant to the use of the SavingChanges event in this fashion.
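That per-context hook-up is easy to centralise in the factory. The following is an illustrative stand-in, not Entity Framework’s API; the point is simply that the factory is the one place where the SavingChanges handler gets attached, so callers never need to remember the subscription.

```csharp
using System;

// Stand-in for an ObjectContext exposing a SavingChanges event.
class FakeContext
{
    public event EventHandler SavingChanges;

    public void SaveChanges()
    {
        // Fires just before changes would be persisted.
        SavingChanges?.Invoke(this, EventArgs.Empty);
    }
}

static class UnitOfWorkFactory
{
    // Every context handed out already has the stamping handler attached.
    public static FakeContext Create(EventHandler setUserName)
    {
        var context = new FakeContext();
        context.SavingChanges += setUserName;
        return context;
    }
}
```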

Deploying database contents using Capistrano

I run a pair of Ruby on Rails sites, http://janeallnatt.co.nz and http://postmoderncore.com. I use Capistrano to deploy updates to both of these sites.

These sites are somewhat unusual for Rails sites, as I consider the database for these sites to be part of what I test locally and deploy. There is no data captured from the production sites. Once I built these sites and got Capistrano working, I realised that the database should be deployed as part of the Capistrano deploy.

I decided to simply dump the entire development database and restore it into my production database, tables and all. This subverts the way Rails migrations are normally used, but if I get my migrations wrong, the development database is what I test against, so that’s the state I want my production tables in.

I use mysqldump to get a dump of the data.

mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}

And to load this dump into the production database

mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}

The only other thing I had to work out was how to get the locally dumped file onto my remote server – it proved to be pretty easy

	filename = "dump.#{Time.now.strftime '%Y%m%dT%H%M%S'}.sql"
	remote_path = "tmp/#{filename}"
	on_rollback { 
		delete remote_path
	}
	dumped_sql = `mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}`
	put dumped_sql, remote_path
	run "mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}"

I hooked this in after the deploy:finalize_update Capistrano event, making for the following additions to my deploy.rb file, including configuration.

#TODO: Should come from database.yml
set :local_database, "postmoderncore"
set :local_database_user, "railsuser"
set :local_database_password, "railsuser"
set :remote_database, "p182r822_pmc"
set :remote_database_user, "p182r822_user"
set :remote_database_password, "T673eoTc4SWb"

after "deploy:finalize_update", :update_database

desc "Upload the database to the server"
task :update_database, :roles => :db, :only => { :primary => true } do
	filename = "dump.#{Time.now.strftime '%Y%m%dT%H%M%S'}.sql"
	remote_path = "tmp/#{filename}"
	on_rollback { 
		delete remote_path
	}
	dumped_sql = `mysqldump --user=#{local_database_user} --password=#{local_database_password} #{local_database}`
	put dumped_sql, remote_path
	run "mysql --user=#{remote_database_user} --password=#{remote_database_password} #{remote_database} < #{remote_path}"
end

There is an issue where you have the new codebase pointing at the old database for a short period of time. For a high-visibility site, I’d extend this approach to have multiple databases, so you load a different database for each version of the site. Then upgrading the site atomically switches from the old codebase pointing to the old database to the new codebase pointing to the new database.
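A hedged sketch of that versioned-database idea (all names here are hypothetical, not from my actual deploy.rb): derive the database name from the release, restore the dump into it, and generate that release’s database.yml to point at the same name, so the codebase and its database switch together.

```ruby
# Hypothetical sketch: one database per release, so activating a new
# release atomically switches both the codebase and the database.
def versioned_database(base_name, release_timestamp)
  "#{base_name}_#{release_timestamp}"
end

# Restore the uploaded dump into the release's own database; the
# generated database.yml for the release would name the same database.
def restore_command(user, password, database, dump_path)
  "mysql --user=#{user} --password=#{password} #{database} < #{dump_path}"
end
```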

I’m sure that from this base, extension for more complex scenarios would be possible. For example, if you wanted some user-generated content, you could restrict the database dump to only the tables containing non-user-generated data.
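mysqldump accepts an explicit list of tables after the database name, so the dump could be limited to just the content tables. A hypothetical sketch (the table names are invented for illustration):

```ruby
# Hypothetical sketch: dump only the listed content tables, leaving any
# user-generated tables on the server untouched.
def dump_command(user, password, database, tables)
  "mysqldump --user=#{user} --password=#{password} #{database} #{tables.join(' ')}"
end
```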