Mocking ASP.NET providers

When playing around with ASP.NET membership, I found myself in a situation where I wanted to mock the ASP.NET Providers. This is something the design of providers makes non-trivial. Mark Seemann summarises: “Since a Provider creates instances of interfaces based on XML configuration and Activator.CreateInstance, there’s no way to inject a dynamic mock.” See Provider is not a pattern.

I had a look around to see what others were doing. I found a post, Mocking membership provider, which proposes adding mocked providers to the provider collection dynamically. It seems like an elegant solution, but I couldn’t get it to work after a little experimentation.

In the end, I came up with a solution that is not the most elegant, but is very easy to use and to understand.

I create an implementation of each provider I want to mock. The implementation contains a mock of that provider type, and each method and property of the implementation forwards its calls to that mock. The mock is accessible via a static property of the provider implementation, so that test code can interact with it.

An example implementation:

public class TestRoleProvider : RoleProvider
{
	public static void ResetMock()
	{
		Mock = new Mock<RoleProvider>();
	}

	public static Mock<RoleProvider> Mock { get; private set; }

	#region RoleProvider implementation

	public override void AddUsersToRoles(string[] usernames, string[] roleNames)
	{
		Mock.Object.AddUsersToRoles(usernames, roleNames);
	}

	public override string ApplicationName
	{
		get { throw new NotImplementedException(); }
		set { throw new NotImplementedException(); }
	}

	// Other implementations omitted
}

Note the static members controlling the mock at the top. Note also that I’ve simply implemented all methods and properties of RoleProvider as not implemented using Visual Studio tooling, and then updated the implementations to forward calls to my mock as I need them.

Wiring up the provider framework to use this implementation is easy. Just add the following config to the app.config of your unit test project:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
	<system.web>
		<roleManager defaultProvider="TestRoleProvider" enabled="true">
			<providers>
				<add name="TestRoleProvider"
					 type="TestProjectAssemblyName.TestRoleProvider, TestProjectAssemblyName" />
			</providers>
		</roleManager>
	</system.web>
</configuration>

Test code utilising this mock looks like the following:

[TestInitialize]
public void TestInitialize() 
{
	TestRoleProvider.ResetMock();
}
		
[TestMethod]
public void ReturnsNothingWhenNoUsersExist()
{
	var roles = new string[] { };
	TestRoleProvider.Mock.
		Setup(m => m.GetAllRoles()).
		Returns(roles);

	var result = new GetAllRolesQuery().Execute();

	Assert.IsTrue(!result.Any());
}
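For context, the GetAllRolesQuery under test isn’t shown above. A hypothetical sketch of its shape, assuming it simply wraps the static Roles API (which resolves the configured provider — here, TestRoleProvider):

```csharp
using System.Collections.Generic;
using System.Web.Security;

// Hypothetical query class: Roles.GetAllRoles() forwards to the
// configured RoleProvider, so in tests the call chain ends at the
// Moq mock inside TestRoleProvider.
public class GetAllRolesQuery
{
	public IEnumerable<string> Execute()
	{
		return Roles.GetAllRoles();
	}
}
```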

Automatically username stamping entities as they’re saved

The application I am currently working on has a requirement to audit which application user last created or updated database records. All tables in the database are required to have an nvarchar column UserName.

I didn’t want this concern to leak into my application. After some investigation I discovered that ObjectContext has the SavingChanges event that would be ideal for my purposes.

So the creation of my ObjectContext becomes

var entities = new MyEntities();
entities.SavingChanges += SetUserNameEvent;

I originally thought that SetUserNameEvent would have to use reflection to obtain and set the UserName property. However, I found a way to use T4 to generate code resulting in all entities with the UserName property implementing a common interface (IUserNameStamped). I’ve written a blog post talking about the T4 code.
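The generated interface itself is trivial. A sketch of what the T4 template produces (the exact generated shape may differ):

```csharp
// Sketch of the generated interface: every entity with a UserName
// column implements this, so SavingChanges can treat them uniformly.
public interface IUserNameStamped
{
	string UserName { get; set; }
}
```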

So with all my entities implementing this common interface, SetUserNameEvent is then

/// <summary>
/// Sets the user name of all added and modified 
/// entities to the username provided by
/// the <see cref="UserNameProvider"/>. 
/// </summary>
private void SetUserNameEvent(object sender, EventArgs e)
{
    Contract.Requires<ArgumentException>(
        sender is ObjectContext, 
        "sender is not an instance of ObjectContext");
    var objectContext = (ObjectContext)sender;
    foreach (ObjectStateEntry entry in 
        objectContext.ObjectStateManager.GetObjectStateEntries(
            EntityState.Added | EntityState.Modified))
    {
        var stamped = entry.Entity as IUserNameStamped;
        Contract.Assert(stamped != null, 
            "Expected all entities implement IUserNameStamped");
        stamped.UserName = UserNameProvider.UserName;
    }
}

So here, we get all added and modified entries from the ObjectStateManager, and use them to obtain the entities and set their UserName. UserNameProvider is an abstraction, used because I have several applications utilising my object context, each with a different way of obtaining the current application user. Note that my code is using Code Contracts.

One complication I’ve found is with child entities. Sometimes I have to add the child entity both to its parent object and to the object context, but sometimes it’s enough to simply add the child entity to its parent object. That is:

var entities = ObjectContextFactory.GetObjectContext();
var childEntity = new ChildEntity();
entities.ParentEntities.First().ChildEntities.Add(childEntity);
// entities.ChildEntities.AddObject(childEntity);
entities.SaveChanges();
// Sometimes UserName will not get set without the commented line above, 
// resulting in a NOT NULL constraint violation

I’ve found no rhyme or reason as to why the addition to the ObjectContext is only sometimes required; I’d love hints as to why this is.

Note I’m actually using the unit of work pattern for my application, and I use a unit of work factory rather than an object context factory, but that’s irrelevant to the use of the SavingChanges event in this fashion.
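Since every context needs the handler attached, the wiring naturally lives in the factory. A minimal sketch, assuming the ObjectContextFactory named earlier and that SetUserNameEvent is made static or otherwise accessible here:

```csharp
// Hypothetical factory sketch: attaching the handler in one place
// means no caller can forget to wire up username stamping.
public static class ObjectContextFactory
{
	public static MyEntities GetObjectContext()
	{
		var entities = new MyEntities();
		entities.SavingChanges += SetUserNameEvent;
		return entities;
	}
}
```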

Extensible processing classes using reflection

I recently wanted to build an extensible set of processing classes. Each class can process certain objects it is provided.

I decided the simplest way to do this was to create a processor interface. The set of processing classes is then all classes that implement this interface. I use reflection to find all the processors: that is, all implementations of the processor interface.

Assuming all processing is done on the object itself, the processor interface looks like this

public interface IProcessor
{
	bool CanProcess(ITarget target);

	void Process(ITarget target);
}

One of the processor implementations could look something like this

public class UpdateTotalProcessor : IProcessor
{
	public bool CanProcess(ITarget target)
	{
		return target.Items.Any();
	}

	public void Process(ITarget target)
	{
		target.Total = target.Items.Sum(item => item.Value);
	}
}

To utilise the processors, you’d end up with code similar to the following

private static readonly IEnumerable<IProcessor> Processors = InstancesOfMatchingTypes<IProcessor>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public void Process(ITarget target)
{
	foreach(IProcessor processor in Processors.Where(p => p.CanProcess(target)))
		processor.Process(target);
}

Note that with this implementation, multiple processors can potentially match, and therefore process, a target. Also note that there is no ordering; adding ordering would be an easy extension.

With this scheme, adding new processors is dead easy. Simply add a new implementation of IProcessor to the assembly, and it will be automatically picked up and used.
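TypeInstantiator isn’t shown above. A minimal reflection-based sketch of what its InstancesOfMatchingTypes might do (the real implementation may well differ):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical sketch: find all concrete types implementing T in the
// given assembly, and instantiate each via its parameterless constructor.
public sealed class TypeInstantiator
{
	public static readonly TypeInstantiator Instance = new TypeInstantiator();

	public IEnumerable<T> InstancesOfMatchingTypes<T>(Assembly assembly)
	{
		return assembly.GetTypes().
			Where(type => typeof(T).IsAssignableFrom(type) &&
				!type.IsAbstract && !type.IsInterface).
			Select(type => (T)Activator.CreateInstance(type)).
			ToList();
	}
}
```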

Providing classes that derive one type from another is also simple using this scheme.

public interface IDeriver
{
	bool CanDerive(ITarget target);

	IDerived Derive(ITarget target);
}

public class SampleDeriver : IDeriver
{
	public bool CanDerive(ITarget target)
	{
		return true;
	}

	public IDerived Derive(ITarget target)
	{
		return new Derived(target);
	}
}

Obviously, applying more than one deriver makes no sense in this context.

private static readonly IEnumerable<IDeriver> Derivers = InstancesOfMatchingTypes<IDeriver>();
		
private static IEnumerable<T> InstancesOfMatchingTypes<T>()
{
	Assembly assembly = Assembly.GetExecutingAssembly();
	return TypeInstantiator.Instance.InstancesOfMatchingTypes<T>(assembly);
}

public IDerived Derive(ITarget target)
{
	IDeriver deriver = Derivers.FirstOrDefault(deriver => deriver.CanDerive(target));
	return deriver == null
		? null
		: deriver.Derive(target);
}

Once again, ordering is undefined, so if you have defined your derivers in such a way that more than one matches a target, you will end up with unpredictable results.

Updating the registry using .NET and LogParser

I have discovered a need to be able to search and replace registry values. I originally thought about using Powershell but after reading this blog post about Powershell performance with the registry, I decided to use .NET. I quickly encountered the idea of using LogParser to read the registry at high speed, and decided this was a fruitful avenue.

The background to this need is that when you use a custom profile location, you can only use Chrome as your default browser by editing the registry. I did this manually once. Then when I found the keys had reset themselves, I decided coding something to update the registry for me would be interesting.

The first stage was to get the LogParser COM interop built. This was pleasantly easy: as simple as running tlbimp "C:\Program Files (x86)\Log Parser 2.2\LogParser.dll" /out:Interop.MSUtil.dll, adding the DLL as a reference to my project, adding using statements to Program.cs, and then writing some code. I started by getting the search going.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using LogQuery = Interop.MSUtil.LogQueryClass;
using RegistryInputFormat = Interop.MSUtil.COMRegistryInputContextClass;
using RegRecordSet = Interop.MSUtil.ILogRecordset;
using System.Diagnostics;
using Microsoft.Win32;
using System.Collections.Generic;

namespace FreeSpiritSoftware.ChromeRegistryCustomProfile
{
	public class Program
	{
		private const string DefaultChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" -- ""%1""";

		public static void Main(string[] args)
		{
			RegRecordSet rs = null;

			Stopwatch stopWatch = new Stopwatch();
			stopWatch.Start(); 
			try
			{
				LogQuery qry = new LogQuery();
				RegistryInputFormat registryFormat = new RegistryInputFormat();
				string query = string.Format(@"SELECT Path, ValueName from \HKCR, \HKCU, \HKLM, \HKCC, \HKU where Value = '{0}'", DefaultChromeCall);
				rs = qry.Execute(query, registryFormat);
				for (; !rs.atEnd(); rs.moveNext())
				{
					string path = rs.getRecord().toNativeString(0);
					string valueName = rs.getRecord().toNativeString(1);
					Console.WriteLine(path);
					Console.WriteLine(valueName);
					Console.WriteLine("--");
				}
			}
			finally
			{
				if (rs != null)
					rs.close();
			}
			stopWatch.Stop();
			Console.WriteLine(stopWatch.Elapsed.TotalSeconds + " seconds");
			Console.ReadKey(false);
		}
	}
}

You’ll see I explicitly reference the five registry keys in the FROM statement of the query I give LogParser, even though I’m searching the whole registry. This is because when I tried FROM /, I got two results per root key of the registry: one using the abbreviated root key name, and one using its full name (e.g. I’d get HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command and HKCR\ChromeHTML\shell\open\command).

So once I had the code above working, the next step was to actually access and update the keys using Microsoft.Win32.Registry. This proved to be more complex than I had expected as (a) you have to access the root keys as static properties of Registry, and (b) from a particular key, you can only access its immediate subkeys. I’m sure there are libraries that make matters simpler, but working around was easy enough.

To deal with the root keys, I created a dictionary to use to look up root key abbreviations from LogParser, and return the root key objects. I created a recursive function to move through subkeys to finally access the subkey referenced by a path.

private static readonly IDictionary<string, RegistryKey> RegistryLookup = new Dictionary<string, RegistryKey>
{
	{ "HKCR", Registry.ClassesRoot },
	{ "HKCU", Registry.CurrentUser },
	{ "HKLM", Registry.LocalMachine },
	{ "HKCC", Registry.CurrentConfig },
	{ "HKU", Registry.Users },
};

private static RegistryKey GetSubKey(IEnumerable<string> splitPath)
{
	RegistryKey rootKey = RegistryLookup[splitPath.First()];
	return GetSubKey(rootKey, splitPath.Skip(1));
}

private static RegistryKey GetSubKey(RegistryKey key, IEnumerable<string> splitPath)
{
	var theRest = splitPath.Skip(1);
	return theRest.Any()
		? GetSubKey(key.OpenSubKey(splitPath.First()), splitPath.Skip(1))
		: key.OpenSubKey(splitPath.First(), writable: true);
}

So for HKCR\ChromeHTML\shell\open\command, it’ll split off HKCR and get the root key, call GetSubKey(Registry.ClassesRoot, { "ChromeHTML", "shell", "open", "command" }), which will get the ChromeHTML subkey within HKCR and call GetSubKey(ChromeHTML, { "shell", "open", "command" }), and so on, until it calls GetSubKey(open, { "command" }), at which point recursion ends and the “command” key is opened writable and returned.

From this point things were easy. The only other complication was that LogParser represents the default key as "(Default)", whereas Microsoft.Win32.Registry represents it as string.Empty.

The final code looks like this. Parameterisation, tidying, etc is left as an exercise for the reader.

using System;
using System.Linq;
using System.Runtime.InteropServices;
using LogQuery = Interop.MSUtil.LogQueryClass;
using RegistryInputFormat = Interop.MSUtil.COMRegistryInputContextClass;
using RegRecordSet = Interop.MSUtil.ILogRecordset;
using System.Diagnostics;
using Microsoft.Win32;
using System.Collections.Generic;

namespace FreeSpiritSoftware.ChromeRegistryCustomProfile
{
	public class Program
	{
		private const string DefaultChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" -- ""%1""";
		private const string ReplacementChromeCall = @"""C:\Users\sam\AppData\Local\Google\Chrome\Application\chrome.exe"" --user-data-dir=""E:\settings\chrome-profiles""  -- ""%1""";
		private static readonly char[] PathSeparator = new[] { '\\' };
		private static readonly IDictionary<string, RegistryKey> RegistryLookup = new Dictionary<string, RegistryKey>
		{
			{ "HKCR", Registry.ClassesRoot },
			{ "HKCU", Registry.CurrentUser },
			{ "HKLM", Registry.LocalMachine },
			{ "HKCC", Registry.CurrentConfig },
			{ "HKU", Registry.Users },
		};

		public static void Main(string[] args)
		{
			RegRecordSet rs = null;

			Stopwatch stopWatch = new Stopwatch();
			stopWatch.Start();
			try
			{
				LogQuery qry = new LogQuery();
				RegistryInputFormat registryFormat = new RegistryInputFormat();
				string query = string.Format(@"SELECT Path, ValueName from \HKCR, \HKCU, \HKLM, \HKCC, \HKU where Value = '{0}'", DefaultChromeCall);
				rs = qry.Execute(query, registryFormat);
				for (; !rs.atEnd(); rs.moveNext())
				{
					string path = rs.getRecord().toNativeString(0);
					string valueName = rs.getRecord().toNativeString(1);
					if (valueName == "(Default)")
						valueName = string.Empty;
					Console.WriteLine(path);
					Console.WriteLine(valueName);
					String[] splitPath = path.Split(PathSeparator);
					RegistryKey key = GetSubKey(splitPath);
					Console.WriteLine(key.GetValue(valueName));
					key.SetValue(valueName, ReplacementChromeCall);
					Console.WriteLine(key.GetValue(valueName));
					Console.WriteLine("--");
				}
			}
			finally
			{
				if (rs != null)
					rs.close();
			}
			stopWatch.Stop();
			Console.WriteLine(stopWatch.Elapsed.TotalSeconds + " seconds");
			Console.ReadKey(false);
		}

		private static RegistryKey GetSubKey(IEnumerable<string> splitPath)
		{
			RegistryKey rootKey = RegistryLookup[splitPath.First()];
			return GetSubKey(rootKey, splitPath.Skip(1));
		}

		private static RegistryKey GetSubKey(RegistryKey key, IEnumerable<string> splitPath)
		{
			var theRest = splitPath.Skip(1);
			return theRest.Any()
				? GetSubKey(key.OpenSubKey(splitPath.First()), splitPath.Skip(1))
				: key.OpenSubKey(splitPath.First(), writable: true);
		}
	}
}

(Note: I’m aware that there are aliases within the registry, so I’m performing duplicate searches; I was happy to just use a brute-force search.)

Tidy IEqualityComparer with GenericEqualityComparer

Whilst looking through a codebase, I saw implementations of IEqualityComparer<>. Creating an entire implementation of IEqualityComparer<> per use produces quite a bit of boilerplate for a small amount of signal. I realised that a generic implementation of IEqualityComparer<> that takes its definition of equality in its constructor would be very simple to write.

public class GenericEqualityComparer<T> : IEqualityComparer<T>
{
	private readonly Func<T, T, bool> mEqualsFunc;
	private readonly Func<T, int> mGetHashCodeFunc;

	public GenericEqualityComparer(Func<T, T, bool> equalsFunc, 
		Func<T, int> getHashCodeFunc)
	{
		if (equalsFunc == null)
			throw new ArgumentNullException("equalsFunc");
		if (getHashCodeFunc == null)
			throw new ArgumentNullException("getHashCodeFunc");

		mEqualsFunc = equalsFunc;
		mGetHashCodeFunc = getHashCodeFunc;
	}

	public bool Equals(T x, T y)
	{
		return mEqualsFunc(x, y);
	}

	public int GetHashCode(T obj)
	{
		return mGetHashCodeFunc(obj);
	}
}

Creating and using an instance of this class is as simple as

public class TestClass
{
	private static readonly GenericEqualityComparer<Foo> mFooComparer = 
		new GenericEqualityComparer<Foo>(
			(x, y) => x.Id == y.Id,
			obj => obj.Id);

	public IEnumerable<Foo> GetDistinctFoos(IEnumerable<Foo> foos)
	{
		return foos.Distinct(mFooComparer);
	}
}

However, I was a bit embarrassed when I told my boss, Tony Beveridge, about this great use of generics and Funcs I had thought of, and he told me he had actually implemented exactly the same class some months ago.

It’s worth noting that EqualityComparer<T>.Default provides a default implementation using the Equals() and GetHashCode() methods of T.

If you wanted to extend GenericEqualityComparer so you don’t have to provide an implementation for GetHashCode(), you can default mGetHashCodeFunc to always return zero. This will force the Equals function to always be called.
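A sketch of such a constructor overload, keeping the same class:

```csharp
// Constructor overload sketch: a constant hash code puts every item in
// the same hash bucket, so Equals is always consulted. Correct, but
// O(n²) for operations like Distinct on large inputs.
public GenericEqualityComparer(Func<T, T, bool> equalsFunc)
	: this(equalsFunc, obj => 0)
{
}
```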

IEnumerable, ReadOnlyCollection, and the missing interface

I’ve been thinking on and off about the appropriate return signature for a method that returns an immutable list of objects, sparked off by reading Eric Lippert’s article, Arrays considered somewhat harmful, and by my belief that the value of functional programming and the growth of parallelism make immutability desirable most of the time.

However, once you decide to return an immutable collection, what type do you return?

IEnumerable is not really appropriate. The problem is that an IEnumerable may only be evaluable a single time, or may incur a cost for every evaluation you perform. This means that consumers of your method end up using ToList() or ToArray() to flatten the IEnumerable before consuming it, which is wasteful when your method is returning a bounded collection.
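A small illustration of the cost, using a hypothetical expensive Compute function:

```csharp
// Each enumeration of a lazy IEnumerable re-runs the projection, so
// Compute executes six times here rather than three.
IEnumerable<int> results = Enumerable.Range(0, 3).Select(i => Compute(i));
int count = results.Count();       // runs Compute three times
List<int> list = results.ToList(); // runs Compute three more times
```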

So the only choice you have with .NET is ReadOnlyCollection, which is okay, but not ideal, I believe.

Firstly, this involves specifying a concrete type as the return signature. I prefer my method signatures to use interfaces when primitives are not involved, so they specify only behaviour. It also means that you can’t return an object that doesn’t use ReadOnlyCollection as a base class.

The second issue is that ReadOnlyCollection implements ICollection and IList. Whilst methods such as Add are implemented explicitly, the fact that ReadOnlyCollection implements interfaces with methods that are invalid for it creates a class of bugs only findable at run time. Have a look at the following code.

public ReadOnlyCollection<object> GetReadOnly()
{
	ReadOnlyCollection<object> readOnly = new List<object>().AsReadOnly();
	return readOnly;
}

public void ShowIssue()
{
	ReadOnlyCollection<object> readOnly = GetReadOnly();
	// The next line prevented at compile time
	// readOnly.Add(new object());

	// However this code compiles, unfortunately
	IList<object> iList = GetReadOnly();
	iList.Add(new object()); // Fails with an exception at runtime
}

I think that it would have been sensible for .NET to have had an interface that inherits IEnumerable, that represents a readonly bounded collection, called something like IReadOnlyCollection. It would have a Count and allow read only access to the elements by index. ICollection and IList would both inherit this interface, and ReadOnlyCollection would be the implementation of it.
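Such an interface might look like:

```csharp
// Sketch of the proposed interface: bounded and indexable, with no
// mutating members for an implementation to throw on.
public interface IReadOnlyCollection<T> : IEnumerable<T>
{
	int Count { get; }

	T this[int index] { get; }
}
```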

Update: this article doesn’t really cover the differences between immutable and read only. ReadOnlyCollection doesn’t provide any methods to change the collection membership; however, it is only a wrapper around the underlying List, and it does not guarantee that the underlying list is not changed.


Parsing Event Logs using System.Diagnostics.Eventing.Reader

I’ve just had to analyse a bunch of Event Logs that contain exceptions, produced from a load testing exercise. I needed to turn them into a summary of the counts of each class of exception that occurred.

The exceptions belonging to each class of exception message didn’t generate exactly the same text in the event log data every time. So I decided the simplest way to categorise them was to have each category work out whether it matches event log data using one of two criteria: either the data contains a substring, or the data matches a Regex. Flexible enough for the fairly simple requirements of this scenario.

After a little research, I found the System.Diagnostics.Eventing.Reader namespace, which can parse event logs from either the local or a remote computer, or from an EVTX file. The event logs I was parsing already needed to be saved into a file and archived after each run, so I made the parser use the archived files. I’m still keen to play with parsing logs directly off remote computers at some point.

Here’s some code. I haven’t included all the boring declaration of constants and the like in these code snippets, just the interesting bits. Some of it’s a wee bit hack-ish and not properly structured or parameterised; as this is a small utility for occasional use, the effort to make it really nice isn’t currently justified.

The EventLogMatcher class itself:

public class EventLogMatcher
{
  public EventLogMatcher(string dataContains, string dataRegex, string description)
  {
    if (!(string.IsNullOrEmpty(dataContains) ^ 
        string.IsNullOrEmpty(dataRegex)))
      throw new ArgumentException
        ("One and only one of dataContains and dataRegex must be specified");
    DataContains = string.IsNullOrEmpty(dataContains) 
      ? null
      : dataContains;
    DataRegex = string.IsNullOrEmpty(dataRegex) 
      ? null 
      : new Regex(dataRegex, RegexOptions.Singleline);
    Description = description;
  }

  public string DataContains { get; private set; }

  public Regex DataRegex { get; private set; }

  public string Description { get; private set; }

  public bool IsDataMatch(string data)
  {
    return (DataContains != null && data.Contains(DataContains)) || 
      (DataRegex != null && DataRegex.IsMatch(data));
  }
}

The main loop of the program itself – it processes all the EVTX files in the SourcePath directory:

foreach (string sourceFile in Directory.EnumerateFiles(SourcePath, SourcePattern))
{
  string outputFile = DeriveOutputFilename(sourceFile);
  IDictionary<EventLogMatcher, int> logMatchCounts = GetInitialisedLogTypeCounts();
  List<UnmatchedEventLog> unmatchedLogs = new List<UnmatchedEventLog>();

  EventLogReader logReader = new EventLogReader(new EventLogQuery(sourceFile, PathType.FilePath));
  for (EventRecord eventInstance = logReader.ReadEvent(); eventInstance != null; 
    eventInstance = logReader.ReadEvent())
  {
    EventLogMatcher matcher = logMatchCounts.Keys.
      SingleOrDefault(key => key.IsDataMatch(GetData(eventInstance)));
    if (matcher == null)
      unmatchedLogs.Add(ToUnmatchedLog(eventInstance));
    else
      logMatchCounts[matcher]++;
  }
  WriteMatchedResults(outputFile, logMatchCounts, unmatchedLogs);
}

I love using System.Xml.Linq. It makes parsing XML files really simple, and quite readable. Especially for a utility like this where proper error handling isn’t important.

The code for getting the dictionary of EventLogMatchers and counts:

private static IDictionary<EventLogMatcher, int> GetInitialisedLogTypeCounts()
{
  return LogTypes.ToDictionary(type => type, type => 0);
}

private static IEnumerable<EventLogMatcher> GetParsedLogTypes()
{
  XDocument source = XDocument.Load(KnownEventLogTypesFileName);
  return source.Elements(EventLogsTypesXName).
    Elements(EventLogsTypeXName).
    Select(ParseAsLogMatcher);
}

private static EventLogMatcher ParseAsLogMatcher(XElement element)
{
  string dataContains = element.GetValueOfOptionalAttribute(DataContainsXName);
  string dataRegex = element.GetValueOfOptionalAttribute(DataRegexXName);
  string description = element.Attribute(DescriptionXName).Value;
  return new EventLogMatcher(dataContains, dataRegex, description);
}

internal static class XElementExtensions
{
  public static string GetValueOfOptionalAttribute(this XElement element, XName attributeName)
  {
    XAttribute attribute = element.Attribute(attributeName);
    return attribute == null ? null : attribute.Value;
  }
}

The actual XML for KnownEventLogTypes.xml is structured like this

<?xml version="1.0" encoding="utf-8" ?>
<EventLogsTypes>
  <EventLogsType 
     dataContains="System.Net.WebException: The operation has timed out" 
     description="The operation has timed out" />
  <EventLogsType
     dataRegex="Internal failure Sorry, there was an error\. Please try again later.*AnAgent\.GetSomethingInteresting"
     description="Internal failure in AnAgent. GetSomethingInteresting" />
</EventLogsTypes>

Not rocket science, but simple and effective.

Next time around I’d look at Powershell to do this: Get-EventLog also looks like a simple way to deal with event logs. But I’m glad to have had the opportunity to learn about System.Diagnostics.Eventing.Reader.