Using jQuery to create a table with folding detail rows

I improved an implementation of a table in which each row is clickable to toggle the visibility of a detail row below it. The previous implementation gave each row a numbered id, and attached a separate onClick function to each row. I realised a more efficient approach would be to apply appropriate classes to the table, and use a single function that runs across the table and attaches the appropriate event handlers.

The table is structured approximately like this (I’ve left out a header row and other complications):

<table id="summarydetailtable">
    <tr class="summary">
        <td>Value1</td>
        <td>Value2</td>
    </tr>
    <tr class="hide">
        <td colspan="2">Additional details</td>
    </tr>
    ... more rows here ...
</table>

Each summary row of the table has the class summary. The class hide is used to hide the detail rows – all the detail rows start with this class set. When a summary row is clicked for the first time, the class selected is applied to that row, and the class hide is removed from its detail row (the row below it). When the row is clicked again, the class selected is removed from that row, and the class hide is applied to the detail row.
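For completeness, the CSS behind the two classes can be as simple as this (the selected styling here is an illustrative assumption – only the display: none on hide is essential to the folding behaviour):

```css
/* A detail row with this class is folded away */
tr.hide {
    display: none;
}

/* Cosmetic highlight for a summary row whose details are open */
tr.selected {
    background-color: #e8f0fe;
}
```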

I realised that the jQuery traversal method next() would make my job easy:

var select = function(control) {
    $(control)
        .addClass("selected")
        .next()
        .removeClass("hide");
};
var unselect = function(control) {
    $(control)
        .removeClass("selected")
        .next()
        .addClass("hide");
};

So to select a summary row, we apply the selected class to that row, move to the next element (the detail row below it), and show it by removing the hide class.

Wiring this up looks like this:

namespace.registerFoldoutList = function(tableId) {
    var tableSelector = "#" + tableId;
    var rowSelector = tableSelector + " tr.summary";

    var select = function(control) {
        $(control)
            .addClass("selected")
            .next()
            .removeClass("hide");
    };
    var unselect = function(control) {
        $(control)
            .removeClass("selected")
            .next()
            .addClass("hide");
    };

    $(document).ready(function() {
        $(rowSelector).click(function(e) {
            var control = $(this);
            if (control.hasClass("selected")) {
                unselect(control);
            } else {
                select(control);
            }
        });
    });
}

I’ve seen people saying things like “why use jQuery, when you can just use JavaScript?”, and as browsers’ JavaScript support gets more mature, this is a fair point. For me, the simplicity and ease of jQuery-style coding appeals – I especially enjoy the way function calls are chained.
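As a small aside, here’s roughly what the toggle looks like without jQuery (my own sketch, not from the original code – it assumes classList support, and takes any object exposing classList and nextElementSibling, which also makes the logic easy to exercise outside a browser):

```javascript
// Toggle a summary row: flip its "selected" class, and the "hide" class
// on the detail row that follows it.
function toggleRow(summaryRow) {
    var detailRow = summaryRow.nextElementSibling;
    if (summaryRow.classList.contains("selected")) {
        summaryRow.classList.remove("selected");
        detailRow.classList.add("hide");
    } else {
        summaryRow.classList.add("selected");
        detailRow.classList.remove("hide");
    }
}

// In a browser, the wiring would look something like:
// document.querySelectorAll("#summarydetailtable tr.summary").forEach(function (row) {
//     row.addEventListener("click", function () { toggleRow(row); });
// });
```

The chained jQuery version still reads more nicely to me, but the dependency-free version is perfectly workable.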

The next challenge was a change request to add an Expand All button to the heading above the table. The markup is approximately like this:

<div class="heading_box">
    <span class="results">
        <a class="open expand_all" href="#">Expand all</a>
    </span>
    <h2>Table heading</h2>
</div>
<table id="summarydetailtable">
    <tr class="summary">
        <td>Value1</td>
        <td>Value2</td>
    </tr>
    <tr class="hide">
        <td colspan="2">Additional details</td>
    </tr>
    ... more rows here ...
</table>

The class of the Expand all link should toggle between open and close, and all the detail rows should be hidden or shown as appropriate.

Extending my original function to deal with Expand All proved pretty simple:

namespace.registerFoldoutList = function(tableId, registerExpandAll) {
    var tableSelector = "#" + tableId;
    var headingSelector = ".heading_box";
    var expandAllSelector = ".expand_all";
    var rowSelector = tableSelector + " tr.summary";
    // Tag names whose clicks should not toggle the fold, so that e.g. links
    // inside a cell still work normally (the values here are illustrative)
    var noFoldTags = ["A", "INPUT"];

    var select = function(control) {
        $(control)
            .addClass("selected")
            .next()
            .removeClass("hide");
    };
    var unselect = function(control) {
        $(control)
            .removeClass("selected")
            .next()
            .addClass("hide");
    };

    $(document).ready(function() {
        if (registerExpandAll) {
            var expandAll = $(tableSelector)
                .prevAll(headingSelector).find(expandAllSelector);

            if (expandAll.length > 0) {
                $(expandAll).click(function() {
                    var control = $(this);
                    if (control.hasClass("open")) {
                        control
                            .removeClass("open")
                            .addClass("close")
                            .text("Collapse all");

                        $(rowSelector)
                            .each(function() { select(this); });
                    } else {
                        control
                            .removeClass("close")
                            .addClass("open")
                            .text("Expand all");

                        $(rowSelector)
                            .each(function() { unselect(this); });
                    }
                });
            }
        }

        $(rowSelector).click(function(e) {
            if ($.inArray(e.target.tagName, noFoldTags) < 0) {
                var control = $(this);
                if (control.hasClass("selected")) {
                    unselect(control);
                } else {
                    select(control);
                }
            }
        });
    });
}

In the selector to find the Expand All control, $(tableSelector).prevAll(headingSelector).find(expandAllSelector), I use prevAll because the heading box is not always the element immediately before the table. It’s pretty easy to see what’s going on. If an Expand All element is found, its click event is wired up.

On a click, the event handler works out whether the Expand All control is currently open, updates its CSS class and text accordingly, and then selects or unselects all the summary rows as appropriate. The only bit of this code I don’t really like is having the text “Collapse all” and “Expand all” hardwired into the JavaScript. If I did this now, I’d consider attaching the text using HTML5 data attributes, or finding some similar scheme to include the text in the HTML of the table and heading.

So the next time you’re looking at adding some JavaScript to a certain type of element on a page, rather than individually wiring up the elements, consider whether you can apply an approach like this, where you add behaviour to elements that match a certain CSS selector. It’s much easier to understand, and makes for a better experience for your website users: rather than lots of JavaScript being generated every time your page is loaded, the behaviour lives in your common JavaScript file, which will be cached, and only a small call to your registration function is needed on each page. And rather than having JavaScript scattered throughout the pages of your site, it lives in a centralised location.

MSBuild and including extra files from multiple builds

Note: I’ve edited this blog post, as the original version had whitespace in the DestinationRelativePath element, which doesn’t work – see http://stackoverflow.com/questions/8218374/msbuild-build-package-including-extra-files/ for more detail.

I was involved in changing an existing web application to be packaged by MSDeploy recently.

The package had to include files from external directories, as there are images, CSS and JavaScript files that come from outside the web application project. I needed to work out how to do this with MSDeploy.

My starting point was Sayed’s excellent article on extending CopyAllFilesToSingleFolder. The rest of this article assumes you’ve already read his article. My other major reference was MSBuild: By Example, particularly Understanding the Difference Between @ and %.

Sayed’s article didn’t give me two things I needed:

  • The ability to specify multiple sources, each with different target subdirectories.
  • The ability to check that the file doesn’t already exist in the target directory.

Working out just how to make multiple sources with different target subdirectories possible took quite some investigation, and trial and error with $, @ and %. I ended up with the following approach.

I have a Common.Targets file that defines a number of useful shared targets, which I import into my projects as required.

Within the projects where I need to copy custom files, I add the following two pieces of XML after the import of my Common.Targets file.

Firstly, I extend the CopyAllFilesToSingleFolderForPackageDependsOn property in the following way:

<PropertyGroup>
  <CopyAllFilesToSingleFolderForPackageDependsOn>
    DefineCustomFiles;
    CustomCollectFiles;
    $(CopyAllFilesToSingleFolderForPackageDependsOn);
  </CopyAllFilesToSingleFolderForPackageDependsOn>
</PropertyGroup>

Edit: CopyAllFilesToSingleFolderForPackageDependsOn has been renamed to CopyAllFilesToSingleFolderForMsdeployDependsOn in Visual Studio 2012. Thanks to Scott Stafford for pointing this out in his comment on the post.

This is very similar to what Sayed does, except I have two targets defined here. DefineCustomFiles creates an ItemGroup containing the files to be copied, and is defined in each project. An example looks like this:

<Target Name="DefineCustomFiles">
  <ItemGroup>
    <CustomFilesToInclude Include="$(IncludeRootDir)\images\**\*.*">
      <Dir>images</Dir>
    </CustomFilesToInclude>
    <CustomFilesToInclude Include="$(IncludeRootDir)\css\**\*.css">
      <Dir>css</Dir>
    </CustomFilesToInclude>
    <CustomFilesToInclude Include="$(IncludeRootDir)\includes\**\*.js">
      <Dir>includes</Dir>
    </CustomFilesToInclude>
  </ItemGroup>
</Target>

This defines an ItemGroup CustomFilesToInclude, that includes the files in each of the given directories, with each file having the metadata Dir set as shown.
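DefineCustomFiles assumes $(IncludeRootDir) is already set – it’s an ordinary MSBuild property, defined in the project or an imported file along these lines (the path shown is illustrative, not the real one):

```xml
<PropertyGroup>
  <IncludeRootDir>..\..\SharedContent</IncludeRootDir>
</PropertyGroup>
```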

CustomCollectFiles is defined in Common.Targets. It uses the CustomFilesToInclude ItemGroup defined in DefineCustomFiles to define FilesForPackagingFromProject, as Sayed’s example shows.

<Target Name="CustomCollectFiles">
  <ItemGroup>
    <FilesForPackagingFromProject Include="@(CustomFilesToInclude)">
      <DestinationRelativePath>%(CustomFilesToInclude.Dir)\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
  </ItemGroup>
</Target>

This looks very simple, but the combination of @ and % syntax was the hard part of the exercise. I couldn’t use <FilesForPackagingFromProject Include="%(CustomFiles.Identity)"> as in Sayed’s example, because using the Identity metadata prevents access to the Dir metadata defined earlier to specify the destination. I ended up needing Include="@(CustomFilesToInclude)" so I could still access that metadata. It took some more trial and error to find the correct syntax to reference the metadata of each item of CustomFilesToInclude, using the % syntax as shown.
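If MSBuild batching is new to you, the distinction is worth spelling out: @(Items) expands to the whole item list at once, while a %(Items.Metadata) reference makes the containing element repeat once per distinct metadata value. A minimal standalone illustration (not part of the project files):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Thing Include="a.css"><Dir>css</Dir></Thing>
    <Thing Include="b.js"><Dir>includes</Dir></Thing>
  </ItemGroup>
  <Target Name="ShowBatching">
    <!-- Expands the whole list in one go: "All: a.css;b.js" -->
    <Message Text="All: @(Thing)" />
    <!-- Batching: the task runs once per distinct Dir value -->
    <Message Text="%(Thing.Dir) contains %(Thing.Identity)" />
  </Target>
</Project>
```

It’s the same mechanism in CustomCollectFiles that lets %(CustomFilesToInclude.Dir) give each copied file its own destination subdirectory.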

I then extended the CustomCollectFiles target to check that the files don’t already exist in the main project directory.

<Target Name="CustomCollectFiles">
  <ItemGroup>
    <FilesForPackagingFromProject Include="@(CustomFilesToInclude)">
      <DestinationRelativePath>%(CustomFilesToInclude.Dir)\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
    <FilesForPackagingFromProject Include="@(CustomFilesToIncludeSkipExistingCheck)">
      <DestinationRelativePath>%(CustomFilesToIncludeSkipExistingCheck.Dir)\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
  </ItemGroup>
  <Error Text="Custom file exists in project files already: %(CustomFilesToInclude.FullPath)"
    Condition="Exists('$(MainProjectRootDir)\%(CustomFilesToInclude.Dir)\%(RecursiveDir)%(Filename)%(Extension)')" />
</Target>

You can see that the syntax I use in the Condition of the Error is the same as the syntax used in the DestinationRelativePath element above.

I toyed with the idea of adding another piece of metadata to the items within CustomFilesToInclude to indicate whether the Exists check applies to each item. But after a little experimentation I decided it was simpler to use two item groups: any items that do not require the check go into CustomFilesToIncludeSkipExistingCheck.

So at the end of this journey I have learnt a few things: Sayed is always your first resource to search if you have MSBuild questions; the MSDeploy pipeline is extensible in a useful fashion; MSBuild can be devilishly confusing and take a fair amount of trial and error for those who don’t intimately understand it, especially when you try and do anything complex with groups of files.

To use MSDeploy’s extensibility, you need to use MSBuild. However, when not using MSDeploy, I’d like to avoid MSBuild. So the next time I start a project that promises to have any complexity, I’ll be looking for a build framework that makes it easy to leave complex behaviour outside of MSBuild. I investigated psake after seeing that Rhino.Mocks uses it, and liked what I saw. I also like that you don’t have to learn a specific “make” language – psake’s decision to leverage an existing scripting language is smart and practical.

Parsing Event Logs using System.Diagnostics.Eventing.Reader

I’ve just had to analyse a bunch of Event Logs that contain exceptions, produced from a load testing exercise. I needed to turn them into a summary of the counts of each class of exception that occurred.

The exceptions belonging to each class of exception message didn’t generate exactly the same text in the event log data every time. So I decided the simplest way to categorise them was to give each category the ability to work out whether it matches event log data using one of two criteria: either the data contains a substring, or the data matches a Regex. Flexible enough for the fairly simple requirements of this scenario.

After a little research, I found out about the existence of the System.Diagnostics.Eventing.Reader namespace, whose classes will parse event logs from either the local or a remote computer, or from an EVTX file. The event logs I was parsing already needed to be saved into a file and archived after each run, so I made the parser use the archived files. I’m still keen to play with parsing logs directly off remote computers at some point.

Here’s some code. I haven’t included all the boring declarations of constants and the like in these code snippets, just the interesting bits. Some of it’s a wee bit hack-ish and not properly structured or parameterised; as this is a small utility for occasional use, the effort to make it really nice isn’t currently justified.

The EventLogMatcher class itself:

public class EventLogMatcher
{
  public EventLogMatcher(string dataContains, string dataRegex, string description)
  {
    if (!(string.IsNullOrEmpty(dataContains) ^ 
        string.IsNullOrEmpty(dataRegex)))
      throw new ArgumentException
        ("One and only one of dataContains and dataRegex must be specified");
    DataContains = string.IsNullOrEmpty(dataContains) 
      ? null
      : dataContains;
    DataRegex = string.IsNullOrEmpty(dataRegex) 
      ? null 
      : new Regex(dataRegex, RegexOptions.Singleline);
    Description = description;
  }

  public string DataContains { get; private set; }

  public Regex DataRegex { get; private set; }

  public string Description { get; private set; }

  public bool IsDataMatch(string data)
  {
    return (DataContains != null && data.Contains(DataContains)) || 
      (DataRegex != null && DataRegex.IsMatch(data));
  }
}

The main loop of the program itself – it processes all the EVTX files in the SourcePath directory:

foreach (string sourceFile in Directory.EnumerateFiles(SourcePath, SourcePattern))
{
  string outputFile = DeriveOutputFilename(sourceFile);
  IDictionary<EventLogMatcher, int> logMatchCounts = GetInitialisedLogTypeCounts();
  List<UnmatchedEventLog> unmatchedLogs = new List<UnmatchedEventLog>();

  EventLogReader logReader = new EventLogReader(new EventLogQuery(sourceFile, PathType.FilePath));
  for (EventRecord eventInstance = logReader.ReadEvent(); eventInstance != null;
    eventInstance = logReader.ReadEvent())
  {
    EventLogMatcher matcher = logMatchCounts.Keys.
      SingleOrDefault(key => key.IsDataMatch(GetData(eventInstance)));
    if (matcher == null)
      unmatchedLogs.Add(ToUnmatchedLog(eventInstance));
    else
      logMatchCounts[matcher]++;
  }
  WriteMatchedResults(outputFile, logMatchCounts, unmatchedLogs);
}

I love using System.Xml.Linq. It makes parsing XML files really simple and quite readable, especially for a utility like this where proper error handling isn’t important.

The code for getting the dictionary of EventLogMatchers and counts:

private static IDictionary<EventLogMatcher, int> GetInitialisedLogTypeCounts()
{
  return LogTypes.ToDictionary(type => type, type => 0);
}

private static IEnumerable<EventLogMatcher> GetParsedLogTypes()
{
  XDocument source = XDocument.Load(KnownEventLogTypesFileName);
  return source.Elements(EventLogsTypesXName).
    Elements(EventLogsTypeXName).
    Select(ParseAsLogMatcher);
}

private static EventLogMatcher ParseAsLogMatcher(XElement element)
{
  string dataContains = element.GetValueOfOptionalAttribute(DataContainsXName);
  string dataRegex = element.GetValueOfOptionalAttribute(DataRegexXName);
  string description = element.Attribute(DescriptionXName).Value;
  return new EventLogMatcher(dataContains, dataRegex, description);
}

internal static class XElementExtensions
{
  public static string GetValueOfOptionalAttribute(this XElement element, XName attributeName)
  {
    XAttribute attribute = element.Attribute(attributeName);
    return attribute == null ? null : attribute.Value;
  }
}

The actual XML for KnownEventLogTypes.xml is structured like this:

<?xml version="1.0" encoding="utf-8" ?>
<EventLogsTypes>
  <EventLogsType 
     dataContains="System.Net.WebException: The operation has timed out" 
     description="The operation has timed out" />
  <EventLogsType
     dataRegex="Internal failure Sorry, there was an error\. Please try again later.*AnAgent\.GetSomethingInteresting"
     description="Internal failure in AnAgent. GetSomethingInteresting" />
</EventLogsTypes>

Not rocket science, but simple and effective.

Next time around I’d look at PowerShell to do this: Get-EventLog also looks like a simple way to deal with event logs. But I’m glad to have had the opportunity to learn about System.Diagnostics.Eventing.Reader.