Posts in code

9.24.2013

Map Then Collect

Iterating through a collection of data and mapping to a new type is a rather routine development task. Here are various ways to do this. At the bottom I have a list of things to consider. What conclusions do you come to?

Note: in LINQ the map operator is named Select

Mapping

for Loop

public List<Output> ForLoop()
{
    var outputs = new List<Output>();
    var inputs = GetInputs();
    for (var i = 0; i < inputs.Count(); i++)
    {
        var input = inputs[i];
        var output = new Output();
        output.Id = Convert.ToInt32(input.Id);
        outputs.Add(output);
    }
    return outputs;
}

foreach Loop

public List<Output> ForEachLoop()
{
    var outputs = new List<Output>();
    foreach (var input in GetInputs())
    {
        var output = new Output();
        output.Id = Convert.ToInt32(input.Id);
        outputs.Add(output);
    }
    return outputs;
}

Functional ForEach

public List<Output> FunctionalForEachLoop()
{
    var outputs = new List<Output>();
    GetInputs()
        .ForEach(input => MapAndAdd(input, outputs));
    return outputs;
}

private void MapAndAdd(Input input, List<Output> outputs)
{
    var output = new Output();
    output.Id = Convert.ToInt32(input.Id);
    outputs.Add(output);
}

Functional Map Then Collect

public List<Output> FunctionalMapThenCollect()
{
    var outputs = GetInputs()
        .Select(input =>
        {
            var output = new Output();
            output.Id = Convert.ToInt32(input.Id);
            return output;
        })
        .ToList();
    return outputs;
}

Functional Constructor Map Then Collect

public List<Output> FunctionalConstructorMapThenCollect()
{
    var outputs = GetInputs()
        .Select(input => new Output(input))
        .ToList();
    return outputs;
}

Adding Filtering

foreach With Filter

public List<Output> ForEachWithFilter()
{
    var outputs = new List<Output>();
    foreach (var input in GetInputs())
    {
        int id;
        if (!Int32.TryParse(input.Id, out id))
        {
            continue;
        }

        var output = new Output();
        output.Id = id;
        outputs.Add(output);
    }
    return outputs;
}

Functional Map Then Filter Then Collect

public List<Output> FunctionalMapThenFilterThenCollect()
{
    var outputs = GetInputs()
        .Where(InputHasValidId)
        .Select(input => new Output(input))
        .ToList();
    return outputs;
}

private bool InputHasValidId(Input input)
{
    int id;
    return Int32.TryParse(input.Id, out id);
}
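
One more variant worth sketching, purely for illustration (it is not part of the list above and the method name is made up): parse once inside the Select and filter out the failures afterward.

public List<Output> FunctionalParseThenFilterThenCollect()
{
    var outputs = GetInputs()
        .Select(input =>
        {
            int id;
            // Map to an Output only when the Id parses; otherwise produce null.
            return Int32.TryParse(input.Id, out id)
                ? new Output { Id = id }
                : null;
        })
        .Where(output => output != null)
        .ToList();
    return outputs;
}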

Considerations

  • Readable
  • Understanding / Confusing
  • Composition
  • Extensible
  • Intent
  • What is explicit?
  • What is implicit?

For reference

public class Input
{
    public string Id { get; set; }
}

public class Output
{
    public Output()
    {
    }

    public Output(Input input)
    {
        Id = Convert.ToInt32(input.Id);
    }

    public int Id { get; set; }
}

12.18.2012

Why requirejs?

Why are we using requirejs? I've even asked it of myself. For me, organizing large javascript projects and managing dependencies led to it. There are lots of great posts about how to use requirejs but I wanted to focus on WHY.

Explicit Dependency Specifications

Open up any significantly sized javascript file with 100+ lines of code, even 50, and tell me, within 5 seconds, what dependencies it has. Can't do it? You can with modules and requirejs.

...
function View() {
    this.name = ko.observable('bob');
    this.age = ko.observable(20);
    ...
}
var view = new View();
ko.applyBindings(view, $('viewPlaceHolder')[0]);    
...

versus

require(['jquery', 'knockout'], function($, ko) {
    // who knows what's in here, but I do know what it needs
});


How often do you use this information as a developer? Think about the value in not spending 5 minutes every time you need to know, or more likely, the value in not creating a bug because you didn't take the time to find them all or you missed one.

Explicit Export Specifications

Along with explicit dependencies, modules have an explicit mechanism to define what they export. We can quickly read a module and know what it provides by looking at the return statement, instead of wondering what globals to access. define makes explicit that it is providing an interface to consumers.

define('math', ['dependencyA', 'dependencyB'], function(dependencyA, dependencyB) {
    // horrible things to local scope
    return {
            min: min,
            max: max,
            average: average
    };
});

Avoid Script Tag Soup

<script src="scripts/utils.js"></script>
<script src="scripts/numbers.js"></script>
<script src="scripts/common.js"></script>
<script src="scripts/formatting.js"></script>
<script src="scripts/thing.js"></script>
<script src="scripts/other.js"></script>
<script src="scripts/site.js"></script>

Did I type the path correctly? Are they ordered properly? Did someone change the order for a reason? Did they leave a comment, is it still applicable? What if I need to refactor one of them and change its position in the list?

With requirejs we can simply work on the dependency lists alone:

define('utils', [], function(){
    ...
});

define('numbers', [], function(){
    ...
});

define('common', ['utils', 'numbers'], function(){
    ...
});

define('formatting', [], function(){
    ...
});

define('thing', ['formatting'], function(){
    ...
});

...

This makes the dependencies explicit, as shown above, and doesn't hide meaning in an ordered list of script tags that is likely copy/pasted every time someone needs to load a set of dependencies. After all, who is going to manage that list and limit it to only the pieces they need, in every view of every application?

Confidence in Refactoring

Unlike in statically typed languages, I can't just build the project to see if I messed up a dependency.

If dependency management is automatic, and we have an explicit, consistent specification of what interface we export from a module, and we use a function scope to organize our locals, it becomes very easy to make changes within a module, or to split a module into parts. Our feeble human minds can focus on one module, but have a very difficult time taking into consideration the entire scope of the application. It's not that you can't practice this with function scopes alone, but it becomes a requirement with requirejs.

Shines a Light on Circular Dependencies and Intent

It's very easy to create circular dependencies in javascript. Refactoring non-module code to module code lets those issues bubble up to the dependency list and makes them apparent. It also helps other developers see how modules were intended to be used together, so they don't make assumptions and violate internal dependencies. You simply cannot see this stuff without a modular approach.

Simplified Testing

This ties back to script tag soup. When we set up tests we have to worry about loading scripts, even if it's just the module we are testing. We have three choices, take your pick:

  • Copy/paste a shared list between test fixtures, hurting test speed and potentially breaking tests when updating this list.
  • Manually create this list for each test fixture by manually scanning for dependencies (see above).
  • Use requirejs and forget about it

Simplifies Sharing Code

We have lots of handy JS we create, but sometimes we don't share it because it's too difficult to know what it needs. With requirejs, it's easy to see what needs to go with a piece of code, to use it elsewhere.

Works Client Side

Mobile and other solutions don't always end up using a server side technology, and even when they do, it varies, so a module framework on the client is a huge benefit, especially for organizing complex mobile applications.

Optimized Build

If desired, you can combine and minify your modules into one script for an optimized experience. The great part is you can use requirejs in dev without this and only add it as a deploy step, thus alleviating the need to build every time you make a change in dev.

Other benefits

  • Only load what you need, on demand and asynchronously!
  • Lazy loaded scripts
  • Although native module support is coming in ES Harmony:

    unless someone can get Microsoft to adopt rapid IE updates, even for old IEs, AMD will be around for a long time - James Burke

  • Consistent approach to mocking modules when testing, via stubbing.

Articles against modules

Articles supporting modules

12.12.2012

Consistent Formatting via Automatic Tools

Writing is a means to convey information. Most languages, human or code, have a great deal of expressiveness built in by default. Format is expressive too. When format changes, it draws our mental attention to it, thus we should use it when we want to cause a shift in attention, like when a direct quote is italicized, or when we begin a new thought with a new paragraph. However, if we are reckless with our formatting, we can easily confuse readers. If we take the time to keep a consistent format and only change when we intend to, we are much more likely to convey our message to the reader.

Be consistent

It's impossible for the world, let alone a single person, to agree on a specific format. However, within a context, I think most would at least agree to be consistent. I would actually argue that most of us don't care about the specifics so long as it's easy to be consistent. The context is subjective, some examples might be a single file, a series of blog posts, or a particular project at work. The context is actually part of the specifics, so be consistent with it too!

Automate consistency

Manual enforcement is a disaster waiting to happen and it's a waste of time. There are lots of tools to help detect inconsistencies and many to help automatically fix them. Even the most basic word processor can determine if you left out the double space between sentences, and add it if desired. If paragraphs are indented, this can be automated too.

Computer languages

When starting a project, make sure you have an IDE or a tool to format your code. Here are some things to help evaluate a particular tool:

  • Is it integrated with how you write your code?
  • Can you easily execute (shortcut key) the cleanup on the current file(s) or a text selection?
  • Can you version the configuration of specific rules within a context (line wrapping, spacing, indentation)?
    • This allows others to automatically work in a new context, without needing to know the specifics.
    • WebStorm is a fantastic example of an IDE where these settings can be shared with the team.
    • The specifics of format are an artifact of the project and evolve over time, so you should version them if you want to understand historical formats.
  • Are all of the languages covered?
    • Often, I see people clean up the primary language (c#) but not html and javascript.

Case Study: Blogging

As I was writing this, I realized how crappy it is to format this blog post by hand. So I moved it to markdown, where I can focus on the content and not the presentation, and where there are specific conventions already established to automatically format the content. For now I'm converting that to html to post to my blog; I would like to automate that step and wire it into a github repository so it's easy for me to blog about new things and not worry about the publishing step.

12.11.2012

KISS & AJAX Deletes

I run across this type of code often (ASP.Net MVC):

[HttpPost]
public string Delete(ObjectId id)
{
    var record = _Database.Get<Record>(id);
    if (record == null)
    {
        return "No matching record!";
    }
    _Database.Remove<Record>(id);
    return "Deleted record!";
}

If the user is interested in deleting the record, does it matter if someone else beat them to it?

[HttpPost]
public string Delete(ObjectId id)
{
    _Database.Remove<Record>(id);
    return "Deleted record!";
}

Isn't the HTTP 200 OK status code enough to indicate success? The user or the software knows the context of the request it made:

[HttpPost]
public void Delete(ObjectId id)
{
    _Database.Remove<Record>(id);
}
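
If you prefer the response to be explicit that there is no body, a variation (assuming ASP.Net MVC 3 or later, where HttpStatusCodeResult is available) is to return 204 No Content; this is just a sketch of the same idea:

[HttpPost]
public ActionResult Delete(ObjectId id)
{
    _Database.Remove<Record>(id);
    return new HttpStatusCodeResult(204); // No Content
}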

These are simple things to think about when reviewing code for simplicity; one line of code is much easier to maintain than seven.

7.23.2010

What we are doing with HtmlTags Part 2 : Form Fields

Most simple forms we write, especially in LOB applications, are repetitive sections of inputs and/or displays. Take a look at an example from the MVC Music Store sample, the album editor template. Notice anything repetitive?

<%@ Import Namespace="MvcMusicStore"%>

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MvcMusicStore.Models.Album>" %>

<script src="/Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
<script src="/Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>

<p>
    <%: Html.LabelFor(model => model.Title)%>
    <%: Html.TextBoxFor(model => model.Title)%>
    <%: Html.ValidationMessageFor(model => model.Title)%>
</p>
<p>
    <%: Html.LabelFor(model => model.Price)%>
    <%: Html.TextBoxFor(model => model.Price)%>
    <%: Html.ValidationMessageFor(model => model.Price)%>
</p>
<p>
    <%: Html.LabelFor(model => model.AlbumArtUrl)%>
    <%: Html.TextBoxFor(model => model.AlbumArtUrl)%>
    <%: Html.ValidationMessageFor(model => model.AlbumArtUrl)%>
</p>
<p>
    <%: Html.LabelFor(model => model.Artist)%>
    <%: Html.DropDownList("ArtistId", new SelectList(ViewData["Artists"] as IEnumerable, "ArtistId", "Name", Model.ArtistId))%>
</p>
<p>
    <%: Html.LabelFor(model => model.Genre)%>
    <%: Html.DropDownList("GenreId", new SelectList(ViewData["Genres"] as IEnumerable, "GenreId", "Name", Model.GenreId))%>
</p>

It didn’t take me long to notice that the duplication was driving me absolutely crazy! Yet, the community is hailing these template helpers as the next best thing since web forms, and for the life of me I have no idea why! Why stop at the abstraction of DisplayFor, InputFor and LabelFor? Our views were suffering the same problem, even with a Spark and HtmlTags spin:

<div class="fields">
  !{Html.LabelFor(m => m.Quantity).AddClass("label")}
  !{Html.InputFor(m => m.Quantity).AddClass("field")}
</div>

<div class="fields">
  !{Html.LabelFor(m => m.Price).AddClass("label")}
  !{Html.InputFor(m => m.Price).AddClass("field")}
</div>

Template Convention

Both frameworks (HtmlTags or MVC2 Templates) can be extended to support the abstraction we produced, but HtmlTags is geared to do this in a much cleaner and flexible fashion. We created the idea of an edit template that includes the label and input (and validation if you so desire), all of which gets nested in some container tag. We rolled all of this up into two conventions: EditTemplateFor and DisplayTemplateFor. The difference being whether or not the field on the form is editable. I’ll admit that we could probably roll this up further into a TemplateFor convention that has configurable builders much like DisplayFor/InputFor/LabelFor in HtmlTags, but right now these are static conventions in that they don’t have hot swappable builders. That is an effort for another day :)

So here is what the views would look like now (sigh of relief):

<%: Html.EditTemplateFor(m => m.Title) %>
<%: Html.EditTemplateFor(m => m.Price) %>
<%: Html.EditTemplateFor(m => m.AlbumArtUrl)%>
<%: Html.EditTemplateFor(m => m.Artist)%>
<%: Html.EditTemplateFor(m => m.Genre)%>

!{Html.EditTemplateFor(m => m.Quantity)}
!{Html.EditTemplateFor(m => m.Price)}

How it works

The following is the EditTemplateFor convention; the display version only varies by calling DisplayFor instead of InputFor. We simply build our label, adding a class that is statically configurable. Then we take a list of fields, in the event we have more than one falling under a single label, and wrap each in a div with the class "field". Finally we put the label and the fields into a div with the class "fields".

public static class TemplateHelpers
{
  public static string FieldDivClass = "field";
  public static string FieldsDivClass = "fields";
  public static string LabelClass = "label";

  public static HtmlTag EditTemplateFor<T>(this T model, params Expression<Func<T, object>>[] fieldSelectors) where T : class
  {
    Contract.Requires(fieldSelectors.Any());

    var label = model.LabelFor(fieldSelectors[0]).AddClass(LabelClass);

    var fields = fieldSelectors
      .Select(f => Tags.Div
              .AddClass(FieldDivClass)
              .Nest(model.InputFor(f)));

    return Tags.Div
      .AddClass(FieldsDivClass)
      .Nest(label)
      .Nest(fields.ToArray());      
  }

  public static HtmlTag EditTemplateFor<T>(this HtmlHelper<T> helper,
                       params Expression<Func<T, object>>[] fieldSelectors) where T : class
  {
    return EditTemplateFor(helper.ViewData.Model, fieldSelectors);
  }
}
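
The display variant mentioned above only differs by calling DisplayFor instead of InputFor; as a rough sketch (not the verbatim convention, but it would sit alongside EditTemplateFor in TemplateHelpers), it looks like this:

  public static HtmlTag DisplayTemplateFor<T>(this T model, params Expression<Func<T, object>>[] fieldSelectors) where T : class
  {
    Contract.Requires(fieldSelectors.Any());

    var label = model.LabelFor(fieldSelectors[0]).AddClass(LabelClass);

    // Same structure as EditTemplateFor, swapping InputFor for DisplayFor.
    var fields = fieldSelectors
      .Select(f => Tags.Div
              .AddClass(FieldDivClass)
              .Nest(model.DisplayFor(f)));

    return Tags.Div
      .AddClass(FieldsDivClass)
      .Nest(label)
      .Nest(fields.ToArray());
  }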

Here is an alteration of the above to work with the MVC Music Store album editor:

public static class TemplateHelpers
{
  public static string FieldDivClass = "field";
  public static string FieldsDivClass = "fields";
  public static string LabelClass = "label";
  public static string ValidationMessageClass = "validation-message";

  public static HtmlTag EditTemplateFor<T>(this HtmlHelper helper, Expression<Func<T, object>> field) where T : class
  {
    var label = helper.LabelFor(field);
    var input = helper.InputFor(field);
    var validation = Tags.Span.AddClass(ValidationMessageClass);

    return Tags.Paragraph
      .Nest(label, input, validation);
  }
}

Reproducing this with MVC2 Templates

 public static MvcHtmlString EditTemplateFor<T>(this HtmlHelper<T> helper, Expression<Func<T, object>> field) where T : class
  {
    var label = helper.LabelFor(field);
    var input = helper.EditorFor(field);
    var validation = helper.ValidationMessageFor(field);

    var template = String.Format("<p>{0}{1}{2}</p>", label, input, validation);
    return MvcHtmlString.Create(template);
  }

  public static MvcHtmlString EditTemplateFor2<T>(this HtmlHelper<T> helper, Expression<Func<T, object>> field) where T : class
  {
    var label = helper.LabelFor(field);
    var input = helper.EditorFor(field);

    var template = String.Format("<div class=\"fields\">{0}<div class=\"field\">{1}</div></div>", label, input);
    return MvcHtmlString.Create(template);
  }

The Verdict

This was my best attempt at producing the same behavior with ASP.Net MVC templates, and it's already run into some serious issues.

  1. I have to work with strings.
    1. Html is not a string, so why should I have to work with it as such?
    2. The MVC2 templates return an MvcHtmlString, unlike HtmlTags which returns an HtmlTag.
    3. I could improve this with an HtmlTextWriter but that’s not part of the template abstraction and is yet another burden to include. HtmlTags supports a model OOB!
  2. I cannot modify my label to add the class “label”. Instead, I would have to wrap the label with another tag to get a class applied or I would have to alter my label template and then that class would be applied anytime I used a label in my entire application, not just in a form field.
  3. The HtmlTags version is much more readable, maintainable and representative of the structure of the html.
  4. I cannot plug this into the scaffolding in ASP.Net MVC with EditorForModel. This is the failure in their approach to scaffolding: since it’s built on an unchangeable model with no configuration or modification points, it only works for the narrow out-of-the-box case (smells of web forms days if you ask me). In a subsequent blog post I’ll show how we can continue to build up, instead of top down, to produce flexible scaffolding on top of HtmlTags!
  5. I can literally “unit” test the HtmlTags version (if it added value to test it) without spinning up a view engine to render the result, and I don’t have to parse html!

-Wes

7.23.2010

What we are doing with HtmlTags Part 1 : Why HtmlTags?

Back in January, Jeremy Miller posted a nice article on HtmlTags: Shrink your Views with FubuMVC Html Conventions. We were immediately in love with the idea and have spent several months adapting the conventions to work with our ASP.Net MVC applications. I was having a conversation with Ryan recently, reflecting on how far we’ve come and how we had no vision of that when we first read that article. I want to share some of that, so I will be working on a series of blog posts to show “What we are doing with HtmlTags.” But first: why?

A lot of buzz has been generated around reusability and views, particularly html. Html, being a highly compositional language but lacking any ability for reuse, has led to copy paste hell. Several different philosophies are employed in View Engines (VE) to try to address this issue, and recently MVC2 was released with its templated helpers. The problem with reusing html directly is that it lacks the benefit of separating the concern of what we need from the html (building it) and generating it: classically, the difference between “What” I want versus “How” I get it. The power of the Linq abstraction shows the benefits of separating “What” versus “How.” HtmlTags is another model based approach that separates the “What” and “How.” It goes even further by allowing the “How” to be configured via conventions. Here is a list of the benefits this separation has brought to our team:

  1. View engine agnostic. HtmlTags is not coupled to any particular VE. It can even be returned from a controller action and rendered directly to the response!
  2. Testability without spinning up a costly integration environment; I can write assertions directly against the model!
    1. Html is a language that lacks the ability to query it in any reasonable fashion.
    2. We integrate security as an aspect to our conventions that build our Html. Now, we can write a simple test to ensure that the security shows/hides elements in the model by simply asserting the lack of their presence! Try parsing html to do this after you spin up an integration test to access the view output!
  3. Translation: HtmlTags is a model that is easily traversed. We leverage this to transform the html model into an export model that can then be translated to csv, pdf and other formats. Now we get exports for free on any view result simply by writing the guts of the view in HtmlTags!
  4. The ability to translate from our data model to the html model directly, in c#, instead of using a view engine with a mish mash of concerns (html & code).
    1. This avoids reshaping of our data to get it to easily work with our view engine. Instead we can translate directly, on our view models, and have the full power of c# at our finger tips.
    2. This also helps with refactoring, where the story is still very poor with VE content.
    3. Simply put: avoids the html / code disparity
  5. Composable conventions: we build upon the HtmlTags DisplayFor, LabelFor and InputFor to build up higher level concepts in our application, like a form field (composed of fields for a label, input, validation etc). This gives us a one-stop spot to restructure the html for every form in our application, no more shotgun surgery on views!
    1. We also leverage concepts around menus, buttons, forms etc.
  6. Reusable across projects: we don’t have to try to copy/paste a bunch of templates, we simply drop in a dll with default conventions and configure from there.
  7. Avoid fat fingering html tag names, attributes etc. We don’t have to think as much about html anymore, amazing concept! Templated html still has issues with fat fingering as <div> is all over, whereas we have a nice Tags.Div static entry point to generate a div tag. We do this for all tags we use. This is the kind of reuse between templates that is rather lacking from MVC2 templates without bee sting hell. This is a great example of leveraging a statically typed language to fail fast: we don’t have to render a view to find out if we fat fingered some html. All html tags, attributes etc are kept as constants collections.
  8. Avoid hard coded css classes: we make our class names configurable and put them right with the builder so any project can override them. Try doing that without a bunch of bee stings in a templated helper and a nightmare of classes coupled to the template to provide the constants in code. This reminds me of code behind hell from web forms days. Then try reusing that across projects.
    1. I’ve seen some really scary stuff in samples with templated helpers where people try to access attributes, property types, etc. and conditionally render sections. The only alternative is to put that code into helpers that are separated from the view, which leads to a coupling mess when trying to reuse the templates across projects.

The next set of posts will cover examples of how we are using HtmlTags and how it’s paying dividends.

-Wes

2.25.2010

Sexy Windsor Registrations

After working with FluentNHibernate and seeing examples of registries in StructureMap, I started craving the same thing for my registrations with Windsor. Our registrations often look like the following:

public static void Register(IWindsorContainer container)
{
  container.Register(Component.For<IFoo>().ImplementedBy<Foo>());
  container.AddComponent<IFoo, Foo>();
  ...
}

There are a few things I don’t like about this approach:

  1. Passing a container around through static methods is a hack.
  2. Ceremony of “container.” calls clutter the registry and impede readability.
  3. Why do I need so much ceremony to get to Component.For? “container.Register(Component.For” is tedious!

Note: this code is available on github. The registries are tested with nunit, so you can drop in whatever version of windsor and verify it works.

IWindsorInstaller to the static rescue

A registry needs a uniform entry point. Enter IWindsorInstaller, a rather undocumented feature that deserves more attention.

 public interface IWindsorInstaller
  {
    void Install(IWindsorContainer container, IConfigurationStore store);
  }

  // Container entry point
  container.Install(IWindsorInstaller installer);

The container has an install method that takes an instance of IWindsorInstaller. See this post for more details about IWindsorInstaller.

Adapting to Component.For

Now to fix readability, what if we could write registrations like this:

  public class SampleRegistry : RegistryBase
  {
    public SampleRegistry()
    {
      For<IFoo>().ImplementedBy<Foo>().LifeStyle.Singleton();
      For<IFoo>().ImplementedBy<Foo>();
    }
  }

To pull this off, the RegistryBase keeps a collection of registrations and adapts to the Component.For entry point to capture each registration before returning it. These registrations are stored in a Steps collection; more on why this isn't called Registrations later.

  public class RegistryBase : IWindsorInstaller
  {
    ...

    public ComponentRegistration<S> For<S>()
    {
      var registration = Component.For<S>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F>()
    {
      var registration = Component.For<S, F>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2>()
    {
      var registration = Component.For<S, F1, F2>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2, F3>()
    {
      var registration = Component.For<S, F1, F2, F3>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2, F3, F4>()
    {
      var registration = Component.For<S, F1, F2, F3, F4>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration For(params Type[] types)
    {
      var registration = Component.For(types);
      Steps.Add(registration);
      return registration;
    }
    ...
  }

Installation

RegistryBase implements the IWindsorInstaller.Install method to add registrations to the container.

  public virtual void Install(IWindsorContainer container, IConfigurationStore store)
  {
    Steps.ForEach(s => container.Register(s));
  }

The application bootstrapper adds registries. This example assumes all registries are loaded into the container, though they probably would never have dependencies (chicken/egg paradox). I just like to abuse my container :)

  private static void LoadRegistries(IWindsorContainer container)
  {
    var registries = container.ResolveAll<IWindsorInstaller>();
    registries.ForEach(r => container.Install(r));
  }

Other adaptations

The registry also adapts to a few other entry points and captures their registrations.

  1. container.AddComponent
  2. container.AddFacility
  3. AllTypes.FromAssemblyNamed
  4. AllTypes.FromAssembly
  5. AllTypes.FromAssemblyContaining

Adapting to the unknown

In the event there is an entry point missing from the registry, it has a Custom method that takes an Action. This allows for access to the container as usual. This is captured as a deferred action that won't be executed until the registry is installed in the container. Hence the name "Steps" for registrations and custom actions.
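
As a rough sketch (the member names here are illustrative, not the exact implementation), capturing a custom action as a deferred step might look like this:

    // Illustrative sketch: capture the action now, run it against the container later.
    private readonly List<Action<IWindsorContainer>> CustomSteps = new List<Action<IWindsorContainer>>();

    public void Custom(Action<IWindsorContainer> action)
    {
      CustomSteps.Add(action); // deferred until Install is called
    }

    // Install would then also replay the captured actions:
    //   CustomSteps.ForEach(step => step(container));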

Show me the money

Here is a sample of different usages of the registry; of course, the entire fluent registration API is at your fingertips.

  public class SampleRegistry : RegistryBase
  {
    public SampleRegistry()
    {
      // Register a singleton
      For<IFoo>().ImplementedBy<Foo>().LifeStyle.Singleton(); // Extension methods to call property.

      // Register a single item
      For<IFoo>().ImplementedBy<Foo>();
      For(typeof (IFoo)).ImplementedBy<Foo>();
      AddComponent<IFoo, Foo>();

      // Custom actions if you want to access the original container API, with deferred installation via lambda expressions
      Custom(c => c.AddComponent<IFoo, Foo>());
      Custom(c => c.Register(Component.For<IFoo>().ImplementedBy<Foo>()));

      // Scan for types
      FromAssemblyContaining<SampleRegistry>().BasedOn<IFoo>();
      FromAssemblyContaining(typeof (SampleRegistry)).BasedOn<IFoo>();
      FromAssemblyNamed("GotFour.Windsor.Tests").BasedOn<IFoo>();
      FromAssembly(typeof (SampleRegistry).Assembly).BasedOn<IFoo>();

      // Forwarding types
      For<IFoo, Foo>().ImplementedBy<Foo>();
      For<IFoo, Foo, FooBar>().ImplementedBy<FooBar>();
      For<IFoo, Foo, FooBar, FooBar2>().ImplementedBy<FooBar2>();
      For<IFoo, Foo, FooBar, FooBar2, FooBar3>().ImplementedBy<FooBar3>();

      // Adding facilities
      AddFacility<StartableFacility>();
    }
  }

Notes: I have tested capturing registrations for all of the above scenarios, but I suppose there might be some deep dark portion of the registration API that might not work. This would happen if something creates a brand new registration, independent of the original captured one. I have yet to run into this; the design of the API is pretty rock solid as a builder that collects state. I left out AllTypes.Of.From since Of doesn't return a registration; it is simply a call to AllTypes.FromAssemblyXyz().BasedOn() reversed and really isn't very helpful.

ExtendedRegistryBase : RegistryBase

I added an extended set of registration points with ExtendedRegistryBase. This adds another layer of new fluent registrations for common scenarios, often involving convention based registration :) If you have additions, please add them in the comments and I will get them added.

  public class SampleExtendedRegistry : ExtendedRegistryBase
  {
    public SampleExtendedRegistry()
    {
      // Same as scanning above in SampleRegistry but much cleaner!
      ScanMyAssemblyFor<IFoo>();

      // Scan for all services of the pattern Service : IService
      ScanMyAssembly(Conventions.FirstInterfaceIsIName);

      // Scan for all services of the pattern Whatever : IService (register with first interface)
      ScanMyAssembly(Conventions.FirstInterface);

      // Next we could use some attributes to discover services, to register imports / exports :)
    }
  }
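
As an illustration, here is my guess at how ScanMyAssemblyFor<T> could be built on the base registry's entry points (a sketch only, not the actual implementation on github):

  public void ScanMyAssemblyFor<T>()
  {
    // "My assembly" here means the assembly containing the concrete registry.
    FromAssembly(GetType().Assembly).BasedOn<T>();
  }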

This registry class helps avoid the static calls to registries from my applications. Now I can scan for registries of a known type and install them into the container. The registries are much more readable with the ceremony gone. I know there was talk of adding something like this to the next version of Windsor/MicroKernel; I hope this is the direction that effort is headed. In the meantime, enjoy this as a fix for the cravings for a cleaner registry. I typically add one of these per project and let it control registration within that layer.



-Wes

2.5.2010

IJoinedFilter Part 3: IFilterPriority

Note: This article is a continuation of the series on IJoinedFilter:

  1. IJoinedFilter
  2. AutoMapFilter meet IJoinedFilter

In-lining mapping and injection aspects – yuck!

After implementing several aspects of controller actions as composable filters, it became apparent that controlling the order would be important. One of the latest additions is a filter to build up a view model property with data aspects. An example would be a select list for states on a person edit view. Normally, the states would be fetched and set on the model in the controller action:

  public class PriorityController : Controller
  {
    public ActionResult Injected()
    {
      var person = new Person
                   {
                    Name = "John Doe",
                    BirthDate = new DateTime(1980, 1, 1),
                    State = StatesService.GetStates()[1]
                   };
      var personViewModel = new PersonViewModel(person);
      personViewModel.AllStates = StatesService.GetStates();
      return View(personViewModel);
    }
  }

  public class StatesService
  {
    public static IList<State> GetStates()
    {
      return new List<State>
             {
              new State {Name = "Nebraska", Id = 1},
              new State {Name = "Iowa", Id = 2},
              new State {Name = "Kansas", Id = 3}
             };
    }
  }

  public class Person
  {
    public string Name { get; set; }
    public DateTime BirthDate { get; set; }
    public State State { get; set; }
  }

  public class State
  {
    public int Id { get; set; }
    public string Name { get; set; }
  }

  public class PersonViewModel
  {
    public string Name { get; set; }
    public string BirthDate { get; set; }
    public string StateId { get; set; }
    public IList<State> AllStates { get; set; }
  }

Extracting the injection aspect

This leads to repetitive code every time a view model requires this data. Furthermore, we are forced to create the appropriate view model inside the controller action instead of relying on the mapping filter. To separate concerns we created a filter to find these data aspects and inject the values. Now, we simply add a property to the view model and the filter will take care of the rest!

 
  public class InjectStatesFilter : ViewModelInjectFilter
  {
    protected override Func<PropertyInfo,bool> InjectProperty()
    {
      return p => p.PropertyType == typeof (IList<State>);
    }

    protected override object WithValue()
    {
      return StatesService.GetStates();
    }
  }

  public abstract class ViewModelInjectFilter : IActionFilter
  {
    public virtual void OnActionExecuting(ActionExecutingContext filterContext)
    {
    }

    public virtual void OnActionExecuted(ActionExecutedContext filterContext)
    {
      var viewResult = filterContext.Result as ViewResultBase;
      if (viewResult == null || viewResult.ViewData.Model == null)
      {
        return;
      }
      var model = viewResult.ViewData.Model;
      var property = model.GetType().GetProperties().FirstOrDefault(InjectProperty());
      if (property == null)
      {
        return;
      }
      var value = property.GetValue(model, null);
      if (value == null)
      {
        property.SetValue(model, WithValue(), null);
      }
    }

    /// <summary>
    /// The criteria to use to find the property to inject.
    /// </summary>
    /// <returns></returns>
    protected abstract Func<PropertyInfo,bool> InjectProperty();

    /// <summary>
    /// The value to inject, only queried if a matching, null property is found.
    /// </summary>
    /// <returns></returns>
    protected abstract object WithValue();
  }

Going back to mapping and injecting filters with this new approach helps remove two aspects of duplication from the controller action:

 public class PriorityController : Controller
  {
    public ActionResult Injected()
    {
      var person = new Person
                   {
                    Name = "John Doe",
                    BirthDate = new DateTime(1980, 1, 1),
                    State = StatesService.GetStates()[1]
                   };

      return View(person);
    }
  }

IFilterPriority - Back to the aspect!

In the example, PersonViewModel can be built up with a list of states, if the mapping filter is executed first. To guarantee this, I have added an interface IFilterPriority with a method GetOrder() to the IJoinedFilter framework. It returns an integer representing the priority of the filter in the execution chain. I chose an integer to mirror FilterAttribute.Order in the MVC framework, which sadly was not made into a composable extensibility point.

 public interface IFilterPriority
  {
    int GetOrder();
  }

JoinedFilterLocator has been modified to add filters to the FilterInfo result based on this order. The actual execution is dependent on the MVC pipeline and follows these rules when using JoinedFilterLocator:

  • Action Filters
    • High->Low for OnActionExecuting
    • Low->High for OnActionExecuted
  • Authorization Filters - High->Low
  • Exception Filters - High->Low
  • Result Filters
    • High->Low for OnResultExecuting
    • High->Low for OnResultExecuted

 public virtual FilterInfo FindFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
  {
    var filters = new FilterInfo();
    var joinedFilters = JoinedFilters
      .Where(i => i.JoinsTo(controllerContext, actionDescriptor)).ToList();

    if (joinedFilters != null)
    {
      AddFilters(joinedFilters, filters.ActionFilters);
      AddFilters(joinedFilters, filters.ExceptionFilters);
      AddFilters(joinedFilters, filters.AuthorizationFilters);
      AddFilters(joinedFilters, filters.ResultFilters);
    }

    return filters;
  }

  private void AddFilters<T>(IEnumerable<IJoinedFilter> joinedFilters, IList<T> filters)
  {
    var orderedFilters = joinedFilters.OfType<T>()
      .OrderByDescending(f => f is IFilterPriority ? (f as IFilterPriority).GetOrder() : int.MaxValue)
      .ToList();

    orderedFilters.ForEach(filters.Add);
  }

Now, we can set the priority of our mapping filter to 1, ensuring its OnActionExecuted is executed first. All other filters in the JoinedFilter project are given an order of Int32.MaxValue by default. This might not be the best way to sort, so watch out for updates in the future. This seems to work well with action filters and exception filters.

 public class OnViewResult_IfModelDoesNotMatchViewModelThenMapWithAutoMapper :
    OnViewResult_ExecuteActionFilter<ReflectedAutoMapFilter>
  {
    public override int GetOrder()
    {
      return 1;
    }
  }

Prioritizing exception aspects

Exception filters are another good example where priority becomes important. I have created a sample set of exceptions NestedException inheriting from ExceptionBase. I added two methods to the PriorityController sample:

 public class PriorityController : Controller
  {
    ...
    public void NestedException()
    {
      throw new NestedException();
    }

    public void ExceptionBase()
    {
      throw new ExceptionBase();
    }
  }

  public class ExceptionBase : Exception
  {
  }

  public class NestedException : ExceptionBase
  {
  }

What if we want one handler to catch specific exceptions of type NestedException and another to handle the more general exception ExceptionBase? Without priority, we are at the mercy of JoinedFilterLocator and whatever mechanism it relies on to acquire JoinedFilters. The sample below implements two handlers to send users to an error view with a message reporting what exception handler dealt with the error:

 public class NestedExceptionHandler : ExceptionHandler<NestedException>
  {
    public override int GetOrder()
    {
      return 1;
    }
  }

  public class ExceptionBaseHandler : ExceptionHandler<ExceptionBase>
  {
    public override int GetOrder()
    {
      return 0;
    }
  }

  public abstract class ExceptionHandler<T> : IExceptionFilter, IJoinedFilter, IFilterPriority
    where T : Exception
  {
    public void OnException(ExceptionContext filterContext)
    {
      var exception = filterContext.Exception as T;
      if (exception == null || filterContext.ExceptionHandled)
      {
        return;
      }

      filterContext.Result = ErrorView();
      filterContext.ExceptionHandled = true;
    }

    private ViewResult ErrorView()
    {
      var result = new ViewResult
                   {
                    ViewName = "Error"
                   };
      result.ViewData["Message"] = string.Format("{0} exception handler caught this", typeof (T));
      return result;
    }

    public bool JoinsTo(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
    {
      return true;
    }

    public abstract int GetOrder();
  }

Run the sample yourself and try changing the priority to see how the ExceptionBaseHandler and NestedExceptionHandler work. Notice how this allows control over exception handlers from most specific to most general, just like with try/catch statements.

As usual, all code is available at my Google code repository for IJoinedFilter; this set of changes was wrapped up with commit 56. Please leave me some feedback about any enhancements. Specifically, I am looking for better ideas to deal with priority of filters; the integer thing kind of bothers me :).

Update: I am thinking that another nice way to do priority would be to use an explicit configuration mechanism, a lot like FubuMvc has so the filters being used are explicit and the order listed is the order applied. Thoughts?


-Wes

12.11.2009

IJoinedFilter

We have been using ASP.Net MVC for a few projects at work and the standard set of cross cutting concerns are popping up, as usual. A lot of samples exist to create filters for the scenarios (logging, exception handling, mapping, output transformations etc). We have been using many of these and they are adding a lot of excitement to the development process.

However, I smell a bit of a problem with all these attributed filters. The smell of configuration over convention. Configuration isn’t always a bad thing, it’s actually pretty useful when you have niche situations, but when you have repetitive, cross cutting concerns that need the application of the same filters, it gets to be a burden to remember what to apply. The MVC framework is designed with just the right amount of convention over configuration in several other areas, I felt like it was time to do the same for applying filters.

Stealing a page out of the playbook of interception, I felt it was time to design filters to join to actions/controllers when certain criteria are met. Filters themselves are interceptors, so naturally, a filter could define what it joins to, its join point (from the interceptor world). This way we have filters apply themselves to actions instead of vice versa! This would be great, for example, to create standard sets of exception filters and have them attached to all actions: no need for someone to decorate every controller and no chance they would forget! The same is true for any type of filter (action, result, authorization or exception).

Note: I also see a need for some SRP with join points and the interceptors and will look into taking existing filters and applying a join point to attach them, instead of the filter needing to define its join points.

IJoinedFilter

So enough with the motivation, and on to how this works. IJoinedFilter is an interface to define how a filter should apply to an action and the context:

public interface IJoinedFilter
{
  bool JoinsTo(ControllerContext controllerContext, ActionDescriptor actionDescriptor);
}

The method JoinsTo determines if the filter should apply to the current action being executed. Both the controller context and action descriptor are passed to allow for coarse to very fine grained application. Before I explain how to wire up the infrastructure, let's look at an example of how this can be used. Let's say I have a simple controller with two actions, About and World:

public class HomeController : Controller
{
  public ActionResult About()
  {
    var data = new
               {
                message = "about"
               };

    return Json(data);
  }

  public ActionResult World()
  {
    var data = new
               {
                message = "world"
               };

    return Json(data);
  }
}

To keep this simple, they both just return json results. The output right now looks like the following:

[screenshots: JSON output of the About and World actions]


Now, let’s say I want to use a filter to change my output on the World action to return “Hello world”. Normally I would create a new ActionFilterAttribute and manually apply this attribute:

public class HelloWorldFilter : ActionFilterAttribute
{
  public override void OnActionExecuted(ActionExecutedContext filterContext)
  {
    var result = new JsonResult
    {
      Data = new
      {
        message = "Hello World!"
      }
    };

    filterContext.Result = result;
  }
}

public class HomeController : Controller
{
  public ActionResult About()
  {
    var data = new
               {
                message = "about"
               };

    return Json(data);
  }

  [HelloWorldFilter]
  public ActionResult World()
  {
    var data = new
               {
                message = "world"
               };

    return Json(data);
  }
}


But now, with joined filters I can avoid the step of applying the attribute! Instead I can simply implement IActionFilter with IJoinedFilter to get the same result:

public class HelloWorldFilter : IActionFilter, IJoinedFilter
{
  public bool JoinsTo(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
  {
    return actionDescriptor.ActionName == "World";
  }

  public void OnActionExecuting(ActionExecutingContext filterContext)
  {
  }

  public void OnActionExecuted(ActionExecutedContext filterContext)
  {
    var result = new JsonResult
    {
      Data = new
      {
        message = "Hello World!"
      }
    };

    filterContext.Result = result;
  }
}

JoinsTo is set to only apply to actions with a name of "World". Now, I can replace my World action with a result of "Hello World!" by simply joining to the specific action I want (convention) instead of attributing it (configuration). This is a rather “cheesy” example but I am working on a subsequent blog post to show a few really cool, practical examples!

Infrastructure

Now for how the magic happens. I am going to need to find a spot to discover the joined filters and apply them to an action. The best spot to do this is to create a custom action invoker. I started with the WindsorControllerFactory in MVCContrib, so I also get IoC while I am at it :). This requires overriding the GetControllerInstance. If the container has an IActionInvoker registered, then resolve it and use it instead of the default action invoker.

public class ExtendedWindsorControllerFactory : WindsorControllerFactory
{
  public ExtendedWindsorControllerFactory(IWindsorContainer container) : base(container)
  {
    Container = container;
  }

  protected IWindsorContainer Container { get; set; }

  protected override IController GetControllerInstance(Type controllerType)
  {
    var controller = base.GetControllerInstance(controllerType) as Controller;

    if (Container.Kernel.HasComponent(typeof (IActionInvoker)))
    {
      controller.ActionInvoker = Container.Resolve<IActionInvoker>();
    }

    return controller;
  }
}

Now that I can inject an IActionInvoker, it is time to make one that can find my dynamic filters! I am calling this a LocatorActionInvoker as it resolves a list of IFilterLocator. Each one of these could find filters in its own way; this is just for SRP. The invoker gives each locator the controller context and the action descriptor and asks it to return filters (FilterInfo). It merges the results of each IFilterLocator into the set of filters to use!

public interface IFilterLocator
{
  FilterInfo FindFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor);
}

public class LocatorActionInvoker : ControllerActionInvoker
{
  protected IWindsorContainer Container;

  public LocatorActionInvoker(IWindsorContainer container)
  {
    Container = container;
  }

  protected override FilterInfo GetFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
  {
    var filters = base.GetFilters(controllerContext, actionDescriptor);

    var filterFinders = Container.ResolveAll<IFilterLocator>();

    var foundFilters = filterFinders.Select(f => f.FindFilters(controllerContext, actionDescriptor));

    foundFilters.ForEach(f => AddFilters(filters, f));

    return filters;
  }

  private void AddFilters(FilterInfo filters, FilterInfo mergeFilters)
  {
    mergeFilters.ActionFilters.ForEach(filters.ActionFilters.Add);
    mergeFilters.ExceptionFilters.ForEach(filters.ExceptionFilters.Add);
    mergeFilters.AuthorizationFilters.ForEach(filters.AuthorizationFilters.Add);
    mergeFilters.ResultFilters.ForEach(filters.ResultFilters.Add);
  }
}

Now I need to implement IFilterLocator to find my IJoinedFilters:

public class JoinedFilterLocator : IFilterLocator
{
  private IWindsorContainer Container;

  public JoinedFilterLocator(IWindsorContainer container)
  {
    Container = container;
  }

  public FilterInfo FindFilters(ControllerContext controllerContext, ActionDescriptor actionDescriptor)
  {
    var filters = new FilterInfo();

    var joinedFilters = Container.ResolveAll<IJoinedFilter>()
      .Where(i => i.JoinsTo(controllerContext, actionDescriptor)).ToList();

    if (joinedFilters != null)
    {
      joinedFilters.OfType<IActionFilter>().ForEach(filters.ActionFilters.Add);
      joinedFilters.OfType<IExceptionFilter>().ForEach(filters.ExceptionFilters.Add);
      joinedFilters.OfType<IAuthorizationFilter>().ForEach(filters.AuthorizationFilters.Add);
      joinedFilters.OfType<IResultFilter>().ForEach(filters.ResultFilters.Add);
    }

    return filters;
  }
}

JoinedFilterLocator uses the container to resolve a list of IJoinedFilter. It filters this list for only filters that apply to the given ActionDescriptor and ControllerContext using the IJoinedFilter.JoinsTo method. If any filters match, it returns them in a new FilterInfo instance, which will be merged in LocatorActionInvoker with the rest of the filters.

Registration

That is it for custom components for the infrastructure of joined filters. All that is left is to configure the application to use the components and to register my components. First I add a container to my application:

public class MvcApplication : HttpApplication
{
  public static IWindsorContainer Container;

Then, in the startup of the application I initialize my container and register my components:

protected void Application_Start()
{
  RegisterRoutes(RouteTable.Routes);
  if (InitializeContainer())
  {
    RegisterControllers();
    SetControllerFactory();
    SetActionInvoker();
    RegisterJoinedActionFilters();
  }
}

InitializeContainer sets up a Windsor container for the application:

private bool InitializeContainer()
{
  lock (_lock)
  {
    if (Container != null)
    {
      return false;
    }
    Container = new WindsorContainer();
    Container.Register(
      Component.For<IWindsorContainer>()
        .Instance(Container)
        .LifeStyle.Singleton
      );
  }
  return true;
}

If the container is being initialized for the first time, then I register components. RegisterControllers just scans for controllers. SetControllerFactory registers ExtendedWindsorControllerFactory, resolves and sets it as the controller factory for MVC to use.

private void SetControllerFactory()
{
  Container.Register(Component
                      .For<IControllerFactory>()
                      .ImplementedBy<ExtendedWindsorControllerFactory>()
                      .LifeStyle.Transient);

  var factory = Container.Resolve<IControllerFactory>();

  ControllerBuilder.Current.SetControllerFactory(factory);
}

SetActionInvoker registers my LocatorActionInvoker:

private void SetActionInvoker()
{
  Container.Register(Component.For<IActionInvoker>().ImplementedBy<LocatorActionInvoker>()
                      .LifeStyle.Transient);
}

Finally, RegisterJoinedActionFilters registers my JoinedFilterLocator and scans for IJoinedFilter types. You may want to change how this scans based on your project structure; in my simple example I have filters in my MVC app (bad practice but great for samples :)).

private void RegisterJoinedActionFilters()
{
  Container.Register(
    Component.For<IFilterLocator>().ImplementedBy<JoinedFilterLocator>().LifeStyle.Transient
    );

  Container.Register(
    AllTypes.Of<IJoinedFilter>()
      .FromAssembly(Assembly.GetExecutingAssembly())
      .ConfigureFor<IJoinedFilter>(c => c.LifeStyle.Transient)
    );
}

That is all for now. This is a pretty easy way to set up joined filters and start creating convention based filters instead of manually configuring them via attributes!

Download Sample

If you want to download the sample and try it out, feel free. You probably want an in-browser json viewer with this sample; I like using JSONView with Firefox.

Note: due to issues building the latest MVCContrib against MVC 1.0 and/or getting castle version mismatches to work and me being lazy, I simply copied the WindsorControllerFactory into my project for now, cheap yes, but hey my time isn’t unlimited :)

-Wes

11.25.2009

Remove<T>(this IList<T> source, Func<T,bool> selector) why not?

Maybe I am just crazy, but it seems like removing or deleting items from a collection is always an afterthought. Take IList for example, a list of items, with the ability to add and remove from it. We have a flurry of extension methods that are inherited from IEnumerable to add items, but it seems like no one thought it would be nice to beef up the remove method with some extension methods. Maybe I missed an extension namespace, maybe I am just crazy. How many times do we have to write the following bloated code just to remove something from a collection?

var ingredient = Ingredients.FirstOrDefault(i => i.FeedId == feed.Id);
Ingredients.Remove(ingredient);

Even worse is when we want to remove multiple items from a list based on some criteria. I have to add a foreach loop of some sort to remove each item one at a time. Oh, and deal with the always wonderful (System.InvalidOperationException: Collection was modified; enumeration operation may not execute). So, I have to remember to create a new list to avoid this dreadful mistake:

var ingredients = Ingredients.Where(i => i.FeedId == feed.Id).ToList();
foreach (var ingredient in ingredients)
{
  Ingredients.Remove(ingredient);
}

Of course, now that I have created a new list, ToList(), I can simplify this to:

var ingredients = Ingredients.Where(i => i.FeedId == feed.Id).ToList();
ingredients.ForEach(i=> Ingredients.Remove(i));

But that is still icky: all that code just to remove an item or a set of items? So enough complaining on my part; it's time to put up or shut up. This is what I want to do in code:

Ingredients.Remove(i => i.FeedId == feed.Id);

Simple, right? And to do this here is the simple extension method:

public static void Remove<T>(this IList<T> source, Func<T, bool> selector)
{
  if (source == null || selector == null)
  {
    return;
  }

  var itemsToRemove = source.Where(selector).ToList();
  itemsToRemove.ForEach(i => source.Remove(i));
}

Now, I no longer have to worry about all these things when I want to remove items based on searching my list:

  • Doing a separate search for the items I want to remove
  • Creating a new list, avoiding the pitfall of enumerating when removing and getting "Collection was modified" exceptions.
  • Iterating over those items and calling remove on each

Some purists out there would be mad that I just quietly allow the source/selector to be null and not throw an exception. That's how I chose to implement this; if you want different behavior, that's great. Just make sure you add this extension method to your arsenal! Maybe another RemoveSingle that throws an exception if it finds 0 or more than 1 item would be appropriate to add as well?
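
For what it's worth, a minimal sketch of that RemoveSingle idea (the name and exact behavior are just a suggestion):

public static void RemoveSingle<T>(this IList<T> source, Func<T, bool> selector)
{
  // Single throws if the selector matches zero items or more than one item.
  var item = source.Single(selector);
  source.Remove(item);
}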

Happy coding!

10.5.2009

OrderBy().Descending()

Just wanted to share a quick extension method. It's really simple, yes, but its power is in reducing lines of code. If you have ever wanted to apply an order by clause to a collection of items and conditionally do this based on a direction, you know that the only choices available are the separate methods OrderBy and OrderByDescending. It really is too bad, because the internal OrderedEnumerable has a boolean flag for direction that would have been nice as a parameter, but better yet, why not make it fluent while we are at it!

Normally we might have to write code like this, assuming I want to order by a particular key in a dictionary field on the item record. (This is just a sample of a nasty selector for ordering that we wouldn't want to be copy & pasting):

if(ascending)
{
  items = items.OrderBy(i => i.DictionaryField.Keys.Contains(key) ? i.DictionaryField[key] : null);
}
else
{
  items = items.OrderByDescending(i => i.DictionaryField.Keys.Contains(key) ? i.DictionaryField[key] : null);
}
The next refactoring might be the following, which might not be so bad if we could just use var instead of spelling out the Func type, but c# can't infer a delegate type for a lambda assigned to var, so the explicit declaration stays:

Func<Item, object> nastySelector = i => i.DictionaryField.Keys.Contains(key) ? i.DictionaryField[key] : null;
if(ascending)
{
  items = items.OrderBy(nastySelector);
}
else
{
  items = items.OrderByDescending(nastySelector);
}

Next, I can combine the if/else using the ternary operator:

Func<Item, object> nastySelector = i => i.DictionaryField.Keys.Contains(key) ? i.DictionaryField[key] : null;
items = ascending ? items.OrderBy(nastySelector) : items.OrderByDescending(nastySelector);

This is pretty good, but it's not very readable. A chained Descending method (on OrderBy results) would be nice, ridding us of the Func declaration garble and even making our items assignment much more fluent!

var ordered = items.OrderBy(i => i.DictionaryField.Keys.Contains(key) ? i.DictionaryField[key] : null);
items = ascending ? ordered : ordered.Descending();

So here is the Descending implementation. This is for IEnumerable sources only; if you have an IQueryable collection, cast it to IEnumerable first. It makes use of reflection to set the internal field (descending) on the internal class that you are eternally not supposed to touch :).

public static IOrderedEnumerable<TSource> Descending<TSource>(this IOrderedEnumerable<TSource> source)
{
  var field = source.GetType().GetField("descending", BindingFlags.NonPublic | BindingFlags.Instance);
  if(field == null)
  {
    throw new ArgumentException("Source must be OrderedEnumerable");
  }

  field.SetValue(source, true);

  return source;
}

10.2.2009

ForEach or ForEachCopyIntoNewList?

All of us have desired a ForEach extension method in .Net for a while now, after being spoiled with all the new syntactic sugar with lambdas and linq in c#. We've no doubt all implemented our own; here is the one I use:

public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
{
  foreach (var item in source)
  {
    action(item);
  }
}

My only issue with this, and the foreach loop itself, is that you cannot modify the original collection within your action. There are plenty of cases where we only have a Remove method on a collection and would like to have a RemoveAll. To get around this issue, we can copy the items into a new list and iterate over it. With this we can even remove items from the original collection! However, I am now wondering if this should be the default behavior of a ForEach extension method:

public static void ForEachCopyIntoNewList<T>(this IEnumerable<T> source, Action<T> action)
{
  var items = source.ToList();
  items.ForEach(action);
}
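
For example (a hypothetical usage), the copying variant makes it safe to remove from the very collection being iterated:

var numbers = new List<int> { 1, 2, 3 };
// The copy is enumerated, so removing from the original list does not throw.
numbers.ForEachCopyIntoNewList(n => numbers.Remove(n));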

I am wondering what everyone thinks; obviously this has implications for deferred execution / lazy loaded scenarios, but with that aside, thoughts? I am also looking for a good name to keep this as an alternative extension method, but ForEachCopyIntoNewList is rather icky, so if you have a suggestion please let me know.

9.6.2009

Weighted Average in python

  def WeightedAverage(items, value, weight):
    numerator = sum(value(i) * weight(i) for i in items)
    divisor = sum(weight(i) for i in items)
    return (numerator / divisor) if divisor != 0 else None

8.21.2009

My cool code snippet

I submitted this to the Resharper cool code snippet competition and thought I would share it with everyone else, including why it’s cool!

[Test]
public void $METHOD$$SCENARIO$$EXPECTATION$()
{
  $END$
}

I’ve been doing a lot of TDD lately and I am constantly writing new tests. So my number one snippet is one that creates a simple test. I’ve tweaked this template a lot; it started out as a simple test template with only one editable occurrence for a name. In “The Art of Unit Testing,” Roy Osherove talks about test readability, which starts with the test name itself. He suggests putting the test method name together from three components. First, name the method under test (the Act in AAA); that way related tests for the same method are easy to identify (and actually R# sorts test fixture test methods by name so this is even more useful with Code Cleanup :) ). Second, name it with a scenario, what is unique about the given context of the test (known as the Arrange in AAA). Finally, include the outcome of the test, the expectation (Assert in AAA). This way you know all of what will go into the test before you write it!
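
For example, a hypothetical test name following that scheme (the underscores are just my own preference) might look like:

[Test]
public void Withdraw_InsufficientFunds_ThrowsInvalidOperationException()
{
  // Arrange, Act, Assert...
}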

I gave his naming scheme a try and found out it offers a pit of success. If you cannot name these three parts then your test likely won’t be readable. Anyone should know what a test does without reading it; like any method, it should be intention revealing! Without identifying a scenario explicitly, it may be blurred in several lines of setup (Arrange). Having the scenario in the name makes it explicit and makes me keep it simple (I hate long method names). The test may have too many expectations; again, I hate long method names, so if I find myself putting multiple expectations in the name I will quickly realize I need to move the other expectations to new tests (SRP with testing)!

The same day I tried this test method naming convention was the same day I created this test snippet, and I have kept it ever since. Every time I write a test, I am very explicit and careful not to take on too much with any one test, which is very important with TDD!

This snippet is awesome, not because of the code it creates, but because it sets myself up for success every time I use it, like having Uncle Bob Martin watching over my shoulder :)

8.4.2009

Linq2Sql custom mapping gotcha with entity base class

If you are doing custom Linq to Sql mappings, be aware that Linq to Sql currently doesn’t support having your own base Entity class that you extend all your entities from. I found this out the hard way through painful debugging. However, it seems that if you do this with the version of .net 3.5 sp1 on the Win 7 RC it will work, so this may not be a bug forever!