Posts from July 2013


7.23.2013

Adding OWIN to an empty ASP.Net MVC app

I've been playing around with the new ASP.Net MVC 5 samples and wanted to share how you can quickly inject OWIN into an empty MVC app, or for that matter any ASP.Net app. I'll admit I haven't found much documentation about this yet, so this is just me hacking around and sharing what I found.

  1. Start out by making an empty MVC project.
  2. Install Microsoft.Owin.Host.SystemWeb
    • Install-Package Microsoft.Owin.Host.SystemWeb -Prerelease
    • This package contains a type, OwinHttpModule, that gets loaded dynamically via Microsoft.Web.Infrastructure and "injects" the OWIN infrastructure.
  3. Add a Startup class to configure OWIN
    • This is where you configure your OWIN middleware; the Startup class and its Configuration method are picked up by convention.
          using Owin;

          public class Startup
          {
              // Called by convention at application start to build the OWIN pipeline
              public void Configuration(IAppBuilder app)
              {
              }
          }
  4. Do something with OWIN
    • Install-Package Owin.Extensions
      • Nice extension methods to simplify the verbosity of creating middleware
    • I found these ideas in Getting Started With OWIN, Katana, and VS2013
    • Something simple like this can short-circuit the pipeline and return a result directly from OWIN middleware:
      app.UseHandler((request, response) => response.WriteAsync("OWIN Hello World"));
      • This is very similar to the style you'll find in nodejs express apps.
    • This next example will let the request pass through the OWIN infrastructure and be handled by MVC, but it will tack a header onto the response.
      app.UseHandler((request, response, next) =>
      {
          response.AddHeader("Hello", "World");
          return next();
      });
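
Putting it together, here's a rough sketch of what the Startup class might look like with the pass-through handler wired up (a minimal sketch; it assumes the UseHandler extension from Owin.Extensions is in scope, and the "Hello" header is just the placeholder from the example above):

using Owin; // IAppBuilder; UseHandler comes from the Owin.Extensions package

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Pass-through middleware: add a header, then let the rest of the
        // pipeline (and ultimately MVC) handle the request.
        app.UseHandler((request, response, next) =>
        {
            response.AddHeader("Hello", "World");
            return next();
        });
    }
}
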
7.16.2013

Why I value SmartGit/Hg

There certainly are lots of great tools for working with git. I've been using SmartGit/Hg for years to help visualize aspects of git repositories, commits and making changes. Here's a list of how SmartGit/Hg adds value to my development process:

Note: the official name of the product is now SmartGit/Hg, however I'm focusing only on the git portion of it.

The Index Editor

  • A 3-way view of HEAD, the index and the working tree!
  • I can manually edit the index and the working tree.
  • Each highlighted difference can be moved to the index or out of the index with shortcut keys or mouse clicks on the "x" and "<<" icons.
  • It helps me keep my mind around what's in the index versus the working tree.

Outgoing commits

  • Fantastic view of commits I haven't pushed remotely (based on the current branch and its remote tracking branch).
  • I can right click to join commits, edit commit messages and reorder commits all without having to remember the intricate commands involved in an interactive rebase :)
  • And, it prevents me from changing pushed commits!

Committing

  • Prior to committing I can use shortcut keys to step through changes across all files, which makes commit reviews very effective. Plus, I'm reviewing with the rich visual diff provided by SmartGit/Hg, not a unified diff of added/removed lines.
  • Amending the last commit -> it's just a check-box (or shortcut key) on commit and it pulls in the last commit message, very slick!
  • Shortcut key to amend last commit message, F2 in Windows which seems natural :)
  • Shortcut key to undo last commit (soft reset), Ctrl+Shift+K in Windows, the "opposite" of committing Ctrl+K

Fantastic log

  • Fantastic log tool with the ability to turn branches on/off, show lost commits, remote branches and stashes all in one spot. Also, I can easily look at the log of individual files and directories.
  • Lost Heads shows unreachable commits, so if I make a mistake I can get commits back. It also helps me understand how operations like rebase work in git, by showing the original commits next to the new ones.

  • Merging, cherry-picking, reset and rebase are as simple as right-clicking a commit in the log. No need to mess with the myriad of commit references (SHAs, branches, etc.). I can focus on what, not how.

Fantastic diff and merge tools

  • The diff tool goes above and beyond most diff tools, showing what code was removed, added or changed with distinct coloring and highlighting that flows across the diff panels, even when scrolling through changes.
    • See the swooshes indicating things removed from the index in purple.
    • Things added in the working tree as green.
    • Pink for changes to a line.
    • All the little "x" "<<" ">>" actions to undo these changes.
  • Can quickly switch diff from HEAD v Index to Index v Working Tree.
  • When I hit a merge conflict I can quickly step through a three-way merge of each conflicted file.
    • For each conflicting hunk I can choose which way changes should flow and use shortcut keys to make corrections in a very visual manner, which makes resolving merge conflicts much less of a hassle.
    • I get the same goodness of the "x" "<<" ">>" actions as in the diff above.

Concise file status view

  • Helps track changes to both the index and working tree and see their state at a glance.
  • It automatically updates to reflect changes so I can quickly glance at what files I've changed.
  • I don't have to worry about paths and complex command line status output.
  • I can sort by index state in the status view so that, during a merge, conflicted files show up at the top where they're quick to resolve.
  • File filters to show/hide ignored files, unchanged files, new files and staged files.
  • I can also search for files by name, with results updating as I type.

Customizable Interface

  • Each window within the interface can be customized for placement and size, allowing me to focus on the things that I use most frequently.
  • There are also two views, Main and Review, each of which can be customized separately. I use Main to work with my repository in general (logs, branches, outgoing commits) and Review to work on particular commits (message, diff, changed files).

Abstracts away SVN :)

  • Will clone the SVN repository and set up a local git repository
    • Naturally, my local work is rebased when integrating with the central SVN repository
  • All the power of git for local commits without manually setting this up

Abstractions Apply to Hg

  • Seamlessly work with an Hg repository; I can forget most of the differences between git and Hg.
  • All the interactive rebase shortcuts that work with git work with Hg too (squash, drag-and-drop reorder, edit messages)
    • SmartGit/Hg takes care of configuring the appropriate Hg extensions!
    • I can focus on what, not how, of cleaning up and organizing my local commits before pushing

Misc

  • Ctrl+Z in Windows to discard changes in one or more files, which seems very natural to me :)
  • Stage/unstage buttons front and center in the UI, no need to remember tedious CLI operations and pathing to stage/unstage files
  • gravatar to show pictures of authors :)
  • Integration with lots of hosting providers, quickly clone repositories
  • Fully customizable shortcut keys.
  • SmartGit/Hg is constantly being updated with great new features.

Conclusion

I personally prefer both the CLI and SmartGit/Hg when working with git repositories. I find that using SmartGit/Hg lets me focus more on the concepts of git and less on the specifics of executing commands. SmartGit/Hg provides nice visual confirmation of what I'm doing while providing all the power of keyboard control.

7.10.2013

Pattern: Value Driven Documentation

Applicability

All the time.

Definition

Documentation is a feature and thus runs the risk of being vestigial. Make sure value drives documentation requirements.

Mechanics

First, I would suggest never adding documentation until you know why you need it. Here's why:

  • I've had plenty of experience creating documentation that no one ever looked at.
  • Documentation must be maintained. If the software is updated, the documentation must be too, or it will cease to be useful.
  • If the documentation isn't stellar, people won't use it.
  • Documentation is often lost if it's not embedded into the software.
  • Increasing the volume of documentation decreases its utility; readers are likely to be overwhelmed and give up.

Consider the value first. In the case of documentation, it's probably best to start with none and later add it after you have assessed the costs of not having it (training/support). If you will have thousands of users, then you may know what the value could be and in this case it would be prudent to discuss documentation upfront. If you start the conversation with "What would the value of documentation be?", instead of "We need X documentation!", you are more likely to minimize the documentation you create and target it to achieve maximal value. Use the value you hope to achieve as a filter to limit the documentation.

Also, make sure you put yourself in the audience's shoes. Ask them for input and make sure you are thinking about their actual needs versus your perception of their needs. As with any communication, you run the risk of failing to communicate!

Tips

  • Default to no documentation.
  • Only add documentation after you have identified the desired value.
  • Focus on complexity, don't document simple things.
  • Think of documentation as one of many ways to communicate.
  • Consider the value of a more intuitive interface versus documentation.
  • Consider the value of training versus documentation.
  • Read Martin Fowler's take on documenting system internals, The Almighty Thud.
  • One suggestion: the most successful documentation I've been involved with was very task-oriented and geared towards tasks the users were already familiar with.

7.2.2013

Part 4 - Testing with ddfplus - Reactive benefits with commodity option barrier events

In Part 3 I introduced integrating with Barchart Market Data Solutions' real-time market data feeds (the ddfplus quote service) for equities, indices, futures and foreign exchange markets. In this post I'll integrate the testing and separation techniques from Part 2.

For reference:

Using a real system means I can introduce separation and testing based on real needs instead of the arbitrary considerations in Part 2. When a ddfplus client connection occurs, a set of what I call "uninitialized" sessions is returned for any symbol that has no data in the current session. These quotes will have the symbol set, but the current session will have zeros for Open/High/Low/Close and a Timestamp of default(DateTime). For my use case of monitoring commodity option barrier events, I don't need to know about these, so I want to devise a way to exclude them.

Where to start

I could just write the following code:

private static IObservable<Quote> CreateQuoteStream(Client client)
{
    return Observable
        .FromEventPattern<Client.NewQuoteEventHandler, Client.NewQuoteEventArgs>(h => client.NewQuote += h, h => client.NewQuote -= h)
        .Select(e => e.EventArgs.Quote)
        .Where(q => q.Sessions["combined"].High > 0);
}
I could even extract a method to capture the intent:

.Select(e => e.EventArgs.Quote)
.Where(IsInitialized)

private static bool IsInitialized(Quote quote)
{
    return quote.Sessions["combined"].High > 0;
}

Now I wonder, how do I test this? I could fire up my console example and verify that the uninitialized sessions are filtered out. But what if I can't find any that are uninitialized right now? What if I can but they become initialized when I do my manual inspection?

The reality is I can't control what the market is doing in order to verify this code works, so I need another way. Reproducibility is absolutely central to testing. Maybe this example is simple enough that you might choose to just trust it works, but I know from experience that a few simple tests can isolate and verify this behavior, and they will pay dividends immediately.

How to reliably test this

Naturally I'd like to setup a test case with an uninitialized session, something like:

var uninitializedQuote = new Quote();
uninitializedQuote.Symbol = "CZ2013";
var uninitializedSession = new Session();
uninitializedQuote.Sessions["combined"] = uninitializedSession;
uninitializedSession.High = 0;
...

This is working directly against types in the ddfplus library. There are many reasons this isn't a good idea, chief of which is that the above code won't even compile because the external API is encapsulated. Therefore, I want to introduce a layer of separation where I can inject my testing strategy.

This separation is how I approach working with any external system. Even if the above code were to compile I would still use separation to reduce the surface area of an external system's interaction with my own. Here's why:

  • I'll learn things about the external system, and I can make them explicit in this layer of separation.
  • I'll make my assumptions explicit so someone else (my future self included) can challenge them.
  • I want reproducible tests.
  • I want to test my interactions with external systems.
  • I want to simulate external interactions, which is especially valuable with market data where it may take weeks or months for a given condition to occur.
  • I want confidence in what I'm building.
  • I want to have a conversation with others and tests are a great place to start.
  • Less so, but a benefit nonetheless - the external API may be updated and I want to minimize the impact.

Unit Testing Projections

The first layer of separation I can introduce is a simple projection from the API type to a type I control. A projection acts like an adapter, abstracting interactions with a type you don't control behind one you do. Once I have this translation tested, I can build further testing on top of it. I'm going to call this projection ParsedDdfQuote.

I'm still stuck with the issue that I can't create the API types. However, in the limited case of testing my projection, I'm willing to use reflection to get around the encapsulation. Here's my first test, mapping high and low, two things I need in order to monitor barrier events:

Project high and low

[Test]
public void Create_FromQuote_MapsHighAndLow()
{
    var quote = new Quote();
    var session = new Session();
    quote.AsDynamic().AddSession("combined", session);
    session.AsDynamic().High = 2;
    session.AsDynamic().Low = 1;

    var parsed = new ParsedDdfQuote(quote);

    parsed.High.ShouldBeEquivalentTo(2);
    parsed.Low.ShouldBeEquivalentTo(1);
}

  • Setup
    • Create a quote and a session
    • Add the session to the quote
      • using reflection via AsDynamic, an extension method from ReflectionMagic, a package that simplifies working with reflection
    • Set the high and low
  • Action
    • create the ParsedDdfQuote from the Quote
  • Assertions
    • high and low are mapped to the parsed projection

Here's the code to make it pass:

public class ParsedDdfQuote
{
    public ParsedDdfQuote(Quote quote)
    {
        var combinedSession = quote.Sessions["combined"];
        High = Convert.ToDecimal(combinedSession.High);
        Low = Convert.ToDecimal(combinedSession.Low);
    }

    public decimal High { get; set; }
    public decimal Low { get; set; }
}

Project symbol

I also need the symbol:

[Test]
public void Create_FromQuote_MapsSymbol()
{
    var quote = new Quote();
    AddCombinedSession(quote);
    quote.AsDynamic().Symbol = "CZ13";

    var parsed = new ParsedDdfQuote(quote);

    parsed.Symbol.ShouldBeEquivalentTo("CZ13");
}

private static Session AddCombinedSession(Quote quote)
{
    var session = new Session();
    quote.AsDynamic().AddSession("combined", session);
    return session;
}

This test is very similar to the last. I extracted AddCombinedSession to share the setup of the combined session.

The code to make it pass:

public class ParsedDdfQuote
{
    public ParsedDdfQuote(Quote quote)
    {
        ...
        Symbol = quote.Symbol;
    }

    ...
    public string Symbol { get; set; }
}

Detect if session is initialized

Finally, I'd like to embed my logic for detecting "uninitialized" sessions into my projection:

[Test]
public void IsInitialized_UninitializedQuote_ReturnsFalse()
{
    var uninitialized = new ParsedDdfQuote {High = 0};

    uninitialized.IsInitialized().Should().BeFalse();
}

  • Setup
    • Notice the explicit assumption about what uninitialized means! High = 0

The code to make it pass:

public class ParsedDdfQuote
{
    ...
    public bool IsInitialized()
    {
        return High > 0;
    }
}

That's it, we now have a simple, tested projection in place!
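
For reference, pulling together the snippets from the tests above, the whole projection so far looks roughly like this (a sketch; the parameterless constructor is implied by the object-initializer usage in the tests):

public class ParsedDdfQuote
{
    // Used by tests (and any code) that want to build a quote directly
    public ParsedDdfQuote()
    {
    }

    public ParsedDdfQuote(Quote quote)
    {
        // Map only what barrier event monitoring needs from the external type
        var combinedSession = quote.Sessions["combined"];
        High = Convert.ToDecimal(combinedSession.High);
        Low = Convert.ToDecimal(combinedSession.Low);
        Symbol = quote.Symbol;
    }

    public decimal High { get; set; }
    public decimal Low { get; set; }
    public string Symbol { get; set; }

    // Explicit assumption: a session with no data yet reports a high of zero
    public bool IsInitialized()
    {
        return High > 0;
    }
}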

Inject separation

Now I want to integrate this with my quote stream. I've simply chosen to put this into my ddfplusQuoteSource:

public readonly IObservable<ParsedDdfQuote> QuoteStream;

private static IObservable<ParsedDdfQuote> CreateQuoteStream(Client client)
{
    return Observable
        .FromEventPattern<Client.NewQuoteEventHandler, Client.NewQuoteEventArgs>(h => client.NewQuote += h, h => client.NewQuote -= h)
        .Select(e => e.EventArgs.Quote)
        .Select(q => new ParsedDdfQuote(q));
}

This encapsulates all interactions with the Quote type inside of my ddfplusQuoteSource. Any smoke testing I do of creating this observable would suffice to verify any logic in this integration. I can rely on the Rx framework's testing of Select and my tests of ParsedDdfQuote for the rest.
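
For context, here's a rough sketch of how these pieces might sit together inside ddfplusQuoteSource (the constructor wiring and Client setup are assumptions on my part; connection details are omitted):

public class ddfplusQuoteSource
{
    public readonly IObservable<ParsedDdfQuote> QuoteStream;

    public ddfplusQuoteSource()
    {
        // Creating and connecting the ddfplus Client is glossed over here
        var client = new Client();
        QuoteStream = CreateQuoteStream(client);
    }

    private static IObservable<ParsedDdfQuote> CreateQuoteStream(Client client)
    {
        return Observable
            .FromEventPattern<Client.NewQuoteEventHandler, Client.NewQuoteEventArgs>(h => client.NewQuote += h, h => client.NewQuote -= h)
            .Select(e => e.EventArgs.Quote)
            .Select(q => new ParsedDdfQuote(q));
    }
}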

Testing filtering in the stream

Now that I am working with my own type, I can verify my filtering behaves as I expect. I've decided to implement this filtering as an extension method on IObservable<ParsedDdfQuote>. This way I can compose this into any consumer code. This is my use case:

var source = new ddfplusQuoteSource();
source.QuoteStream
      .ExcludeUninitializedQuotes()
      .Subscribe(PrintQuote);

Now to test this. Filtering is obviously a simple technique, and with my abstracted ParsedDdfQuote.IsInitialized it's even less risky. At times I will argue that testing a well-encapsulated method like IsInitialized is sufficient. Nonetheless, I want to demonstrate a technique that will become very useful as we do more advanced things with this stream of quotes. Here's my test method, very similar to the one in Part 2.

[Test]
public void ExcludeUninitializedQuotes()
{
    var uninitializedQuote = new ParsedDdfQuote {High = 0};
    var scheduler = new TestScheduler();
    var quotes = scheduler.CreateHotObservable(ReactiveTest.OnNext(201, uninitializedQuote));

    var onlyInitialized = quotes.ExcludeUninitializedQuotes();
    var quotesObserver = scheduler.Start(() => onlyInitialized); // this Start overload subscribes at tick 200 by default

    quotesObserver.Messages.Should().BeEmpty();
}

Note: For more details on virtual time scheduling with the reactive framework, check out Testing Rx Queries using Virtual Time Scheduling.

  • Setup
    • Create an uninitializedQuote with my new type ParsedDdfQuote
    • Create a test scheduler and a test observable stream of ParsedDdfQuote
      • Quotes, like other event streams, are "hot" observables in that they fire off information regardless of whether anyone is subscribed.
      • Schedule my uninitializedQuote to fire at tick 201. Ticks are how time is simulated with Rx-Testing; 201 is significant because, by default, calls to TestScheduler.Start subscribe at tick 200.
  • Action
    • filter the quotes with ExcludeUninitializedQuotes
    • simulate and capture quotes into quotesObserver
  • Assertion
    • I shouldn't receive any messages (quotes)

The code to make it pass:

public static IObservable<ParsedDdfQuote> ExcludeUninitializedQuotes(this IObservable<ParsedDdfQuote> quotes)
{
    return quotes
        .Where(q => q.IsInitialized());
}
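
As a quick sanity check in the other direction, a complementary test (my addition, following the same pattern as above) could verify that an initialized quote flows through the filter:

[Test]
public void ExcludeUninitializedQuotes_InitializedQuote_IsPassedThrough()
{
    var initializedQuote = new ParsedDdfQuote {High = 2};
    var scheduler = new TestScheduler();
    var quotes = scheduler.CreateHotObservable(ReactiveTest.OnNext(201, initializedQuote));

    var onlyInitialized = quotes.ExcludeUninitializedQuotes();
    var quotesObserver = scheduler.Start(() => onlyInitialized);

    // The quote scheduled at tick 201 should be delivered to the observer
    quotesObserver.Messages.Should().HaveCount(1);
}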

Conclusion

The reactive framework makes testing and composing interactions with market data streams a breeze. Leveraging the power of LINQ to separate what from how allows us to focus on adding business value, not implementing plumbing. Next, we'll look at more complexity and see how this declarative style of programming really pays off.