Posts in testing

1.10.2014

C# async/await makes reactive testing expressive!

Isolating asynchronous behavior isn't always possible, especially when it comes to creating integration tests. However, with the new async/await features in C# 5, testing reactive interfaces is a breeze and can be very expressive.

FileSystemWatcher and events

FileSystemWatcher is commonly used to monitor a directory for changes. It's been around since .NET 1.1. Whenever I work with event-based interfaces, I prefer to wrap them with an Observable. Testing this often requires an integration test. Here's some code to convert the FileSystemWatcher.Changed event to an observable:

IObservable<FileSystemEventArgs> Changed = Observable
    .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>(h => Watcher.Changed += h, h => Watcher.Changed -= h)
    .Select(x => x.EventArgs);

I'm using this to create an ObservableFileSystemWatcher adapter. I'll be referring to this in the following tests.
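As context for the tests below, here is a minimal sketch of what such an adapter might look like. This is my own hedged reconstruction, not the gist's actual code: the constructor shape (an Action<FileSystemWatcher> for configuration) and the Start method are inferred from how the tests use the adapter.

```csharp
using System;
using System.IO;
using System.Reactive.Linq;

// Minimal sketch of an ObservableFileSystemWatcher adapter; a fuller
// version would expose more events (Created, Deleted, Renamed, Errors).
public class ObservableFileSystemWatcher : IDisposable
{
    private readonly FileSystemWatcher Watcher;

    public IObservable<FileSystemEventArgs> Changed { get; private set; }

    // The caller configures the underlying watcher, e.g. c => { c.Path = TempPath; }
    public ObservableFileSystemWatcher(Action<FileSystemWatcher> configure)
    {
        Watcher = new FileSystemWatcher();
        configure(Watcher);

        Changed = Observable
            .FromEventPattern<FileSystemEventHandler, FileSystemEventArgs>(
                h => Watcher.Changed += h, h => Watcher.Changed -= h)
            .Select(x => x.EventArgs);
    }

    public void Start()
    {
        // FileSystemWatcher raises no events until this is enabled
        Watcher.EnableRaisingEvents = true;
    }

    public void Dispose()
    {
        Watcher.Dispose();
    }
}
```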

The following code is captured in this gist; look at the revision history to see the changes between each of the following samples.

Designing a test

Historically, testing asynchronous behavior required blocking calls and hackery to capture information for our assertions. Here's how we might be tempted to start testing the observable wrapper:

[Test]
public void WriteToFile_StreamsChanged()
{
    using (var watcher = new ObservableFileSystemWatcher(c => { c.Path = TempPath; }))
    {
        FileSystemEventArgs changed = null;
        watcher.Changed.Subscribe(c =>
        {
            changed = c;
        });
        watcher.Start();

        File.WriteAllText(Path.Combine(TempPath, "Changed.Txt"), "foo");

        Expect(changed.ChangeType, Is.EqualTo(WatcherChangeTypes.Changed));
        Expect(changed.Name, Is.EqualTo("Changed.Txt"));
    }
}

To test the Changed observable, we Subscribe to Changed and then capture the last result into a local changed variable. This way we can run assertions on the changed notification. Next, we Start the watcher, create a new file, and verify we get a notification.

However, when we run this test, it fails. The file notification is asynchronous and thus non-deterministic. We don't know if it will happen before or after our assertions are executed.

Waiting for the result

Historically, we'd have to modify this test to wait for the changed notification. We could use a ManualResetEvent to wait:

[Test]
[Timeout(2000)]
public void WriteToFile_StreamsChanged()
{
    using (var watcher = new ObservableFileSystemWatcher(c => { c.Path = TempPath; }))
    {
        var reset = new ManualResetEvent(false);
        FileSystemEventArgs changed = null;
        watcher.Changed.Subscribe(c =>
        {
            changed = c;
            reset.Set();
        });
        watcher.Start();

        File.WriteAllText(Path.Combine(TempPath, "Changed.Txt"), "foo");

        reset.WaitOne();
        Expect(changed.ChangeType, Is.EqualTo(WatcherChangeTypes.Changed));
        Expect(changed.Name, Is.EqualTo("Changed.Txt"));
    }
}

The reset will block the test at the call to WaitOne just before our assertions. When the changed notification happens, Set will be called in our subscriber and the test will complete. To be safe, the test has also been modified to Timeout after 2 seconds.

Now our test works, but it's not pretty :(

Making it expressive with async/await

Thanks to the new C# 5 async/await language features, we can fix this. Here's the new way we can write this test:

[Test]
[Timeout(2000)]
public async Task WriteToFile_StreamsChanged()
{
    using (var watcher = new ObservableFileSystemWatcher(c => { c.Path = TempPath; }))
    {
        var firstChanged = watcher.Changed.FirstAsync().ToTask();
        watcher.Start();

        File.WriteAllText(Path.Combine(TempPath, "Changed.Txt"), "foo");

        var changed = await firstChanged;
        Expect(changed.ChangeType, Is.EqualTo(WatcherChangeTypes.Changed));
        Expect(changed.Name, Is.EqualTo("Changed.Txt"));
    }
}

Observables are awaitable, which means we can wait (non-blocking) for the next item. In this case, we use FirstAsync().ToTask() to create a Task that, upon completion, will contain the first result of the observable sequence. This task is named firstChanged above, and it could complete at any time. Of course, in this case nothing will happen until we create the test file. We're free to continue setting up the test because we've already indicated our desire to capture the first result!

Next, we Start the watcher and create the test file, and then comes the magic: we await the result of our firstChanged task. Once the task completes, the changed notification is captured into our changed variable. I love how readable this is: 'changed equals await the first changed'! Once the first changed notification arrives, the assertions run and the test completes.

Strive for Expressive Tests

The historical strategy works, but the new style is more compact and doesn't require as much brain power to understand. This makes the test much easier to write and much easier to maintain. Try this out, see how it improves your own testing, and let me know what you come up with!

8.26.2013

Automated testing as a tool, not an obligation

Yesterday I stumbled upon a discussion about unit testing private functions. Many of the responses were very prescriptive (never do this, just say no). In other testing discussions I've heard many advocate 100% coverage via automated testing, an attempt to test everything. In general I find a lot of prescriptive guidance around testing.

Why do we test?

Whenever I experience prescriptive guidance, I can't help but defiantly ask why?

Why do we test?

  • to verify behavior
  • to build confidence
  • to document behavior
  • to tease out design (TDD)
  • and many others.

These reasons are the obligations that drive me to test. Testing to me is a means to these ends.

Automated testing as a tool

When I first got into automated testing, I jumped on the bandwagon of the test-everything mentality. I actually recommend trying this if you haven't experienced it; nothing helps you see testing as optional faster than experiencing cases where it truly does just get in the way and adds no value.

Once I realized it can be a burden, I started to think of automated testing as a tool (an option, not an obligation). I started to keep track of which situations I felt it added value and which ones it just got in the way.

I found extreme value in automated testing of complex calculations, component logic, projections, aggregate behavior and interactions with remote systems. Other areas, especially integration layers like a controller (MVC) or service boundaries were often way more work than they were worth. In these cases I found focusing on simplicity and keeping logic out of these layers was a better approach.

The reality is there are other equally valid tools for the job. Reading code is another tool to verify behavior and to build confidence. So is manual testing. Sometimes these are just as effective as automated testing. Sometimes they aren't. Sometimes they are very complementary.

When treated as a tool, I quickly realized that many forms of automated testing gave me a new level of confidence I'd never had before. Some tests were saving me hours of time versus manually verifying behavior, even on the first iterations of development! And some tests were just not worth the hassle. Testing literally became a no-brainer for me in many situations.

Time and intuition have given me an almost innate ability to pick up and put down the "automated testing tool," knowing instinctively when it's adding value and when it's not.

Spreading the good word

Once I saw the "light," I wanted to spread the good word. Unfortunately, in my haste, I fell prey to being prescriptive. Though I restricted my prescriptions to areas where I felt a lack of confidence or a lack of verification, I failed to communicate them as such. As a result, I often noticed a lack of quality in the resulting tests: missing scenarios, rushed implementations, copy/pasting, readability issues, etc. And when testing wasn't explicitly requested, there rarely was any.

In situations where I conveyed my concerns about confidence and verification, the quality of the tests dramatically increased.

Instead of asking for tests of X, Y and Z, if we ask "How do we verify X?" or "Are we confident of Y?" or "In what situations does Z apply?" we can weigh the pros and cons of using automated testing versus other tools to answer these questions.

If we look at automated testing as a tool, a tool we need to study, practice, and perfect our usage of, I think people will pick up on the good word faster. If automated testing is optional, but people know the value of it, I bet we'd see more of it as a result; after all, it is a very powerful tool.

5.3.2013

Monitoring Tests - When This Runs Over

Martin Fowler made a reference to Monitoring Tests in his review of Gap Inc.'s SCMS PO System - testing. I really like the idea of formalizing monitoring as a type of testing. Monitoring involves an assertion like any other test. Why not automate the validation of that assertion as the system is running?

Premature optimization is wasteful

I prefer to avoid premature optimization and fussing over the performance of systems. In my experience, systems run orders of magnitude faster than necessary 99% of the time. Sometimes I have a section of code that I don't expect to take more than a specified amount of time, and I use that assumption to design simpler, more reliable code. If that assumption is violated, it would be nice to know about it.

Reacting to disaster

I often find there is a threshold above my assumption that can be crossed and still not cause any problems. At that point though, it might be a good idea to start talking about optimization. In the worst case these assumptions are absent from code, in the best case they may be in a comment:

// Processing settlements shouldn't take more than 10 minutes, it could be an issue if it gets close to 30 minutes because we try to email statements an hour after processing starts.
ProcessSettlements();

Even in this case, going over an hour probably isn't a big deal. There's probably defensive coding in the statement module to abort or defer sending statements if settlements aren't complete. But who wants to wait an hour to find out there might be a problem?

Being Proactive - When This Runs Over

Why not step it up a notch and turn the comment into an executable assertion?

using (WhenThis.RunsOver(Settings.Default.SettlementsWarningDuration).Then(SendSettlementsWarning))
{
    ProcessSettlements();
}

private void SendSettlementsWarning(TimeMonitor monitor)
{
    // put whatever code to send the warning here, this just shows how you can use the constraints of the monitor to format the message
    var message = new
        {
            Subject = "Settlements Are Taking Longer Than Expected",
            Body = "Process started at " + monitor.Started + " and is expected to complete in " + monitor.NotifyAfter + " but it's " + DateTime.Now + " and they are not yet complete.",
            //Recipients
        };
    // message.Send();
}

The duration is configurable so I can adjust it to avoid being pestered, but I can find out well before an hour if I have a problem!
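The gist contains the full example; to give a feel for the mechanics, here's a rough sketch of how WhenThis.RunsOver could be built on a one-shot System.Threading.Timer. The names (WhenThis, TimeMonitor, Started, NotifyAfter) come from the usage above, but the internals, and the assumption that NotifyAfter is a TimeSpan, are mine.

```csharp
using System;
using System.Threading;

public class TimeMonitor : IDisposable
{
    private Timer _timer;

    // when the monitored block started
    public DateTime Started { get; private set; }

    // how long the block may run before the warning fires (assumed TimeSpan)
    public TimeSpan NotifyAfter { get; private set; }

    public TimeMonitor(TimeSpan notifyAfter)
    {
        Started = DateTime.Now;
        NotifyAfter = notifyAfter;
    }

    public TimeMonitor Then(Action<TimeMonitor> warning)
    {
        // one-shot timer: fires the warning once if the block is still
        // running when the duration elapses
        _timer = new Timer(_ => warning(this), null, NotifyAfter, Timeout.InfiniteTimeSpan);
        return this;
    }

    public void Dispose()
    {
        // leaving the using block before the duration elapses cancels the warning
        if (_timer != null)
            _timer.Dispose();
    }
}

public static class WhenThis
{
    public static TimeMonitor RunsOver(TimeSpan duration)
    {
        return new TimeMonitor(duration);
    }
}
```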

Here's the full example with some tests.

Monitoring Tests defined

I would define Monitoring Tests as any test that validates a system as it runs and provides automated feedback to those that maintain the system.

Using AAA syntax, the components of these tests are:

  • Arrange: this already exists, it's the system itself!
    • ie: inputs to the ProcessSettlements method
  • Act: this is the scope we want to validate, which also already exists!
    • ie: the ProcessSettlements method
  • Assert: this is the only missing piece; it requires identifying missing assumptions (or assumptions buried in comments) and making them executable
    • ie: the consumer of the ProcessSettlements method has constraints on how long it should take because it has to run statements later

Summary

There is immense value in making assumptions in a system executable and providing automated feedback. Time tracking, exception management, logging and performance counters are ways we already use this. Monitoring Tests are about having your system tell you about problems as soon as possible, preferably before they occur! Dip your toe in the water, find a framework to help, use a service online.

The value of formalizing monitoring as a type of test is that we begin to think explicitly in terms of the AAA syntax and to methodically engineer what we automate and what we don't. Like other tests, we can evolve these to meet ever-changing monitoring needs.

12.19.2009

Testing tips (That is testability, not TDD part 2)

This is a continuation of my last post, That is testability, not Test Driven Development! In it, I analyzed an article on the ADO.Net team blog, Walkthrough: Test-Driven Development with the Entity Framework 4.0. I wanted to take that analysis a step further and share some insight from the testing perspective itself. Below, I review the article's two tests to produce a list of testing tips.

Update: Some of these test refactorings won't work in the existing sample as-is because they take dependencies on NUnit; you will need to add that if you want to run these tests.

  1. Don’t set up what we don’t need. Before the first test case was even outlined, a large amount of unnecessary setup was created, some of which was never used. Tip: only write setup specifically for the given test case; in other words, write the test case first and then the setup.
    namespace BlogTests
    {
        [TestClass]
        public class CommentTests
        {
            private IBloggingEntities _context;

            [TestInitialize]
            public void TestSetup()
            {
                Person p = new Person()
                {
                    fn = "Jonathan",
                    ln = "Aneja",
                    email = "jonaneja@microsoft.com",
                    User = new User() { Password = "password123" }
                };

                Blog b = new Blog()
                {
                    Name = "Entity Framework Team Blog",
                    Url = "http://blogs.msdn.com/adonet",
                    User = p.User
                };

                Post post = new Post()
                {
                    ID = 1,
                    Blog = b,
                    Created = DateTime.Now,
                    Posted = DateTime.Now,
                    Title = "Walkthrough: Test-Driven Development in Entity Framework 4.0",
                    User = p.User,
                    User1 = p.User,
                    PermaUrl = b.Url + "/walkthrough-test-driven-development-in-Entity-Framework-4.0.aspx",
                    Body = "This walkthrough will demonstrate how to..."
                };

                _context = new FakeBloggingEntities();

                var repository = new BlogRepository(_context);

                repository.AddPerson(p);
                repository.AddBlog(b);
                repository.AddPost(post);

                repository.SaveChanges();
            }
        }
    }
  2. Focus on the unit being tested; do not test it indirectly. In this example, the first test is meant to validate that the same comment isn’t added twice, yet I see no call to any Validate method in the test case!
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptedDoublePostedComment()
        {
            var repository = new BlogRepository(_context);

            Comment c = new Comment()
            {
                ID = 45,
                Created = DateTime.Now,
                Posted = DateTime.Now,
                Body = "EF4 is cool!",
                Title = "comment #1",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };

            Comment c2 = new Comment()
            {
                ID = 46,
                Created = DateTime.Now,
                Posted = DateTime.Now,
                Body = "EF4 is cool!",
                Title = "comment #1",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };

            repository.AddComment(c);
            repository.AddComment(c2);

            repository.SaveChanges();
        }

    public class BlogRepository
    {
        private IBloggingEntities _context;
        ...
        public void SaveChanges()
        {
            _context.SaveChanges();
        }
    }

    public class FakeBloggingEntities : IBloggingEntities, IDisposable
    {
        public int SaveChanges()
        {
            foreach (var comment in Comments)
            {
                ((IValidate)comment).Validate(ChangeAction.Insert);
            }
            return 1;
        }
    }

    public partial class Comment : IValidate
    {
        ...
        void IValidate.Validate(ChangeAction action)
        {
            if (action == ChangeAction.Insert)
            {
                //prevent double-posting of Comments
                if (this.Post.Comments.Count(c => c.Body == this.Body && c.Person.User == this.Person.User) > 1)
                    throw new InvalidOperationException("A comment with this exact text has already been posted to this entry");
            }
        }
    }

    The call to Validate is indirectly tested through BlogRepository.SaveChanges, which calls IBloggingEntities.SaveChanges, which is tested with FakeBloggingEntities. FakeBloggingEntities is a stub class and not real code; it fakes the infrastructure of the EF, when we should be testing our Validate method directly. We should rewrite this test as follows and avoid the need for a repository expectation:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptedDoublePostedComment()
        {
            var repository = new BlogRepository(_context);
    
            Comment c = new Comment()
            {
                ID = 45,
                Created = DateTime.Now,
                Posted = DateTime.Now,
                Body = "EF4 is cool!",
                Title = "comment #1",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };
    
            Comment c2 = new Comment()
            {
                ID = 46,
                Created = DateTime.Now,
                Posted = DateTime.Now,
                Body = "EF4 is cool!",
                Title = "comment #1",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };
    
            c.Validate(ChangeAction.Insert);
        }
    

  3. Within a test, avoid unnecessary setup in the scenario (the arrange portion). We do not need to set ID, Created, Posted, and Title for this test case.
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptedDoublePostedComment()
        {
            var repository = new BlogRepository(_context);

            Comment c = new Comment()
            {
                Body = "EF4 is cool!",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };

            Comment c2 = new Comment()
            {
                Body = "EF4 is cool!",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };

            c.Validate(ChangeAction.Insert);
        }
    

  4. If there are other dependencies for a test case, avoid setting them up as global variables. Instead, create them as needed with helper methods. This increases readability and maintainability: we can quickly see how the person and post are created without going to the setup method. It also decouples the test from the test class, so it can be moved if needed when refactoring. Notice we no longer need the repository to pass setup from the setup method to the test case! BlogRepository.GetPersonByEmail and BlogRepository.GetPostByID were production code that wasn’t needed, but was added without even being tested!
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptedDoublePostedComment()
        {
            var person = GetNewPerson();
            var post = GetNewPost();
            Comment c = GetNewComment();
            c.Body = "EF4 is cool!";
            c.Person = person;
            c.Post = post;
            Comment c2 = GetNewComment();
            c2.Body = "EF4 is cool!";
            c2.Person = person;
            c2.Post = post;

            c.Validate(ChangeAction.Insert);
        }

        private Comment GetNewComment()
        {
            return new Comment();
        }

        private Person GetNewPerson()
        {
            return new Person { User = new User() };
        }

        private Post GetNewPost()
        {
            return new Post();
        }
    

  5. Avoid duplicated strings. If they have meaning, then share that with a common variable. In this example the body is duplicated for testing validation, so duplicatedBody helps convey that in the name of the variable. Also, avoid meaningful variable values unless they are specific to the test case, to avoid distracting future readers. In this case we should replace duplicatedBody with a valid and simple value to imply that it isn’t important to the test case (“EF4 is cool!” now becomes “A”).
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptedDoublePostedComment()
        {
            var person = GetNewPerson();
            var post = GetNewPost();
            var duplicatedBody = "A";
            Comment c = GetNewComment();
            c.Body = duplicatedBody;
            c.Person = person;
            c.Post = post;
            Comment c2 = GetNewComment();
            c2.Body = duplicatedBody;
            c2.Person = person;
            c2.Post = post;

            c.Validate(ChangeAction.Insert);
        }
    

  6. Use AAA test naming to help convey what you are testing and give all variables meaningful names according to the scenario they represent. This method is testing the Validate operation (Act in AAA), the scenario (Arrange in AAA) is a duplicated comment, and the expectation (Assert in AAA) is an InvalidOperationException. The Act_Arrange_Assert naming style from Roy Osherove's The Art of Unit Testing allows test case comprehension without looking at the test code, much like an interface should tell the behavior regardless of implementation! We should rename the comment variables from c and c2 to firstComment and duplicatedComment (Note: if you cannot use var for your type and still understand what the variable represents then you have a misnamed variable).
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Validate_DuplicatedComment_ThrowsInvalidOperationException()
        {
            var person = GetNewPerson();
            var post = GetNewPost();
            var duplicatedBody = "A";
            var firstComment = GetNewComment();
            firstComment.Body = duplicatedBody;
            firstComment.Person = person;
            firstComment.Post = post;
            var duplicatedComment = GetNewComment();
            duplicatedComment.Body = duplicatedBody;
            duplicatedComment.Person = person;
            duplicatedComment.Post = post;

            firstComment.Validate(ChangeAction.Insert);
        }
    

  7. Avoid attributed exception assertions, as any code in the test could fulfill the assertion. Instead, use inline assertions to test this. In the example below we set up an action (delegate) that will execute the method under test, then pass that to a method that runs it and fails the test if the exception is not thrown. (Note: inline exception assertions are a feature in NUnit and several other test frameworks, but not in MSTest that I know of.)
        [TestMethod]
        public void Validate_DuplicatedComment_ThrowsInvalidOperationException()
        {
            var person = GetNewPerson();
            var post = GetNewPost();
            var duplicatedBody = "A";
            var firstComment = GetNewComment();
            firstComment.Body = duplicatedBody;
            firstComment.Person = person;
            firstComment.Post = post;
            var duplicatedComment = GetNewComment();
            duplicatedComment.Body = duplicatedBody;
            duplicatedComment.Person = person;
            duplicatedComment.Post = post;

            Action validate = () => firstComment.Validate(ChangeAction.Insert);

            Assert.Throws<InvalidOperationException>(validate);
        }
    

  8. Arrange test code into three sections: Arrange, Act, and Assert (AAA syntax). You will notice we already refactored to this when applying the tips above. This way the first block of code is always the scenario (arrange), the second block is the action (or deferred action when testing exceptions), and the last block is the assertion. Notice how these flow naturally from the AAA test case naming convention. One of the things Roy points out in his book is that if you cannot name a method with AAA syntax, then you have not thought through what the method is actually testing. If the authors of the blog post had named the method with A_A_A syntax and organized it into AAA blocks, they would have noticed that the operation under test (Validate) was not mirrored in the test code!
        [TestMethod]
        public void Validate_DuplicatedComment_ThrowsInvalidOperationException()
        {
            // Arrange
            var person = GetNewPerson();
            var post = GetNewPost();
            var duplicatedBody = "A";
            var firstComment = GetNewComment();
            firstComment.Body = duplicatedBody;
            firstComment.Person = person;
            firstComment.Post = post;
            var duplicatedComment = GetNewComment();
            duplicatedComment.Body = duplicatedBody;
            duplicatedComment.Person = person;
            duplicatedComment.Post = post;

            // Act
            Action validate = () => firstComment.Validate(ChangeAction.Insert);

            // Assert
            Assert.Throws<InvalidOperationException>(validate);
        }
    

  9. Here is their other test, before applying any of the tips above:
        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void AttemptBlankComment()
        {
            var repository = new BlogRepository(_context);

            Comment c = new Comment()
            {
                ID = 123,
                Created = DateTime.Now,
                Posted = DateTime.Now,
                Body = "",
                Title = "some thoughts",
                Person = repository.GetPersonByEmail("jonaneja@microsoft.com"),
                Post = repository.GetPostByID(1),
            };

            repository.AddComment(c);
            repository.SaveChanges();
        }

    Here is the test after. The Person and Post properties aren't needed for this test, but they are required because Validate uses them for our other test case. If they are part of a "default" test object for a comment, we could move them to GetNewComment() and have them available for any test case.

        [TestMethod]
        public void Validate_BlankComment_ThrowsInvalidOperationException()
        {
            var blankComment = GetNewComment();
            blankComment.Body = string.Empty;
            blankComment.Person = GetNewPerson();
            blankComment.Post = GetNewPost();

            Action validate = () => blankComment.Validate(ChangeAction.Insert);

            Assert.Throws<InvalidOperationException>(validate);
        }
    

  10. Going back to focusing on the unit itself, needing a post to validate duplicated comments is a smell. This highlights that the concern for validating a duplicated comment should be moved to the post class.
    public partial class Post : IValidate 
    { 
      ... 
      void IValidate.Validate(ChangeAction action) 
      { 
        //prevent double-posting of Comments 
        if (this.Comments.Any(c => this.Comments.Any(other => other != c && other.Body == c.Body && other.Person.User == c.Person.User))) 
          throw new InvalidOperationException("A comment with this exact text has already been posted to this entry");
       } 
    } 

    (Note: this is going to validate existing comments that are duplicated in addition to new ones, but it points out the issue of SRP for validation and that it really belongs here, maybe someone else can point out how we can work with change sets to determine if the duplication was already in place or is new)

    The first test would be moved to a PostTests test class. We no longer need the relationship from Comment->Post in our domain; this is domain distillation! Notice how easy it is to move our first test to a new PostTests class, since we don't have a dependency on the setup method! If we needed to share the GetNewXyz() methods, we could even move those to a base test fixture.
        // PostTests.cs
        [TestMethod]
        public void Validate_DuplicatedComment_ThrowsInvalidOperationException()
        {
            var person = GetNewPerson();
            var post = GetNewPost();
            var duplicatedBody = "A";
            var firstComment = GetNewComment();
            firstComment.Body = duplicatedBody;
            firstComment.Person = person;
            post.AddComment(firstComment);
            var duplicatedComment = GetNewComment();
            duplicatedComment.Body = duplicatedBody;
            duplicatedComment.Person = person;
            post.AddComment(duplicatedComment);

            Action validate = () => post.Validate(ChangeAction.Insert);

            Assert.Throws<InvalidOperationException>(validate);
        }

        // CommentTests.cs
        [TestMethod]
        public void Validate_BlankComment_ThrowsInvalidOperationException()
        {
            var blankComment = GetNewComment();
            blankComment.Body = string.Empty;

            Action validate = () => blankComment.Validate(ChangeAction.Insert);

            Assert.Throws<InvalidOperationException>(validate);
        }
    

    Look how simple the second test case is now! Beautiful, the intent is so clear!

  11. Don't write code beyond what a test case expects. In the sample above, we have ChangeAction.Insert as an input to Validate, but the test doesn't dictate that in the scenario. We might want this test case to cover both insert and update, so we can parameterize the test for both ChangeAction.Insert and ChangeAction.Update, using NUnit's parameterized tests. I like to give a distinct name to each scenario when I parameterize a test, so they are explicit when a test runner reports a failure. If we wanted different behavior we could create two tests, but in this case we don't. Be very cautious not to go overboard with parameterized tests; they should only be used for the very simplest of test case duplication.
        // CommentTests.cs
        [Test]
        [TestCase(ChangeAction.Insert, TestName="Validate_InsertBlankComment_ThrowsInvalidOperationException")]
        [TestCase(ChangeAction.Update, TestName="Validate_UpdateBlankComment_ThrowsInvalidOperationException")]
        public void Validate_BlankComment_ThrowsInvalidOperationException(ChangeAction changeAction)
        {
            var blankComment = GetNewComment();
            blankComment.Body = string.Empty;

            Action validate = () => blankComment.Validate(changeAction);

            Assert.Throws<InvalidOperationException>(validate);
        }
    

    These are a few testing tips that I use to write and maintain tests every day.

    Happy Testing!

12.19.2009

That is testability, not Test Driven Development!

I applaud efforts to encourage test driven development; however, I find myself cringing at the examples being produced by the framework designers we are supposed to look up to. I noticed this with the buzz around TDD and ASP.Net MVC, and now that buzz is transferring to the Entity Framework. It is wonderful that these frameworks are designed with testability in mind, but it is up to the developer to actually employ TDD when composing their applications. There are other ways to test, such as TLD (test last) or simple TFD (test first), without the extra intricacies of TDD. So enough with TDD and frameworks, just show us the testability! Testability, of course, means being able to test our code that uses your framework, i.e. we can isolate your infrastructure from our components.

If you want to demonstrate that for us, just demonstrate it with simple tests that use stubs. It's rather crazy to try to show it through TDD! TDD is a very incremental process that is almost impossible to demonstrate in a blog post. Anyone who has read Kent Beck's book (yes, I said book, not blog) knows that by the end he produces barely a handful of tests! That is the very nature of documenting the process, step by step, of using TDD. I guess blog authors just love to get TLAs into their posts, especially TDD; look, I did it too! This often leads to confusion about what TDD is and leads many to think it is synonymous with testing.

A very recent example is the Entity Framework article "Walkthrough: Test-Driven Development with the Entity Framework 4.0." From its title, this post seems to be an example of TDD with EF4. Reading through it, the first test doesn't get written until step 14! Prior to that, a bunch of production code is composed; that is known as TLD (test last). Then a bunch of unrelated test code is produced before the heart of what they are testing is addressed in step 22. This happens because the author takes an integration-style approach, testing validation from the repository level rather than from the class that implements the concern of validation. That leads to all the excess code produced before step 14, and in steps 15 to 21, that is not related to the test cases. I imagine this happened because the author wanted to demonstrate stubbing with a fake repository in EF, which is a good thing to demonstrate, but it doesn't require TDD to demonstrate it. SRP of demonstrations, please! This post would probably be better named "Walkthrough: Testability with the EF 4.0." No offense to the author, but please help us readers know what we are getting into, especially if we are learning something new!

The basics of TDD that are missing from this post:

  1. Test list – a list of scenarios that you have yet to write, a simple to do list. As TDD is very involved with one test at a time, this is a scratch pad of future tests.
    1. Items are added as you go
    2. Items are crossed off as they are completed
  2. Red/Green/Refactor workflow
    1. Write test code first
      1. Don’t write unnecessary test code (especially in setup)
        1. Look at the setup method in step 11: we are writing setup without even knowing what our first test case is!
          1. Person – fn,ln,password are never used
          2. Blog – name, url are never used
          3. Post – Created,Posted,Title,PermaUrl (has string concatenation!!!) and Body are never used
        2. Look at the setup in step 14 of AttemptedDoublePostedComment
          1. Each comment defines Id, Created, Posted, Body and Title, which are never used! Note: Body not being empty would trigger a failure when writing the next test, but you don't deal with that until you write that test, not when TDDing.
        3. Unnecessary test code decreases maintainability and readability of tests
    2. Get it to compile
      1. Compilation errors are failures
      2. Write just enough code to make it compile
    3. Get it to run and fail
      1. Run time failures of expectations
    4. Implement just enough production code to get it to pass under the given scenario
      1. This often means incomplete or incorrect production code, just enough to get it to pass
        1. Why does step 19 implement Attach/Detach methods, why does it implement queryable operations?
        2. Why does step 20 implement GetPersonByEmail, GetPostById? These are smells that we are testing too much at once! This is a unit test smell I will talk about in my next post with testing tips. Some of this is needed because of the integration style testing that is going on in this post, my testing tips will tackle and remove this complexity.
          1. We now have production code that isn't tested and likely needs to be tested: things like GetPersonById and all the other query/add operations on the BlogRepository!
    5. Refactor
      1. Remove duplication between test and production code, or in production code alone
      2. Perform any other optimizations
    6. Make sure test still passes
    7. Repeat
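The red/green steps above can be sketched with the blank-comment scenario from the testing tips. This is a hypothetical minimal version: the Comment class and ChangeAction enum here are stand-ins I made up for illustration, not the article's actual code.

```csharp
// CommentTests.cs -- written first; it fails (red) until Validate throws.
[Test]
public void Validate_BlankComment_ThrowsInvalidOperationException()
{
    var blankComment = new Comment { Body = string.Empty };

    Action validate = () => blankComment.Validate(ChangeAction.Insert);

    Assert.Throws<InvalidOperationException>(validate);
}

// Comment.cs -- the green step: just enough production code to pass,
// nothing more. Deliberately incomplete: no other validation rules exist
// yet, because no test demands them.
public enum ChangeAction { Insert, Update }

public class Comment
{
    public string Body { get; set; }

    public void Validate(ChangeAction changeAction)
    {
        if (string.IsNullOrEmpty(Body))
            throw new InvalidOperationException("A comment requires a body.");
    }
}
```

The refactor step would then remove any duplication this pass introduced, before moving on to the next item on the test list.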

I hope that this helps clear up some of the confusion that might exist around the differences between testability and TDD. If you want to know more I highly suggest Kent Beck’s book “Test Driven Development: By Example”, it is a great introduction to TDD with fully documented walkthroughs!

See my next blog post where I continue an analysis of this article and offer up some testing tips.

7.30.2009

NUnit 2.5.1 framework with Silverlight 3

We’re doing some Silverlight development at work and I’m doing some off to the side. I like to stick with a familiar, proven unit test framework. NUnit 2.5.1 has some great new attributes to use during testing (these actually came out in 2.5), such as Timeout and MaxTime. To take advantage of them, you have to have a version of NUnit compiled as a Silverlight library, or at least recompiled referencing the Silverlight core (System.dll and mscorlib versions 2.0.5.0).

In the spirit of continuing what a few other people have done, I have compiled the latest version of the NUnit framework into an SL library and want to make it available. The work of the following guys helped me use NUnit up to version 2.4 and helped me get 2.5.1 going as well!

A nice template that included the SL framework version, if someone wants this updated please let me know and I can do that too, otherwise I’ll just post my dlls: http://weblogs.asp.net/nunitaddin/archive/2008/05/01/silverlight-nunit-projects.aspx

A set of instructions to port the framework to SL (I diverged on some of these, see below): http://www.jeff.wilcox.name/2009/01/nunit-and-silverlight/

Here is the list of changes and what didn’t make the cut:

  • BinarySerializableConstraint - SL core doesn’t have BinarySerializer
  • AssertionException, IgnoreException, InconclusiveException, SuccessException - cannot be serialized, SL core doesn’t have SerializationAttribute (goes along with BinarySerializer)
  • RequiresSTA and RequiresMTA attributes

How was this pulled off?

  • For both nunit.framework and nunit.framework.tests, I added new Silverlight library projects and linked in the code files that the respective projects already referenced. I also added a nunit.silverlight project to drop in some of the following additional classes that are needed but not in the SL core.
  • I want to thank Jeff Wilcox for his original work on some shims

    • ArrayList, to which I added a ToArray method that takes a type; of course this was ripped off of the ArrayList in the .Net core (via Reflector), so someone should let me know if MSFT is going to come take away my computer for publishing this
    • Hashtable
  • I added a ListDictionary which is just a version of the generic Dictionary class with key and value set to type object
  • String.Compare(string,string,bool) - a version of string comparison not included in the silverlight core, again thank you reflector
  • I made up an ApplicationException as this wasn’t in the SL core
  • I made a version of the non generic Comparer class that isn’t in the SL core, thanks reflector
  • I added the compiler directive NET_2_0
  • A lot of the functionality that was removed was done so by leveraging the SILVERLIGHT compiler directive to exclude the relevant code.
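As a sketch of the two techniques above (illustrative only, not the exact NUnit source): a type missing from the SL core gets a shim, and members that depend on missing types get compiled out with the SILVERLIGHT symbol.

```csharp
// nunit.silverlight sketch: a stand-in for System.ApplicationException,
// which the Silverlight core lacks. Illustrative, not the real shim code.
namespace System
{
    public class ApplicationException : Exception
    {
        public ApplicationException() { }
        public ApplicationException(string message) : base(message) { }
        public ApplicationException(string message, Exception innerException)
            : base(message, innerException) { }
    }
}

// Elsewhere in the framework, incompatible members are excluded via
// conditional compilation; serialization support is the typical case:
#if !SILVERLIGHT
[Serializable]
#endif
public class ExampleException : Exception
{
    public ExampleException(string message) : base(message) { }

#if !SILVERLIGHT
    protected ExampleException(System.Runtime.Serialization.SerializationInfo info,
        System.Runtime.Serialization.StreamingContext context)
        : base(info, context) { }
#endif
}
```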

I also went into the framework tests and tweaked a new solution to work with silverlight. I had to modify the tests as follows:

  • CanTestContentsOfSortedList excluded - no SortedList in SL
  • Remove references and tests with System.Data types (NotGreaterEqualIComparable, NotGreaterIComparable, NotLessEqualIComparable, NotLessIComparable)
  • Remove SameColorAs constraint, which was only referenced by NamedAndUnnamedColorsCompareAsEqual, which was commented out anyways
  • Remove BinarySerializableTest and XmlSerializableTest tests that depend on Serializable attribute
  • Changed DifferentEncodingsOfSameStringAreNotEqual test to use System.Text.Encoding.UTF8.GetString(byte[],start,length) overload as (byte[]) overload isn’t available in SL
  • Removed CodeShouldNotCompile and CodeShouldNotCompileAsFinishedConstraint as Microsoft.CSharp.CSharpCodeProvider is not available in SL

So, altogether the changes were minimal and there are 1273 tests passing on the SL version of the framework!

To use these assemblies:

  1. Setup a SL library as your test library.
  2. Download the SL libraries here
  3. Include a reference to the nunit.framework.dll.
  4. If you have trouble getting tests to run (missing file exceptions to SL assemblies) then set “Copy Local: true” on the appropriate assembly reference in the properties. I always have to do this for the system SL assembly.
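With the reference in place, tests read exactly like they do against the desktop framework. A minimal sketch (the test class and names are made up) using one of the new 2.5 attributes mentioned above:

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Timeout is one of the new NUnit 2.5 attributes that motivated this port.
    [Test]
    [Timeout(1000)]
    public void Add_TwoNumbers_ReturnsSum()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}
```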

If anyone wants more source for what I did, or more bits of the NUnit framework (like mocks or w/e) ported, please let me know and I will get that done, if possible :) For now I’m happy with just the framework, as I use Rhino.Mocks (which already has an SL port) as an isolation framework.

-Wes