7.23.2010

What we are doing with HtmlTags, Part 1: Why HtmlTags?

Back in January, Jeremy Miller posted a nice article on HtmlTags: Shrink your Views with FubuMVC Html Conventions. We were immediately in love with the idea and have spent several months adapting the conventions to work with our ASP.Net MVC applications. I was having a conversation with Ryan recently, reflecting on how far we’ve come and how we had no vision of that when we first read that article. I want to share some of that, so I will be working on a series of blog posts to show “What we are doing with HtmlTags.” But first: why?

A lot of buzz has been generated around reusability in views, particularly html. Html is a highly compositional language, but its lack of any ability for reuse leads to copy/paste hell. View Engines (VE) employ several different philosophies to address this, and MVC2 was recently released with its templated helpers. The problem with reusing html directly is that it loses the benefit of separating the concern of describing what we need from the concern of generating it: classically, the difference between “What” I want versus “How” I get it. The power of the Linq abstraction shows the benefits of separating “What” from “How.” HtmlTags is another model based approach that separates the “What” and the “How.” It goes even further by allowing the “How” to be configured via conventions. Here is a list of the benefits this separation has brought to our team:

  1. View engine agnostic. HtmlTags is not coupled to any particular VE. It can even be returned from a controller action and rendered directly to the response!
  2. Testability without spinning up a costly integration environment: I can write assertions directly against the model!
    1. Html is a language that lacks the ability to query it in any reasonable fashion.
    2. We integrate security as an aspect into the conventions that build our Html. Now we can write a simple test to ensure that security shows/hides elements by asserting their presence or absence in the model! Try parsing html to do this after you spin up an integration test to access the view output!
  3. Translation: HtmlTags is a model that is easily traversed. We leverage this to transform the html model into an export model that can then be translated to csv, pdf and other formats. Now we get exports for free on any view result simply by writing the guts of the view in HtmlTags!
  4. The ability to translate from our data model to the html model directly, in c#, instead of using a view engine with a mishmash of concerns (html & code).
    1. This avoids reshaping our data to get it to work easily with our view engine. Instead we can translate directly, on our view models, with the full power of c# at our fingertips.
    2. This also helps with refactoring, where the story is still very poor with VE content.
    3. Simply put: avoids the html / code disparity
  5. Composable conventions: we build upon the HtmlTags DisplayFor, LabelFor and InputFor to build up higher level concepts in our application, like a form field (composed of a label, input, validation etc). This gives us a one-stop spot to restructure the html for every form in our application, no more shotgun surgery on views!
    1. We also leverage concepts around menus, buttons, forms etc.
  6. Reusable across projects: we don’t have to try to copy/paste a bunch of templates, we simply drop in a dll with default conventions and configure from there.
  7. Avoid fat fingering html tag names, attributes etc. We don’t have to think as much about html anymore, amazing concept! Templated html still has issues with fat fingering since <div> is scattered everywhere, whereas we have a nice Tags.Div static entry point to generate a div tag. We do this for all tags we use. This is the kind of reuse between templates that is rather lacking from MVC2 templates without bee sting hell. It’s a great example of leveraging a statically typed language to fail fast: we don’t have to render a view to find out we fat fingered some html. All html tags, attributes etc are kept as constants collections.
  8. Avoid hard coded css classes: we make our class names configurable and put them right with the builder so any project can override them. Try doing that without a bunch of bee stings in a templated helper and a nightmare of classes coupled to the template to provide the constants in code. This reminds me of code behind hell from web forms days. Then try reusing that across projects.
    1. I’ve seen some really scary stuff in samples with templated helpers where people try to access attributes, property types, etc. and conditionally render sections. The only alternative is to put that code into helpers that are separated from the view, which leads to a coupling mess when trying to reuse the templates across projects.
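
To ground this, here is a minimal sketch of building html through the HtmlTags model. The tag structure, class name and field name are illustrative, not our actual conventions:

```csharp
using HtmlTags;

public static class FormFieldSample
{
    public static string Render()
    {
        // The "What": an html model built in c#, no view engine involved.
        var field = new HtmlTag("div")
            .AddClass("form-field")
            .Append(new HtmlTag("label").Text("Name"))
            .Append(new HtmlTag("input")
                .Attr("type", "text")
                .Attr("name", "Name"));

        // The "How": rendering only happens when the model is written out,
        // so tests can assert against the tag model instead of parsing html.
        return field.ToString();
    }
}
```

Because the model is plain c#, a test can assert on the tag’s children, classes or attributes directly rather than string-matching rendered markup.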

The next set of posts will cover examples of how we are using HtmlTags and how it’s paying dividends.

-Wes

4.28.2010

How I do VCS

After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow.

Note: This list is subject to change in the future.

Note: I edited this to make it more friendly; I was too opinionated when I first wrote it.

  1. Always use some form of version control for all aspects of software development.
    1. Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    2. Reverting / reapplying changes is absolutely critical for efficient development.
    3. The tools I use:
      1. Code: Hg (preferred), SVN
      2. Database: TSqlMigrations
      3. Documents: Sometimes in code repository, also SharePoint with versioning
  2. Always tag a commit (changeset) with comments
    1. This is a quick way to describe to someone else (or your future self) what the changeset entails.
    2. Be brief but courteous.
    3. One or two sentences about the task, not the actual changes.
    4. Use precommit hooks or set up the central repository to reject changes without comments.
  3. Link changesets to documentation
    1. If your project management system integrates with version control, or has a way to externally reference stories, tasks etc then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    2. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget.
  4. Ability to work offline is required, including commits and history
    1. Yes this requires a DVCS locally but doesn’t require the central repository to be a DVCS. I prefer to use either Git or Hg but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.
  5. Never lock resources (files) in a central repository!
    1. We have great merge tools now, merging sucked a long time ago, it doesn’t anymore!
  6. Always review everything in your commit.
    1. Avoid committing without reviewing the changes in each file.
    2. If you leave to make changes during a review, start the review over when you come back. Never assume you didn’t touch a file, double check.
      1. This is another reason why you want to avoid large, infrequent commits.
    3. Requirements for tools
      1. Quickly show pending changes for the entire repository.
      2. Default action for a resource with pending changes is a diff.
      3. Pluggable diff & merge tool
      4. Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.
    4. The central repository is not your own personal dump yard.
    5. If you turn on Visual Studio’s commit on closing studio option, I will be very sad :(.
  7. Commit (integrate) to the central repository / branch frequently
    1. I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.
  8. Never commit commented out code
    1. If it isn’t needed anymore, delete it!
    2. If you aren’t sure if it might be useful in the future, delete it!

      This is why we have history.

    3. If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it.
  9. Don’t commit build artifacts, user preferences and temporary files.
    1. Build artifacts do not belong in VCS; everything in them can be reproduced from the code. (e.g. bin/, obj/, *.dll, *.exe)
    2. User preferences are your settings, don’t override other team members’ preference files! (e.g. *.suo and *.user files)
    3. Most tools allow you to ignore certain files and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository!
  10. Be polite when merging unresolved conflicts.
    1. Count to 10, grab a stress ball and realize it’s not a big deal. Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
    2. Following the other rules, especially committing frequently, will reduce the likelihood of this.
    3. Don’t blindly merge and commit your changes. Make sure you understand why the conflict occurred and which parts of the code you want to keep.
    4. Apply scrutiny when you commit a manual merge: review the diff!
    5. Make sure you test the changes (build and run automated tests)
  11. Become intimate with your version control system and the tools you use with it.
    1. Avoid trial and error as much as possible: sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
    2. Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI.
      1. I like a combination of both TortoiseHg and the hg CLI to get the job done efficiently.
  12. Always tag releases
    1. Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    2. Create release branches to patch bugs and then merge the changes back to other development branch(es).
  13. If using feature branches, strive for periodic integrations.
    1. Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will reduce merge conflicts down the road.
    2. Feature branches are best when they are mutually exclusive of active development in other branches.
  14. Use and abuse local commits, at least one per task in a story.
    1. This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.
  15. Never commit a broken build or failing tests to the central repository.
    1. It’s ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).
  16. Avoid committing sensitive information
    1. Especially usernames / passwords
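
Several of the tips above translate directly into Mercurial configuration. This is a sketch, not a drop-in config: the issue-reference regex (tips 2 and 3) and the ignore patterns (tip 9) are assumptions you would adapt to your tracker and toolchain.

```
# In .hg/hgrc on the central repository: reject changesets whose commit
# message lacks an issue reference like "#123".
[hooks]
pretxncommit.issueref = hg log -r $HG_NODE --template '{desc}' | grep -Eq '#[0-9]+'

# In .hgignore at the repository root: keep build artifacts and user
# preference files out of version control.
syntax: glob
bin/*
obj/*
*.dll
*.exe
*.suo
*.user
```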

There is one area where I haven’t found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

-Wes

vcs 
3.4.2010

Adopting DBVCS

Identify early adopters

Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

Research

Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption. Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes.

Don’t settle on just one tool, identify several. Then work with the team to evaluate the tools. Have the team do some tests of the following scenarios with each tool:

  1. Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
  2. Add/drop tables
  3. Add/drop procedures/functions/views
  4. Alter tables (rename columns, add columns, remove columns)
  5. Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data.
  6. Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
  7. Create a copy of a database on demand.
  8. Backup/restore databases locally.
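
A change script for scenario 5 might look like the following T-SQL sketch; the table, columns and types are made up for illustration:

```sql
-- Move Products.Price from VARCHAR to DECIMAL, deciding explicitly how
-- each row is cast instead of relying on an implicit cast that would fail.
ALTER TABLE dbo.Products ADD PriceNew DECIMAL(10, 2) NULL;
GO

UPDATE dbo.Products
SET PriceNew = CASE WHEN ISNUMERIC(Price) = 1
                    THEN CAST(Price AS DECIMAL(10, 2))
                    ELSE NULL END;  -- decide what non-numeric data becomes
GO

ALTER TABLE dbo.Products DROP COLUMN Price;
GO

EXEC sp_rename 'dbo.Products.PriceNew', 'Price', 'COLUMN';
```

The key point is the CASE expression: the migration, not the database engine, decides what happens to data that doesn’t cast cleanly.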

Let the team give feedback and decide together what tool they would like to try out.

My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren’t cross platform anyway. IMO, stick to SQL based migrations.

Reconciling Production

If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod as they are now in sync. Commit this to your VCS of choice.

Add whatever schema-change tracking mechanism your tool requires to your development database. This often means adding a table to track the schema version of that database. Your tool should support doing this for you. You can add this table to production when you do your next release.

Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.

Say No to Shared Dev DBs

Simply put: you wouldn’t dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won’t be able to easily script out their own changes apart from those that others are working on.

First prod release

Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful you can schedule this to be run on production.

Evaluation

After your first release, evaluate the pain points of the process. Try to find tools, or modifications to existing tools, to help fix them. Leave no stone unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone: a good example was adding transactional support to TSqlMigrations. We ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors!

Another good example is generating change scripts. We have been manually making these for months now. I found an open source project called Open DB Diff and integrated this with TSqlMigrations. These were things we just accepted at the time when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else with development, never be afraid to try to find tools to make your job easier!

Enjoy

-Wes

db vcs 
3.4.2010

Database version control resources

In the process of creating my own DB VCS tool (tsqlmigrations.codeplex.com), I ran into several good resources that guided me along the way, both in reviewing existing offerings and in understanding the concepts a good DB VCS needs. This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence. In the next few posts I will try to explain how I used these to create TSqlMigrations.

Blog entries

Three rules for database work - K. Scott Allen

http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx

Versioning databases - the baseline

http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx

Versioning databases - change scripts

http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx

Versioning databases - views, stored procedures and the like

http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx

Versioning databases - branching and merging

http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx

Evolutionary Database Design - Martin Fowler

http://martinfowler.com/articles/evodb.html

Are database migration frameworks worth the effort? - Good challenges

http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx

Continuous Integration (in general)

http://martinfowler.com/articles/continuousIntegration.html

http://martinfowler.com/articles/originalContinuousIntegration.html

Is Your Database Under Version Control?

http://www.codinghorror.com/blog/archives/000743.html

11 Tools for Database Versioning

http://secretgeek.net/dbcontrol.asp

How to do database source control and builds

http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html

.Net Database Migration Tool Roundup

http://flux88.com/blog/net-database-migration-tool-roundup/

Books

Refactoring Databases: Evolutionary Database Design

Martin Fowler signature series on refactoring databases.

Book site: http://databaserefactoring.com/

Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut)

A good question/answer layout of common problems and solutions with database version control.

http://www.informit.com/store/product.aspx?isbn=032150206X

db vcs 
2.25.2010

Sexy Windsor Registrations

After working with FluentNHibernate and seeing examples of registries in StructureMap, I started craving the same thing for my registrations with Windsor. Our registrations often look like the following:

public static void Register(IWindsorContainer container)
{
  container.Register(Component.For<IFoo>().ImplementedBy<Foo>());
  container.AddComponent<IFoo, Foo>();
  ...
}

There are a few things I don’t like about this approach:

  1. Passing a container around through static methods is a hack.
  2. Ceremony of “container.” calls clutter the registry and impede readability.
  3. Why do I need so much ceremony to get to Component.For? “container.Register(Component.For” is tedious!

Note: this code is available on github. The registries are tested with nunit, so you can drop in whatever version of windsor and verify it works.

IWindsorInstaller to the static rescue

A registry needs a uniform entry point. Enter IWindsorInstaller, a rather undocumented feature that deserves more attention.

 public interface IWindsorInstaller
  {
    void Install(IWindsorContainer container, IConfigurationStore store);
  }

  // Container entry point
  container.Install(IWindsorInstaller installer);

The container has an install method that takes an instance of IWindsorInstaller. See this post for more details about IWindsorInstaller.

Adapting to Component.For

Now to fix readability, what if we could:
  public class SampleRegistry : RegistryBase
  {
    public SampleRegistry()
    {
      For<IFoo>().ImplementedBy<Foo>().LifeStyle.Singleton();
      For<IFoo>().ImplementedBy<Foo>();
    }
  }
To pull this off, the RegistryBase keeps a collection of registrations and adapts to the Component.For entry point to capture the registration before returning it. These registrations are stored in a Steps collection, more on why this isn't called Registrations later.
  public class RegistryBase : IWindsorInstaller
  {
    ...

    public ComponentRegistration<S> For<S>()
    {
      var registration = Component.For<S>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F>()
    {
      var registration = Component.For<S, F>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2>()
    {
      var registration = Component.For<S, F1, F2>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2, F3>()
    {
      var registration = Component.For<S, F1, F2, F3>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration<S> For<S, F1, F2, F3, F4>()
    {
      var registration = Component.For<S, F1, F2, F3, F4>();
      Steps.Add(registration);
      return registration;
    }

    public ComponentRegistration For(params Type[] types)
    {
      var registration = Component.For(types);
      Steps.Add(registration);
      return registration;
    }
    ...
  }

Installation

RegistryBase implements the IWindsorInstaller.Install method to add registrations to the container.

  public virtual void Install(IWindsorContainer container, IConfigurationStore store)
  {
    Steps.ForEach(s => container.Register(s));
  }

The application bootstrapper adds registries. This example assumes all registries are loaded into the container, though they probably would never have dependencies (chicken/egg paradox). I just like to abuse my container :)

  private static void LoadRegistries(IWindsorContainer container)
  {
    var registries = container.ResolveAll<IWindsorInstaller>();
    registries.ForEach(r => container.Install(r));
  }

Other adaptations

The registry also adapts to a few other entry points and captures their registrations.

  1. container.AddComponent
  2. container.AddFacility
  3. AllTypes.FromAssemblyNamed
  4. AllTypes.FromAssembly
  5. AllTypes.FromAssemblyContaining

Adapting to the unknown

In the event there is an entry point missing from the registry, it has a Custom method that takes an Action. This allows for access to the container as usual. This is captured as a deferred action that won't be executed until the registry is installed in the container. Hence the name "Steps" for registrations and custom actions.
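
Here is a sketch of how that deferred Custom step might look inside RegistryBase. This is my reading of the pattern, not the library’s actual code; `RegistryBaseSketch` and `_customSteps` are names I made up:

```csharp
using System;
using System.Collections.Generic;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

public abstract class RegistryBaseSketch : IWindsorInstaller
{
    // Custom actions are captured now and executed only at install time.
    private readonly List<Action<IWindsorContainer>> _customSteps =
        new List<Action<IWindsorContainer>>();

    public void Custom(Action<IWindsorContainer> action)
    {
        _customSteps.Add(action); // deferred: nothing touches a container yet
    }

    public virtual void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Replay every deferred step against the real container.
        _customSteps.ForEach(step => step(container));
    }
}
```

Modeling each step as a deferred action is what lets registrations and arbitrary container calls live in the same Steps collection.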

Show me the money

Here is a sample of different usages of the registry; of course, the entire fluent registration API is at your fingertips.

  public class SampleRegistry : RegistryBase
  {
    public SampleRegistry()
    {
      // Register a singleton
      For<IFoo>().ImplementedBy<Foo>().LifeStyle.Singleton(); // Extension methods to call property.

      // Register a single item
      For<IFoo>().ImplementedBy<Foo>();
      For(typeof (IFoo)).ImplementedBy<Foo>();
      AddComponent<IFoo, Foo>();

      // Custom actions if you want to access the original container API, with deferred installation via lambda expressions
      Custom(c => c.AddComponent<IFoo, Foo>());
      Custom(c => c.Register(Component.For<IFoo>().ImplementedBy<Foo>()));

      // Scan for types
      FromAssemblyContaining<SampleRegistry>().BasedOn<IFoo>();
      FromAssemblyContaining(typeof (SampleRegistry)).BasedOn<IFoo>();
      FromAssemblyNamed("GotFour.Windsor.Tests").BasedOn<IFoo>();
      FromAssembly(typeof (SampleRegistry).Assembly).BasedOn<IFoo>();

      // Forwarding types
      For<IFoo, Foo>().ImplementedBy<Foo>();
      For<IFoo, Foo, FooBar>().ImplementedBy<FooBar>();
      For<IFoo, Foo, FooBar, FooBar2>().ImplementedBy<FooBar2>();
      For<IFoo, Foo, FooBar, FooBar2, FooBar3>().ImplementedBy<FooBar3>();

      // Adding facilities
      AddFacility<StartableFacility>();
    }
  }

Notes: I have tested capturing registrations for all of the above scenarios, but I suppose there might be some deep dark portion of the registration API that might not work. This would happen if something creates a brand new registration, independent of the original captured one. I have yet to run into this; the design of the API is pretty rock solid as a builder that collects state. I left out AllTypes.Of.From since Of doesn’t return a registration, it is simply a call to AllTypes.FromAssemblyXyz().BasedOn() reversed and really isn’t very helpful.

ExtendedRegistryBase : RegistryBase

I added an extended set of registration points with ExtendedRegistryBase. This adds another layer of new fluent registrations for common scenarios, often involving convention based registration :) If you have additions, please add them in the comments and I will get them added.

  public class SampleExtendedRegistry : ExtendedRegistryBase
  {
    public SampleExtendedRegistry()
    {
      // Same as scanning above in SampleRegistry but much cleaner!
      ScanMyAssemblyFor<IFoo>();

      // Scan for all services of the pattern Service : IService
      ScanMyAssembly(Conventions.FirstInterfaceIsIName);

      // Scan for all services of the pattern Whatever : IService (register with first interface)
      ScanMyAssembly(Conventions.FirstInterface);

      // Next we could use some attributes to discover services, to register imports / exports :)
    }
  }

This registry class helps avoid the static calls to registries from my applications. Now I can scan for registries of a known type and install them into the container. The registries are much more readable with the ceremony gone. I know there was talk of adding something like this to the next version of Windsor/MicroKernel; I hope this is the direction that effort is headed. In the meantime, enjoy this as a fix for the cravings for a cleaner registry. I typically add one of these per project and let it control registration within that layer.



-Wes