Wednesday, February 4, 2009

Are Properties really a bad idea?


Like many experienced programmers, I have over the years been exposed to hundreds of different programming styles, read long philosophical threads about programming methodologies new and old, and been fortunate enough to work with some really expert programmers. For the most part, I have enjoyed the dialogue, and somewhere along the way I learned to let go of the small stuff, unless, of course, the small stuff I would otherwise overlook in the right context is at risk of becoming detrimental in the wrong context.




I can remember when I was learning to sail, I kept calling those things that hung off the side of the boat bumpers, when in fact I should have been calling them fenders. My instructor immediately corrected me and said, "It's OK, that really used to bother me, but I've learned to let that one go." What he had learned through unfortunate experience was that students needed to understand immediately how to tie fenders on correctly, well before the vessel reached the slip, so as not to damage the hull; what we called them at the moment of impact was less important to him. Besides, I would learn the correct terminology with more practice soon enough.




So back to the subject at hand: properties. Properties are one of the small things I've learned to let go of over the years, because there is so much widespread confusion over them, and they have become so ubiquitous, that it would require a huge investment of resources and time to get emotionally tied to their "proper" use one way or the other, unless, as I mentioned earlier, I feel they are endangering the codebase.




If you search for the effective use of properties, you'll find literally thousands of conflicting statements about why properties exist, when to favor properties over member variables, and even whether they are evil and should be excised from all properly object-oriented code. Yet in object-oriented programming classes, we're told that people have certain properties like a name, sex, or origin, and that we should immediately start adding them to the Person base class (another thing I'm learning to let go of...) so that whatever extends that Person class will pick up those common properties. OK, fair enough; this is exactly how I learned about properties too.




So are properties really a bad idea?


My thesis is that as long as we don't violate some important design principles, properties are absolutely fine under most circumstances. What are properties, exactly? Well, if you look up property on Wikipedia, you'll find that a property is defined as a kind of hybrid class member, somewhere between a data member and a method. In the spirit of keeping it simple, I have highlighted some rules about properties and how to use them without causing too much damage.




Maintainability:


Do the properties you are about to add offer more or less maintainability? In other words, are you writing a class to DoSomething(), or are you writing a class that requires the dependent class to get all the information from you and then DoSomething()? Good object-oriented classes DoSomething(), not the other way around. It is generally acceptable to return an object or interface using property accessors. For instance, if our Person class is used by our personnel system, we expect the personnel system to ask the person for their address and to have the person return the address. What we don't want is for the personnel system to ask the person for the street, city, state, and zip code and then build the address itself; that would be counterintuitive.
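As a minimal sketch of the idea (the Person and Address classes here are hypothetical, just to make the point concrete):

// Hypothetical Address class; the person hands this back as a whole.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

public class Person
{
    private readonly Address address = new Address();

    // Good: callers ask the person for its address as a single object,
    // rather than the person exposing Street, City, State, and Zip
    // individually and forcing the personnel system to assemble them.
    public Address Address
    {
        get { return this.address; }
    }
}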




Object Oriented or Procedural Programming:


Are you really writing OO code if you have properties everywhere? Consider, for instance, that procedural programming relies on local knowledge of objects to perform tasks, while OO programming is based on use cases and conversations. Do getters and setters come up during conversations? I will defer to the experts, Kent Beck and Ward Cunningham, for more detailed information, but I encourage you to think about the classes you need and the responsibilities they must perform, as well as how they will collaborate. Beck and Cunningham developed some really good practices using CRC cards to model "conversations between classes".




Encapsulation:


While properties offer better encapsulation than fields do, you shouldn't necessarily start blindly replacing fields with properties in every case. The more accessors you create, the more dependencies you create and the more you risk exposing the implementation details of your class. Consider your DoSomething() methods: do they take reference or value parameters? Also consider the use of internal and private properties carefully; do you really need them?
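One common way accessors leak implementation details is by handing out live internal collections. A hedged sketch, using a made-up Order class:

using System.Collections.Generic;

public class Order
{
    private readonly List<string> items = new List<string>();

    // Risky: returning the live List lets callers mutate our internals
    // and couples them to the fact that we store a List at all.
    public List<string> Items
    {
        get { return this.items; }
    }

    // Safer: expose behavior and keep the storage detail private.
    public void AddItem(string item)
    {
        this.items.Add(item);
    }
}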



State:


Getters and setters, which "report" (get) or "change" (set) the state of some private member variable, shouldn't change the state of the object itself, or of any other unrelated object for that matter. Remember when I said that a property is somewhere between a method and a field or member? This is one of the potential hazards of treating your properties as methods. Code in getters or setters that changes the state of the object itself will cause unbelievable headaches for adopters. Properties should be reliable: if you call any property repeatedly, you should be able to depend on the value returned from the getter.
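A small illustration of the hazard, using a hypothetical Counter class:

public class Counter
{
    private int value;

    // Bad: merely reading the property mutates the object, so two
    // consecutive reads return different values.
    public int NextValue
    {
        get { return ++this.value; }
    }

    // Good: the getter is side-effect free; mutation happens in a method.
    public int Value
    {
        get { return this.value; }
    }

    public void Increment()
    {
        this.value++;
    }
}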




Performance:


Impact on performance is another potential pitfall of treating your properties as methods. Performing lengthy or costly operations in getter or setter methods can significantly impact performance. A large custom collection or web control that is dynamically built or retrieved inside a getter or setter, for instance, will get slower as objects are added and faster as objects are removed. This can lead to a lot of head scratching for adopters, not to mention hours of productivity lost troubleshooting mysterious performance issues that appear and disappear for no apparent reason.
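A sketch of the pitfall and one way out (the ReportControl class and its build step are invented for illustration):

using System.Collections.Generic;
using System.Threading;

public class ReportControl
{
    private List<string> cachedRows;

    // Risky: every read pays the full build cost, so code that innocently
    // touches this property in a loop slows down as the data grows.
    public List<string> RowsRebuiltEachTime
    {
        get { return BuildRows(); }
    }

    // Safer: build once, then serve the cached result on later reads.
    public List<string> Rows
    {
        get
        {
            if (this.cachedRows == null)
            {
                this.cachedRows = BuildRows();
            }

            return this.cachedRows;
        }
    }

    private static List<string> BuildRows()
    {
        Thread.Sleep(100); // stand-in for an expensive query or layout pass
        return new List<string> { "row1", "row2" };
    }
}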




So should you use accessors or not? In my experience, when you start working for large corporations, refactoring decisions sometimes become more about ROI than style, and quality is redefined as the number of lines of code over defects, or the number of unit tests over uncovered blocks, rather than the elegance of the design or the implementation of industry best practices. Style and elegance often give way to brute force. Human beings are certainty freaks, and we have to balance planning and risk against our "drop dead" delivery deadlines. There is never enough time up front, but then again, nothing lives forever anyway, right? To sum it up: if you are going to use accessors, you need to understand what they are for and, for goodness' sake, use them properly, because, just to tie the metaphors together, a damaged hull is expensive to fix, and if the hole is big enough you may never fully trust the vessel again.




Beck, K. & Cunningham, W. (1989). A Laboratory for Teaching Object-Oriented Thinking. OOPSLA '89 (Apple Computer whitepaper). http://c2.com/doc/oopsla89/paper.html

Wagner, B. (2008). Choose Between Methods and Properties. Visual Studio Magazine, p. 40. http://visualstudiomagazine.com/columns/article.aspx?editorialsid=2719

Holub, A. (2003). Why getter and setter methods are evil. JavaWorld, http://www.javaworld.com





Software Configuration Management Principles

SCM: Software Configuration Management serves as a mechanism for communication, change management and reproducibility.

Team Project: A TFS Team Project is a collection of artifacts used by a team to track a related set of work. You must create at least one Team Project before you can begin working in TFS.



In order to make decisions about how to best organize your projects in Team Foundation Server (TFS), it is important to understand that TFS best practices were developed around the concept of a "Team Project".

Scalability of Team Projects: Scalability is limited by the complexity of the work item types. MSF for Agile Development has been shown to support 500 team projects per server, while MSF for CMMI Process Improvement has been shown to support 250 team projects per server.

Common strategies used to structure team projects

Team Project Per-Application (Most Common – Supports both large and small applications as well as multiple releases and parallel development).

• Create one team project for each application being developed.

• Releases are manifested in TFS as source branches or as different nodes in the iteration hierarchy.

Team Project Per-Release (Works well for large teams working on long-term projects).

• Every major application release starts a new team project.

Team Project Per-Team (Provides central control and monitoring of the activities of software development teams).

• Projects aligned with team boundaries in the organization.

• Cluster together applications developed by the same team.

Isolation Mechanisms in VSTF

Branch: An isolation mechanism that allows one or more people to work on a set of files in isolation. A branch of a file is a revision of the file that uses the trunk version as a starting point and evolves independently. Branches are labeled with a minor version number that corresponds to the major version number in the trunk. Typically, most of the work happens in the trunk or main branch, which is branched at every release. A small amount of work might still happen in the release branches, and it is typically merged back into the main branch.

Note: Branching is the only isolation mechanism that provides collaboration and version changes among a group of developers.
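For example, creating a release branch from the command line might look something like the following sketch (the server paths are made up; tf branch pends the change, which you then check in):

tf branch $/MyProject/Main $/MyProject/Release-1.0
tf checkin /comment:"Branch Main for the 1.0 release"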

Workspace: Allows individual developers to work in isolation from others. This is where the developer keeps all the artifacts needed to accomplish a task. Workspaces are normally associated with versions of the artifacts, and they include any item that appears in the Source Control Explorer. Typically, the following are stored in source control:

• Source Code

• Test Code

• Library Files (.cs)

• Library Files for 3rd party or shared libraries (.dll)

• Scripts

Workspaces are associated with one or more code-lines.

Shelveset: When a file is changed in the workspace, source control marks it as being edited by the person who edited the file. However, this change is not reflected in source control until the file is checked in. There are scenarios where the user wants to save all the changes made to the workspace but not check them in; for example, the developer might want to send the changes to a peer or manager for a code review. For this purpose, the changes can be saved as a shelveset, which is stored in source control. Shelvesets are not incorporated into the source tree, but they can be retrieved on demand and their contents can be viewed.
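From the command line, shelving and retrieving changes might look something like this sketch (the shelveset name is made up):

tf shelve CodeReview-Login /comment:"Login changes for review"
tf unshelve CodeReview-Login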

Typical Branching Strategies in VSTF

Branching strategies begin with thinking about where you need isolation that provides version control and the ability for multiple people to collaborate. Compose the isolation scenarios below to form your overall branching strategy.

1. Release Isolation – If you need to work on multiple releases in parallel you may want to isolate them in their own branch.

a. Note: the term "release" does not necessarily refer to a new version of the product; it may refer to releasing to a test team from a dedicated branch so issues can be fixed independently while development continues on the development branch.

2. Feature Isolation – Isolating functionality that is experimental or risky enough to merit its own branch. Developers can collaborate on a feature without exposing the application to instability.

3. Team Isolation - Sub-teams work in isolation from each other, providing the ability to collaborate without being exposed to breaking changes that other teams are working on.

4. Integration Isolation – A staging area for merges. Merges are often destabilizing, and maintaining a branch where active development does not occur is often beneficial.

Branching Considerations

Over-branching is an easy mistake to make. Weigh the benefits of isolation against the costs so you can make appropriate decisions about the amount of process overhead you're willing to incur in your environment.

1. Merging – Moving changes between branches. Changes can easily get merged incorrectly, resulting in a destabilized environment. This is one reason for creating a staging area for merges (see Integration Isolation above).

2. Latency – It takes time to move changes between branches. A typical merge operation involves stabilizing the source branch, executing the merge, resolving conflicts, and then stabilizing the target branch. On large projects, scenarios taking days, weeks, or even months are not uncommon.

Structuring your Branching Hierarchy

Structuring your branching hierarchy is important because merging along the hierarchy is easier than merging across the hierarchy.


1. Merging along the branch hierarchy - TFS tracks information about the relationship between branches in order to allow 3-way merges.

2. Merging across the branch hierarchy - If the relationship information is not present, TFS doesn't know about modifications made in the different branches. Without this information, TFS assumes changes have been made in both branches, resulting in manual merges that are otherwise unnecessary. The first time you merge across the hierarchy, the relationship is established and future merges are much simpler.

Note: Cross hierarchy merging is not supported through the GUI even after the relationship is established. Cross hierarchy merges can only be done via command line (tf.exe).
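A cross-hierarchy (baseless) merge from the command line might look something like this sketch (server paths are made up):

tf merge /baseless $/MyProject/FeatureBranch $/MyProject/Main /recursive

Once that first baseless merge is checked in, TFS records the relationship, and subsequent merges between the two branches behave like ordinary merges.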

Choose a hierarchy that provides the appropriate isolation strategy while still supporting your merge hierarchy.

Branching concerns: A company’s branching model should match the business model. A company that wants frequent product releases may have complex branching structures and need time-intensive merges. Another company may have many customers using independent releases and few branches.

Labels in TFS: Labels in TFS are powerful in that they can contain versions of files from different points in time.



The canonical example:

1. You label a build.
2. You find bugs.
3. You fix and re-label only the few files that changed.

The label now represents a collection of points in time, not a single point in time.

Individual versions of files can be assigned labels, and all files with the same label form a release group. Unlike VSS, the TFS source control repository does not support linking to an item from multiple places in the source folder structure, nor does it allow an item to be "pinned" (allowing different references to the same file from different directories to point to different versions in a way that cannot be further edited). - Wikipedia

Geographically distributed TFS structures: The TFS Proxy server provides an experience for a remote user that is comparable to that of on-site users, so factoring geography into your decision is not necessary.


Common SCM Definitions

You can find a list of SCM Common Definitions here

References

Branching and Merging Primer (Whitepaper)
Team Foundation Branching Guidance (Whitepaper)
Microsoft (2007). V1 TSP VSTF Process Guidance.
Berczuk, S. & Appleton, B. (2002). Software Configuration Management Patterns: Effective Teamwork, Practical Integration. Boston: Addison-Wesley.

External Links


Branching Structures at Microsoft

The Terminology of Branching

A Branching and Merging Primer

Branch Structure in Developer Division at Microsoft

Software Configuration Management with Visual Studio Team System

The purpose of this document is to provide a model for implementing Software Configuration Management that is in line with the business model and scalable enough to gain broad acceptance among development team members.

Goals

Define SCM concepts and principles within the context of Team Foundation Server to enable better communication and decision making.
Define a common methodology for implementing SCM based upon industry best practices and the development team's scenarios.
Guidance and Policies

Software Configuration Management (SCM)

SCM Guidance

SCM Policies and Naming Conventions (Coming Soon...)

Sarbanes-Oxley (SOX) Compliance (Coming Soon...)

SCM Common Definitions

SCM Principles and Best Practices

Test Last Development: A primer for unit testing of legacy code

Test-driven development, or TDD, is a proven and effective means of software development, but it assumes you are beginning with a blank slate. What is not well understood about TDD is that some of its approaches, when applied to legacy code during Test Last Development, or TLD, may potentially harm your code base.

If you have ever developed against legacy code then you have no doubt come across applications that were so tightly coupled that refactoring of the original code was ruled entirely off limits. This document will attempt to provide you with best practices around creating unit tests for legacy code, with the goal of enabling your group to progress toward developing the code coverage you need to have confidence in your legacy code base.

Before you begin

While creating tests for legacy code is important to future development, it is even more important to take care when creating tests for legacy code that is already in a production environment. The following items should be considered before writing any new tests.

What are the unknown dependencies or prerequisites? There are often undiscovered dependencies or prerequisites that methods assume exist, and you will need to track these carefully. The most obvious and easiest to uncover are references to 3rd-party code libraries, since the code will not compile if these are not present; but dependencies can also mean configuration settings that must be set up before testing can occur, and you should understand which settings must be in place for the tests to work.

Are there specific environment variables and local-only simulations required? You may need to simulate the production environment in order to run the legacy code. This can involve changing connection strings and updating configuration files. You'll need to document and track these changes carefully, because you want to ensure they don't get checked in to the code base, and you'll also want to ensure you comment them in your tests.

Can you change the code? In TDD you create unit tests as you go, which means that testability is an inherent consideration; in Test Last Development, however, testability is rarely a consideration. It is necessary to understand and get agreement on whether or not you can make changes to the code as you are creating your unit tests. For instance, if you are creating test cases between builds, it may not be possible to change the code, and you'll need to work within that constraint. If, however, you are between versions, you should get consensus to make small changes to the code.

Is the code testable in its current state? Does the code consist of large methods that do many complex tasks, or are there a large number of subroutines that are not easily testable? Oftentimes, making code more testable involves breaking up large, complex methods into many smaller ones. Sometimes you will see code full of subroutines which return void and are, by traditional TDD standards, for the most part un-testable. You will need to find ways of ensuring code compatibility along the way; one way of doing this is to keep the original method signatures the same.

Focus on what is possible rather than what isn't. Don't get caught up in the fact that you can't test every line of code; in fact, even when practicing Test Driven Development it isn't always possible to achieve 100% code coverage. Just start with what you can reasonably test and go from there. Once you begin creating your first tests, you'll find yourself identifying and prioritizing code that was previously thought un-testable.

Creating your tests

Picking a component to test. For legacy code, you want to start by identifying the largest and most sweeping test you can make, the one that will give you the most "bang for the buck". This may mean identifying the single method that does the most critical job in the component, but it can also mean identifying the method or object most commonly used by an application. If you are testing a stand-alone application, you may want to start with the Main method, since a failing Main will cause the entire application to break. If you are testing a library, you will want to identify the chief function the library performs and test that functionality first. This may be the factory methods or conversion methods that define the library's purpose.

Create fixtures. Once you have created your first test, you will begin to identify a common framework for creating other tests, and hopefully you will have identified common dependencies. Fixtures are the setup and teardown methods that run in your test harness before and after your unit tests. The setup method prepares the environment for your tests, and the teardown method performs cleanup.
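As a sketch of what fixtures look like in MSTest (the class under test and the data it needs are hypothetical):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LegacyOrderTests
{
    [TestInitialize]
    public void Setup()
    {
        // Prepare whatever the legacy code assumes exists, e.g. insert
        // the rows the method under test expects to find in the data store.
    }

    [TestCleanup]
    public void Teardown()
    {
        // Remove or reset anything Setup created so tests stay independent.
    }

    [TestMethod]
    public void ProcessOrders_SucceedsAgainstPreparedData()
    {
        // Exercise the legacy method here and assert on the results.
    }
}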

Cover as many tests as possible. The TDD method mandates that you write only enough code to make a test pass, fail, then pass again, testing method boundaries as you go. When you are dealing with thousands of lines of legacy code, however, the TLD method mandates that you get as much coverage as possible in the shortest amount of time. In TLD your methods are already written, and many times already in production, so initially you only want to focus on passing tests; boundary testing should wait until you are ready to re-factor the legacy code.

Debugging and re-factoring your code. If the legacy system is proven, most of your tests will pass initially; however, any time you are creating tests for a previously untested code base, you are likely to find a few bugs hidden in the code. When this happens you will need to decide whether or not to fix them. TDD's "clean as you go" methodology says to re-factor until the test passes, but with legacy code this isn't a good practice: you may introduce bugs into the rest of the code base that go unnoticed due to the lack of coverage. Many times you don't have the budget or the time to re-factor legacy code, and many times the mandate will be to keep legacy code as it is. You'll want to avoid re-factoring legacy code during the test creation process as much as possible, or keep it to an absolute minimum.

Testing top-down functionality. TDD requires you to analyze the task at hand using top-down problem decomposition and come up with a list of tests based upon those scenarios. Legacy code has a defined set of tasks, so you can immediately start writing your tests at the highest level rather than looking at individual methods. You may find you are writing too many tests and need to move some of them into the fixtures later, but that's OK. Initially, take note of what the application is doing and write tests for each thing it does first, then move shared tests into fixtures later.

Considering alternate pathways through the code. Try to organize your tests into cohesive groups. Think of testing related functionality, and divide it by libraries or packages, then classes, then methods, and finally lines of code. In other words, start by writing tests for critical functionality in a single library. Start with one critical class in that library, with the goal of writing tests for every class in the library and every method in each class, and with the ultimate goal of having coverage for each line of code in each method.

Testing against the data store. Many times you will find numbers of subroutines that return void. While this may seem counterintuitive, you'll still need to find creative ways of testing that these subroutines do what they promise. Sometimes these subroutines change the state of objects, and sometimes they are linked to data operations like executing stored procedures or inserting data into a data store. While TDD methodology insists that you mock up objects or separate the data store from the tests entirely, this is not always possible or practical in legacy code without a major refactoring, and as stated earlier, we want to avoid refactoring until we have sufficient code coverage to ensure success. This may mean you will need to test directly against a data store until you have enough coverage to begin refactoring or mocking up data. It may make tests more complicated, but ignoring methods that return void, or depending on preexisting data, is not an option.

Don't depend on preexisting data. Depending on pre-existing data in a data store is not desirable and may yield false results: if the pre-existing data is deleted or drifts from the expected state, all of the tests that depend on it will yield false results. If you need to test against a data store, you can use fixtures to insert or prepare the data in the correct state before each test, and to remove or reset the data after the test is finished, as in the MSTest sketch above. Preparing your data first will ensure you always get the results you expect.

Testing dead methods. During the process of testing, you will come across dead methods, methods whose functionality is no longer needed. It may surprise you to see how much of your code is no longer used. If you come across suspect code, you can easily verify it with TFS by searching for all references to that code. Once you have verified the code is dead, you can safely remove it. The less code you have to maintain, the better off you and everyone responsible for the code will be.

In Conclusion

Don't worry about achieving 100% code coverage. Think of the coverage percentage as a moving target; after all, each time the component is changed or a method is added, your percentage will also change. Pick a target number that will let you re-factor the legacy codebase with confidence, and work your way up, ensuring you have covered the critical functionality in each component along the way. Once you have reached your goal number, confidence in your code will increase, and the amount of time you spend looking for bugs in the wrong places will decrease significantly.

Unity Container

If you just read my page on Dependency Injection you may be asking yourself...

Why use a Unity container instead of a factory?

There are a lot of reasons why you might consider using containers in your application instead of factories, not the least of which is that factories are type-specific: factories produce related types of objects, while containers can hold more than one type of object or service, or a combination of both.

Configuration: A container's services can be created either declaratively (via configuration files) or via .NET attributes. Your consuming objects don't need any knowledge of how to construct the services.

The number of types is often hardwired into factories, while containers can produce any number of different types.

Containers provide services which can be used across various applications in an enterprise without having to embed specific logic into an application. In .NET, for instance, the way containers can manage lifetimes for you enables you to create your containers on application start and make them available application-wide.

Unit Testing: The loose coupling is ideal for unit testing. If you use composition in your programming model, you have the option of creating test containers with mock data which your objects can execute against, instead of depending on hitting production systems.



Unity Container Example:

Creating a basic Unity container is quite simple: you need the class that you want to create and the interface it provides. In this case we want our Unity container to hold a FooModel, which provides several public methods to us through the IFooModel interface. Creating our Unity container goes something like the example below.

/// <summary>
/// Handles the initialization of the test cases.
/// </summary>
[TestInitialize]
public void Init()
{
    IUnityContainer container = new UnityContainer()
        .RegisterType<IFooModel, FooModel>();

    this.unityContainer = container;
    this.model = this.unityContainer.Resolve<IFooModel>();
}

The container and model initialized above will be available to each of the test methods in our test class.





Say, however, that we want our container to manage a singleton for us, and that FooModel now takes a dependency through its constructor. The example below will turn our FooModel into a singleton and will handle injecting the dependency for us.

/// <summary>
/// Handles the initialization of the test cases.
/// </summary>
[TestInitialize]
public void Init()
{
    // Makes FooModel a singleton.
    IUnityContainer container = new UnityContainer()
        .RegisterType<IFooModel, FooModel>(new ContainerControlledLifetimeManager());

    // Inject a FooDependencyObject.
    container.Configure<InjectedMembers>()
        .ConfigureInjectionFor<FooModel>(new InjectionConstructor(new FooDependencyObject()));

    this.unityContainer = container;
    this.model = this.unityContainer.Resolve<IFooModel>();
}



Simple, and you can see that the .Resolve method will produce our FooModel singleton.

There are two types of lifetime managers, ContainerControlledLifetimeManager and ExternallyControlledLifetimeManager.

ContainerControlledLifetimeManager will implement a singleton, returning the same instance of the object each time you call .Resolve or .ResolveAll

ExternallyControlledLifetimeManager will return the same instance of the object, however the container holds a “weak reference” to the object. Basically, the object is subject to garbage collection when it goes out of scope.

If you don't specify a lifetime manager, you will get a new object each time you resolve.
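A quick sketch of the difference (statements as they might appear inside a test method, reusing IFooModel/FooModel from above):

// ContainerControlledLifetimeManager: the same instance every time.
var singletons = new UnityContainer()
    .RegisterType<IFooModel, FooModel>(new ContainerControlledLifetimeManager());
bool same = ReferenceEquals(
    singletons.Resolve<IFooModel>(), singletons.Resolve<IFooModel>()); // true

// No lifetime manager: a new instance per Resolve call.
var transients = new UnityContainer()
    .RegisterType<IFooModel, FooModel>();
bool fresh = !ReferenceEquals(
    transients.Resolve<IFooModel>(), transients.Resolve<IFooModel>()); // true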

Service Locator

If you just read my page on Unity Containers, you may be wondering....
What is a Service Locator all about?
Well, it's a container too, but it doesn't instantiate anything or manage any lifetimes for you. For those of you who work on projects that favor inheritance over composition, it offers decoupling from concrete types, and it's still better than a factory.
Say, for example, you have a class that depends on two different services whose types are specified at design time; ordinarily, your dependent class must know how to construct each of those services. A factory won't work here, because Service A and Service B are two different types of objects. What a Service Locator does is hold a reference to Service A and Service B for you, so your dependent class only needs to know how to call the Service Locator, passing in the type of service it needs, and the Service Locator will return an instance of that service.
You can actually combine Service Locator and Unity Container by putting a Unity Container inside a Service Locator. It's not the most practical thing I've heard of, but it can come in handy if you run into a situation where you find yourself with two separate Unity Containers sometime in the future.
Unity Container in a Service Locator?
I actually used this approach when I was refactoring using Test Last Development. I knew that my classes would be refactored from inheritance to composition and that I would use dependency injection at some point, but I wasn't quite sure how many things I would inject. So I injected a Service Locator with a Unity Container full of all of my service objects inside, then I made the tests pass, and finally I was able to gradually abstract the data, create the interfaces, and get rid of the Service Locator altogether.
Refactoring is an entirely different subject, and you can read my page on Test Last Development if you would like more information on my take.






So what does Service Locator look like?
Well, first the bad news: if you want to inject a Unity container, you have to create a custom service locator. Yep, you have to create your own by inheriting from ServiceLocatorImplBase. Here's an example of a service locator; you'll need to override methods as needed in your implementation.
using System;
using System.Collections.Generic;
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.Unity;

public class MyServiceLocator : ServiceLocatorImplBase
{
    private IUnityContainer container;

    public MyServiceLocator(IUnityContainer container)
    {
        this.container = container;
    }

    protected override object DoGetInstance(Type serviceType, string key)
    {
        return this.container.Resolve(serviceType, key);
    }

    protected override IEnumerable<object> DoGetAllInstances(Type serviceType)
    {
        return this.container.ResolveAll(serviceType);
    }
}

Next, you'll need a Unity container to add to the service locator.

/// <summary>
/// Handles the initialization of the test cases.
/// </summary>
[TestInitialize]
public void Init()
{
    // Makes FooModel a singleton.
    var container = new UnityContainer()
        .RegisterType<IFooModel, FooModel>(new ContainerControlledLifetimeManager());

    // Inject a FooDependencyObject.
    container.Configure<InjectedMembers>()
        .ConfigureInjectionFor<FooModel>(new InjectionConstructor(new FooDependencyObject()));

    this.myServiceLocator = new MyServiceLocator(container);
}

That's it; your Service Locator has a Unity Container inside.
If you want an instance of a class in your Unity Container, you just ask your Service Locator for it.

this.myServiceLocator.GetInstance<IFooModel>().DoStuff();

Regards -c

Dependency Injection / Inversion of Control Patterns

Unity Application Block.
Unity with Service Locator
From MSDN, “The Unity Application Block (Unity) is a lightweight, extensible dependency injection container”
What is a Dependency Injection Container Anyway?
For those of you who work on projects that follow a composition model over an inheritance model, containers are an ideal solution. A DI container, sometimes referred to as an inversion of control container or IoC container, is a container which "contains" and manages some type of object abstraction. The container takes care of instantiation, injecting dependencies, and sometimes singleton lifetime management, as well as supplying "cross-cutting" services to the objects being hosted inside it.
From MSDN, “A cross-cutting service is defined as a service that is generic enough to be applicable across different contexts, while providing specific benefits. An example of a cross-cutting service is a logging framework that logs all method calls within an application.”
Inversion Principle.
Inversion refers to inverting or "flipping" the way you think about your program design in an object-oriented way. If your programming model interrupts the normal control flow, then you are probably using some sort of inversion principle in your design.
Inversion can occur in event-driven programming. In the case of, say, custom ASP.NET user controls, the "bubble up" of events from the user control to the page can be thought of as a form of inversion, because it reverses the normal control flow, in which the page calls the control and "observes" an event on the custom control by registering an event handler on the calling page.
Another form of inversion is Dependency Injection. This is one of my favorite patterns, and the term was first coined by Martin Fowler. Basically, if an object needs access to another object (we'll call it a service for this text), the service is supplied to the object in one of three ways:
· Interface injection: The service provides an interface which consumers must implement. The interface exposes specific behaviors at runtime.
· Setter injection: The dependent object exposes a "setter" method to inject the dependency.
· Constructor injection: Dependencies are injected through the class constructor.
Inversion takes yet another form with the use of the Factory pattern. In the Factory pattern, the factory takes care of any initialization steps for the object, and it also takes care of producing different kinds of objects for the dependent object. The dependent object therefore no longer has to depend on each type of concrete object, knowing how to construct each one, and instead looks to the factory to take care of these repetitive tasks for it, thereby inverting control.
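As a minimal sketch of constructor injection, the most common of the three (all types here are invented for illustration):

// A hypothetical service and its consumer.
public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message)
    {
        System.Console.WriteLine(message);
    }
}

public class OrderProcessor
{
    private readonly ILogger logger;

    // The dependency is handed in; OrderProcessor never constructs
    // a concrete logger itself.
    public OrderProcessor(ILogger logger)
    {
        this.logger = logger;
    }

    public void Process()
    {
        this.logger.Log("Processing order...");
    }
}

At the composition root you might write new OrderProcessor(new ConsoleLogger()), or let a container such as Unity resolve the whole graph for you.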

Using a Unity Container with Service Locator for Test Driven Development

So how do I use a unity container for Test Driven Development?
This is the really cool part of DI containers: since you're using Dependency Injection via interfaces, you can change behavior at runtime.
Since I'm currently working on an ASP.NET MVC prototype, I'm going to demonstrate unit testing a controller which uses Dependency Injection.
In my case I'm using ASP.NET MVC controllers that have a dependency on certain actions. I wanted to keep the controller as clean as possible, so I created an action interface, ISearchAction.
Let’s call it HomeController for now.

/// <summary>
/// HomeController class.
/// </summary>
public class HomeController : SecuredBaseController
{
    /// <summary>
    /// Search Action Interface.
    /// </summary>
    private ISearchAction searchAction;

    /// <summary>
    /// Constructor for Home Controller.
    /// </summary>
    /// <param name="searchAction">Home page search action.</param>
    public HomeController(ISearchAction searchAction)
    {
        this.searchAction = searchAction;
    }

    /// <summary>
    /// Load view Index.
    /// </summary>
    /// <returns>Action Result.</returns>
    public ActionResult Home()
    {
        ViewData["Title"] = "Summary";
        ViewData["Message"] = "Welcome to ASP.NET MVC!";

        //// get valid review status
        SubmissionStatus[] submissionStatusList = this.searchAction.GetSubmissionStatusForAuditor();
        SelectList reviewStatusList = new SelectList(submissionStatusList, "Id", "InternalDisplayName");
        ViewData["ReviewStatusId"] = reviewStatusList;

        //// primary countries
        CountryRegion[] countryList = this.searchAction.GetCountries();
        SelectList countrySelectList = new SelectList(countryList, "Id", "Name");
        ViewData["CountryId"] = countrySelectList;

        //// get programs
        Program[] programs = this.searchAction.GetFilteredProgramList();
        SelectList programSelectList = new SelectList(programs, "Id", "Name");
        ViewData["ProgramId"] = programSelectList;

        //// get activity types
        Item[] activityTypes = this.searchAction.GetActivityTypesForAuditor();
        SelectList activityTypesSelectList = new SelectList(activityTypes, "Id", "Name");
        ViewData["ActivityTypeId"] = activityTypesSelectList;

        return View("ViewPoeSubmissions");
    }
}

OK, so you can see that we have a search page that calls a search action interface, with all the code contained in a SearchAction class. The SearchAction class is in, you guessed it... a Unity Container. I won't go into the factory class that creates the Unity container right now; I'll write another article on that. For now, the Unity container is actually created in Application_Start() and looks something like: ControllerBuilder.Current.SetControllerFactory(UnityContainers.Instance.Resolve<IControllerFactory>());
So how do we test this? As I mentioned above, the Unity container is created in Application_Start() for our web site, but since the controller takes an interface, I can mock that up using a Repository pattern in the unit tests.
Here is what the SearchAction class looks like:
/// <summary>
/// Search action class for the controller.
/// </summary>
public class SearchAction : ISearchAction
{
    /// <summary>
    /// Taxonomy model interface.
    /// </summary>
    private ITaxonomyModel taxonomyModel;

    /// <summary>
    /// Activity request model interface.
    /// </summary>
    private IRequestModel requestModel;

    /// <summary>
    /// Search Action.
    /// </summary>
    /// <param name="taxonomyModel">Taxonomy Model Interface.</param>
    /// <param name="requestModel">Request Model Interface.</param>
    public SearchAction(ITaxonomyModel taxonomyModel, IRequestModel requestModel)
    {
        this.taxonomyModel = taxonomyModel;
        this.requestModel = requestModel;
    }
}


Notice the constructor for SearchAction takes two interfaces, ITaxonomyModel and IRequestModel.
The GetCountries() method of the SearchAction class looks like this:
/// <summary>
/// Get list of primary countries.
/// </summary>
/// <returns>Country region collection.</returns>
public CountryRegion[] GetCountries()
{
    // Grab the instance from the Activity Request Process.
    return this.taxonomyModel.GetCountries();
}

It's actually the TaxonomyModel class that calls the web service and takes care of the GetCountries() code, so I can create a class called TestTaxonomyModel that implements the ITaxonomyModel interface, and in that test class I can put mock code in the GetCountries method that returns a subset of test data.





The RequestModel class works the same way.
/// <summary>
/// Search for a Claim.
/// </summary>
/// <param name="form">Form Collection.</param>
/// <returns>Activity Based Claim Search Result.</returns>
public ClaimSearchResult Search(FormCollection form)
{
    // Create some search criteria from the form data.
    ClaimSearchCriteria criteria = this.CreateSearchCriteria(form);

    return this.requestModel.SearchClaims(criteria);
}
I can create a class called TestRequestModel that implements the IRequestModel interface and put in logic to return test data based upon the search criteria passed in.
Here is what my unit test looks like. First I initialize a unity container with test data:
/// <summary>
/// Internal chip service locator.
/// </summary>
private IServiceLocator serviceLocator;

/// <summary>
/// Unity container.
/// </summary>
private IUnityContainer unityContainer;

/// <summary>
/// Initialize tests.
/// </summary>
[TestInitialize]
public void InitializeTests()
{
    this.unityContainer = new UnityContainer()
        .RegisterType<IController, HomeController>()
        .RegisterType<ISearchAction, SearchAction>();

    this.unityContainer.Configure<InjectedMembers>()
        .ConfigureInjectionFor<SearchAction>(new InjectionConstructor(new object[] { new TestTaxonomyModel(), new TestRequestModel() }))
        .ConfigureInjectionFor<HomeController>(new InjectionConstructor(this.unityContainer.Resolve<ISearchAction>()));

    this.unityContainer.Resolve<IController>();

    this.serviceLocator = new MyServiceLocator(this.unityContainer);
}
/// <summary>
/// Search test method.
/// </summary>
[TestMethod]
public void SearchTest()
{
    var homeController = new HomeController(this.serviceLocator.GetInstance<ISearchAction>());

    FormCollection form = new FormCollection();

    form["ProgramId"] = "3";
    form["ActivityId"] = "8";
    form["CountryId"] = "33";
    form["Filter"] = "3";
    form["Region"] = "Foo";

    homeController.Search(form);
}
There you have it: as you can see above, we call the Search method on the controller, passing it the form we created, and the code will run against our test models.

Regards,
-c
