Three pitfalls to avoid when writing a response filter by Matt Wrock

I was looking at a response filter that someone else had written yesterday and I noticed a few things it was doing that you ideally want to avoid in a response filter. This is a particularly fresh topic for me since I am nearing the end of V1 development on a response filter that will automatically find a response's css, merge it, find its background images, sprite the ones it can, and then create new, minified css that uses those sprites. I'll be blogging much more on that next month.

Writing a good filter that will work with any site and perform well is not particularly easy. If your filter is limited to a single small site, this advice may be considered to lie in the category of premature optimization. But real quick...before I elaborate on these pitfalls...

What is a response filter?

A response filter is simply a class that derives from System.IO.Stream. This class is attached to an HttpResponse's Filter property like so:

Response.Filter = new MyFilter(HttpContext.Current.Response.Filter, 
    HttpContext.Current.Response.ContentEncoding);

As the underlying response outputs to its OutputStream, this output is sent to the filter which has the opportunity to examine and manipulate the response before it gets to the browser. The filter does this by overriding Stream's Write method:

void Write(byte[] buffer, int offset, int count);

When the filter is ready to send its transformed response to the browser or just forward the buffer on unchanged, it then calls the underlying stream's write method. So your filter might have code like this:

private readonly Encoding encoding;

public ResponseFilter(Stream baseStream, Encoding encoding)
{
    this.encoding = encoding;
    BaseStream = baseStream;
}

protected Stream BaseStream { get; private set; }

public override void Write(byte[] buffer, int offset, int count)
{
    var header = encoding.GetBytes("I am wrapping");
    var footer = encoding.GetBytes("your response");
    BaseStream.Write(header, 0, header.Length);
    BaseStream.Write(buffer, offset, count);
    BaseStream.Write(footer, 0, footer.Length);
}

This is a common implementation used for adding compression to a site or ensuring that a site's content is always wrapped in a common header and footer.
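
As a concrete illustration of the compression case, the framework's GZipStream is itself a stream that wraps another stream, so a module can chain it onto the response directly. This is a minimal sketch; checking that the client's Accept-Encoding header actually includes gzip is omitted:

private void context_BeginRequest(object sender, EventArgs e)
{
    // GZipStream wraps the existing filter chain and compresses everything
    // written through it. Requires System.IO.Compression and System.Web.
    var response = HttpContext.Current.Response;
    response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
    // Tell the browser the body is compressed.
    response.AppendHeader("Content-Encoding", "gzip");
}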

So with that background, here are some things to try and avoid in a solid filter: 

Assuming UTF-8

This is easy to overlook and honestly it will work most of the time, but if you think that your filter will ever be dropped on a Japanese website, or a website that is intended to be localized to a double byte unicode locale, you might be disappointed. Very disappointed. Avoid doing something like this:

BaseStream.Write(encoding.GetBytes("I am wrapping"), 0, 
    "I am wrapping".Length);

In a Japanese locale, the underlying encoding may be a double byte unicode encoding, so the byte array returned by GetBytes will be twice the size of "I am wrapping".Length, which implicitly assumes a single byte encoding like UTF-8. So the users see just half the stream. But that's ok, the first half was way better.
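
The safer pattern is to let the response's actual encoding produce the bytes and then trust the byte array's own length. A minimal sketch, where encoding is the ContentEncoding handed to the filter's constructor:

// The encoding decides how many bytes the text needs;
// never derive a byte count from a character count.
var headerBytes = encoding.GetBytes("I am wrapping");
BaseStream.Write(headerBytes, 0, headerBytes.Length);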

Copying the buffer to a string

You might be tempted to do something like this:

public override void Write(byte[] buffer, int offset, int count)
{
    var output = encoding.GetString(buffer, offset, count);
    var newOut = encoding.GetBytes("header" + output + "footer");
    BaseStream.Write(newOut, 0, newOut.Length);
}

You have now managed to double the memory footprint of the original response by copying it to a new variable. This can be a sensitive issue with filters since they often process almost ALL of the output in a site. Unfortunately, if you need to do a lot of text searching and replacing on the original byte array and you want to be efficient, this can be difficult and tedious code to write, read and test. I intend to devote a future post to this topic exclusively.
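
For simple cases you can scan the byte array directly and write out slices of it instead of materializing a string. Here is a rough sketch of the idea; the token and injected marker are made up for illustration, and it ignores the real-world wrinkle that a token can straddle two Write calls:

// Naive byte-level search; returns the index of token within the slice, or -1.
private static int IndexOf(byte[] buffer, int offset, int count, byte[] token)
{
    for (int i = offset; i <= offset + count - token.Length; i++)
    {
        int j = 0;
        while (j < token.Length && buffer[i + j] == token[j]) j++;
        if (j == token.Length) return i;
    }
    return -1;
}

public override void Write(byte[] buffer, int offset, int count)
{
    var token = encoding.GetBytes("</body>");
    int index = IndexOf(buffer, offset, count, token);
    if (index < 0)
    {
        BaseStream.Write(buffer, offset, count); // nothing to inject; pass through
        return;
    }
    var injected = encoding.GetBytes("<!-- injected -->");
    BaseStream.Write(buffer, offset, index - offset);          // bytes before the token
    BaseStream.Write(injected, 0, injected.Length);            // the new content
    BaseStream.Write(buffer, index, (offset + count) - index); // the token and the rest
}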

Ignoring the offset and count parameters

You might think that using the offset and count parameters in your Write override is not necessary. After all, you are confident that your transformations can go to the browser as is because you don't have any code that would need to do further processing on the buffer. Well, maybe you don't, but someone else might. You may have no control over the fact that someday another HttpModule will be added to the site that registers another filter. Response filtering fully supports chaining several filters together. Someone else's module might have the code mentioned above in its own class:

Response.Filter = new MyFilter(HttpContext.Current.Response.Filter, 
    HttpContext.Current.Response.ContentEncoding);

So if this is called after your own filter was added to the response, then YOU are HttpContext.Current.Response.Filter. That new filter might do something like:

public override void Write(byte[] buffer, int offset, int count)
{
    // Returns the start and end indexes of the opening head tag within the slice.
    int[] headBoundaryIndexes = FindOpeningHead(buffer, offset, count);
    BaseStream.Write(buffer, offset, headBoundaryIndexes[0] - offset);
    BaseStream.Write(anEvenBetterHead, 0, anEvenBetterHead.Length);
    BaseStream.Write(buffer, headBoundaryIndexes[1], (offset + count) - headBoundaryIndexes[1]);
}

So if your filter is this filter's BaseStream and your Write looks like this:

public override void Write(byte[] buffer, int offset, int count)
{
    var output = Encoding.UTF8.GetString(buffer);
    var newOut = output.Replace("super", "super duper");
    BaseStream.Write(Encoding.UTF8.GetBytes(newOut), 0, newOut.Length);
}

Ouch. Your users are probably looking at something other than what you intended. The upstream filter was trying to replace the head, but now there are three. After several years in the industry and meticulous experimentation, I have found that 1 is the perfect number of heads in a web page.

Oh and look, this code managed to violate all three admonitions in one blow.
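
For contrast, a version of that last override that respects the encoding and the offset/count contract might look like the following. It still pays the string-copy tax from the second pitfall, which may be an acceptable trade for code this simple:

public override void Write(byte[] buffer, int offset, int count)
{
    // Decode only the slice we were handed, using the response's encoding.
    var output = encoding.GetString(buffer, offset, count);
    // Write the actual byte length of the transformed text, not its character count.
    var newOut = encoding.GetBytes(output.Replace("super", "super duper"));
    BaseStream.Write(newOut, 0, newOut.Length);
}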

Nuget and Powershell WebAdministration Module Issues by Matt Wrock

I’ve been working on getting an authentication library packaged using nuget. The library uses an X509 certificate to encrypt and decrypt auth ticket data stored in a cookie. So to make it easy to get the library up and running, I wanted to create an install.ps1 script that would install the X509 cert in the correct location of the cert store. In order for the library to access the certificate, the account under which the hosting web application runs needs to have read permissions on the cert. I needed to figure out which account that is and then use winhttpcertcfg.exe to grant that account the appropriate permissions.

I had assumed that this would simply involve loading the WebAdministration powershell module, using its cmdlets to query which website has a physical path matching that of the project referencing my package, and then finding its application pool’s process model to determine which account the application runs under.

When I began testing this script from within the nuget package manager console, I started getting an unexpected error on the first call into any cmdlet of the WebAdministration module:

Get-Website : Retrieving the COM class factory for component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed d
ue to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At line:1 char:12
+ GET-WEBSITE <<<<
    + CategoryInfo          : NotSpecified: (:) [Get-Website], COMException
    + FullyQualifiedErrorId : System.Runtime.InteropServices.COMException,Microsoft.IIs.PowerShell.Provider.GetWebsite
   Command

After spending some time researching this, I discovered that nuget uses the 32 bit powershell.exe shell and that the module looks for COM classes that are not registered in the wow64 registry on a 64 bit system. This proved to be very frustrating and I began to wonder whether discovering the IIS user account was even possible from my nuget install.

After more research, I discovered that I could get to what I needed using the .net Microsoft.Web.Administration assembly. While probing this assembly is not quite as friendly and terse as the WebAdministration module, it met my needs perfectly. Here is the powershell script that determines the application pool identity:

param($installPath, $toolsPath, $package, $project)

function GetAppPoolAccount([string] $webDirectory){
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Administration")
    $iis = new-object Microsoft.Web.Administration.ServerManager
    $account = "NetworkService"
    $site = $iis.sites | where-object {$_.state -eq "Started"} |
        foreach-object {$_.applications} | where-object {
            $_.virtualdirectories -contains (
                foreach-object {$_.virtualdirectories} | where-object {
                    $_.physicalpath -eq $webDirectory
                }
            )
        }
    if($site)
    {
        $poolName = $site.applicationPoolName
        $pool = $iis.applicationpools | where-object {$_.name -eq $site.ApplicationPoolName}
        if($pool.processModel.identityType -eq "SpecificUser"){
            $account = $pool.processModel.userName
        }
        elseif($pool.processModel.identityType -eq "ApplicationPoolIdentity"){
            $account = "IIS APPPOOL\$poolName"
        }
        else{
            $account = $pool.processModel.identityType
        }
    }
    return $account
}

$account = GetAppPoolAccount -webDirectory (Get-Item $project.FullName).Directory
. $toolsPath\InstallAuthCerts $toolsPath "INT" $account

Essentially the Microsoft.Web.Administration assembly exposes the IIS applicationHost.config data in the same format that it exists in xml. My script loads the assembly, instantiates the ServerManager class and iterates its members to get the information I need. I then call into my InstallAuthCerts script with the path of the cert, the environment, and the account name which will install the cert and grant the appropriate permissions to $account.
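
Since Microsoft.Web.Administration is an ordinary .NET assembly, the same probing can also be written in C#. Here is a rough equivalent of what the script above does, with error handling omitted:

using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

public static string GetAppPoolAccount(string webDirectory)
{
    using (var iis = new ServerManager())
    {
        foreach (Site site in iis.Sites)
        {
            foreach (Application app in site.Applications)
            {
                foreach (VirtualDirectory vdir in app.VirtualDirectories)
                {
                    if (!vdir.PhysicalPath.Equals(webDirectory, StringComparison.OrdinalIgnoreCase))
                        continue;

                    ApplicationPool pool = iis.ApplicationPools[app.ApplicationPoolName];
                    switch (pool.ProcessModel.IdentityType)
                    {
                        case ProcessModelIdentityType.SpecificUser:
                            return pool.ProcessModel.UserName;
                        case ProcessModelIdentityType.ApplicationPoolIdentity:
                            return @"IIS APPPOOL\" + pool.Name;
                        default:
                            return pool.ProcessModel.IdentityType.ToString();
                    }
                }
            }
        }
    }
    return "NetworkService"; // same fallback the script above uses
}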

Now if someone can point out how to use the WebAdministration module from a nuget console, I’d be grateful for that information. I dug around quite a bit and from what I can tell, it can’t be done. At least not on a 64 bit machine.

Getting TransactionScope to play nice with NHibernate by Matt Wrock

My team is beginning to integrate NHibernate into a fairly large code base that makes frequent use of TransactionScope to ensure that the saving of related entities succeed or fail together. My own NHibernate experience has been primarily on green field projects and on existing projects where the NHibernate work was well isolated from existing plain ADO calls. Furthermore, my use of transactions has typically been from within stored procedures and I have never used any of the transactional functionality exposed by the ADO APIs. So this strange thing called TransactionScope was an entirely new concept for me to understand.

Now before I elaborate on the drama and adventure that accompanied this learning experience and some of the failures that ensued, I’ll quickly jump to the moral of this story for those of you who simply want to know the right way to get NHibernate transactions to participate in a transaction defined by TransactionScope. Simply stated: it’s all in the nesting. Instantiate a TransactionScope on the outside and begin and commit/rollback an NHibernate transaction on the inside.

using(var scope = new TransactionScope())
{
    SaveEntitiesNotManagedByNH();
    using (var transaction = session.BeginTransaction())
    {
        session.SaveOrUpdate(entity);
        transaction.Commit();
    }
    scope.Complete();
}

What is happening here? Instantiating a new TransactionScope creates an “ambient” transaction. What the heck is an ambient transaction? An ambient transaction is part of the implicit programming model provided by the System.Transactions namespace. The msdn documentation advises developers to use this model instead of creating their own explicit transactions. The ambient transaction is held in thread local storage, and therefore all ADO operations on that thread will participate in it.

There are some optional constructor arguments that can be used when creating a new TransactionScope that control whether to create a brand new ambient transaction or to participate in one that already exists. For example, in the above block, it is possible that SaveEntitiesNotManagedByNH calls into methods that also have using blocks around a new TransactionScope. If the new TransactionScopes are created with the default constructor, those scopes will participate in the same transaction as the one created by the outermost TransactionScope, as sketched below. See the MSDN documentation for details on creating new TransactionScopes.
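
For example, an inner method can open its own scope with the default option and it will simply join the caller's ambient transaction. A minimal sketch; ExecuteAdoCommand is a made-up placeholder:

void SaveEntitiesNotManagedByNH()
{
    // TransactionScopeOption.Required (the default) joins the existing
    // ambient transaction if there is one, otherwise it starts a new one.
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        ExecuteAdoCommand(); // hypothetical plain ADO.NET call
        scope.Complete();    // votes yes; the outermost scope still decides the commit
    }
}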

From NHibernate 2.1 onward, NHibernate transactions will participate in an ambient transaction if one exists. So in the code block above, all NHibernate persistence logic is part of the same transaction as any non NHibernate persistence code since it all occurs within the same TransactionScope.

When scope.Complete() is called, it simply sets a bit on the TransactionScope claiming that everything within the scope succeeded. When the TransactionScope is disposed at the end of the using block (or the outermost block if there are nested scopes), the transaction is committed as long as Complete was called. If Complete was never called, the transaction is rolled back.
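
Put another way, rollback is simply the absence of Complete. A sketch; validationPassed is a made-up flag:

using (var scope = new TransactionScope())
{
    SaveEntitiesNotManagedByNH();
    if (!validationPassed)
        return;       // Complete() never runs, so Dispose() rolls everything back
    scope.Complete(); // only sets the "succeeded" bit; the commit happens in Dispose()
}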

TransactionScope/NHibernate Pitfalls

So now that we have looked at the right way to manage the relationship between the TransactionScope and the NHibernate transaction, let’s look at some other techniques that might seem like a good idea but, I assure you, are bad.

Following a Single Transaction per Request Unit of Work pattern

Our original NHibernate implementation followed a Unit of Work pattern. This is a common and very nice pattern that typically involves opening an NHibernate session upon the first call into NHibernate and then closing the session in the EndRequest method of global.asax or a custom HttpModule. Our infrastructure took this one step further and opened a transaction after creating the session and committed or rolled it back in EndRequest. For now let’s kindly suspend our disbelief in bad transaction management and pretend that this is a good idea because it supports a simple model where the developer can assume that all activity in a single request will succeed or fail together.

Following this technique, it is likely that an NHibernate explicit transaction will be created before an Ambient transaction exists.

session.GetEntityById(id);

using(var scope = new TransactionScope())
{
    SaveEntitiesNotManagedByNH();

    session.SaveOrUpdate(entity);

    scope.Complete();
}

Here, NHibernate is used to retrieve an entity, which, in our framework, causes an NHibernate transaction to begin before any ambient transaction exists. When the TransactionScope is then instantiated, it looks for an existing ambient transaction, finds none, and creates a completely new and separate transaction. Therefore the non NHibernate activity is in a completely separate transaction from the NHibernate activity, and operations in these two contexts have no visibility of uncommitted data in the other.

To remedy this, we created a TransactionScope in BeginRequest and stored it in HttpContext. In EndRequest, we would retrieve that TransactionScope, call Complete and dispose of it. This way we were guaranteed that the ambient transaction existed before BeginTransaction was called on the NHibernate transaction, and we could be assured that the NHibernate operations would enlist in that transaction.

This would fail randomly because in ASP.NET, there is no guarantee that BeginRequest and EndRequest will occur on the same thread. A TransactionScope will throw an exception if you try to dispose of it on a different thread than the one where it was created. So in the occasional event that EndRequest executed on a different thread than BeginRequest, this exception was thrown.

I then tried using PreRequestHandlerExecute and PostRequestHandlerExecute instead of BeginRequest and EndRequest, since those always occur on the same thread. This appeared to be a working solution and it held up in our dev environments. However, when we moved it to an integration environment, we began seeing database timeout errors. Reviewing the active connections in the database, spids were blocking, and the spid at the top of the blocking chain was holding an open transaction. What was particularly odd was that this spid was locking a resource that we thought was being inserted outside of a TransactionScope or any explicit transaction.

It ended up that this was happening because there are certain circumstances where PostRequestHandlerExecute never gets called, for example a Response.Redirect that ends the response, which bypasses the remaining pipeline events. Under such circumstances, the TransactionScope is not disposed. Since the TransactionScope lives in thread local storage, it remains tied to the request’s thread even after the request ends. The thread returns to the worker pool and is later pulled to do other work. In our case, that work was a background task that typically never used explicit transactions and simply inserted a row into a commonly queried table. After the background work finished, since nothing ever called Commit, the table remained locked and simple queries against it by other requests timed out. To summarize this unfortunate series of events:

  1. Web request begins and creates a new TransactionScope.
  2. The request returns a 302 redirect, PostRequestHandlerExecute does not execute, and therefore the TransactionScope is not disposed and remains in thread local storage.
  3. Thread returns to the worker pool.
  4. Background thread is launched from the worker pool and thread with undisposed TransactionScope is used.
  5. Thread inserts row into Table A.
  6. Thread exits and still does not call dispose and the transaction remains open.
  7. A new web request queries Table A and waits for the open transaction to release its exclusive lock.
  8. The new request times out since the lock is never released as the thread that owns the locking transaction is sitting idle in the thread pool.

This is a nasty bug that thankfully never reached production. These kinds of bugs are difficult to troubleshoot, occur in nondeterministic patterns and affect not only the user who came in on that thread but can bring down the entire application.

Reflecting on this scenario, it became apparent that:

  1. TransactionScope should always be limited to the confines of a using or try/finally block (see the sketch after this list). The risk that Dispose may never be called is too great.
  2. Keeping a transaction open for the entirety of a web request is extremely dangerous. It is unwise to assume that something, perhaps completely non database related, will never trigger a long running request and therefore lock up database resources, which again could bring down an entire site. One of the cardinal rules of ACID transactions is to keep them as short as possible.
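
A minimal sketch of the first rule, with a hypothetical repository call:

public void SaveOrder(Order order)
{
    using (var scope = new TransactionScope())
    {
        orderRepository.Save(order); // hypothetical data access call
        scope.Complete();
    } // Dispose runs here, on this thread, no matter how we leave the block
}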

So I decided to eliminate the one transaction per request technique. We continue to use Unit of Work for Sessions, but developers will be responsible for defining their own transactions according to the needs of the request.

Failing to use explicit NHibernate Transactions within the TransactionScope

Having resolved to abandon a single master TransactionScope and continue to use the TransactionScopes currently sprinkled throughout the code, it seemed rational that we could simply invoke our NHibernate persistence calls within a TransactionScope and all would be well. Something like the following seemed innocent enough:

using(var scope = new TransactionScope())
{
    SaveEntitiesNotManagedByNH();
    session.SaveOrUpdate(entity);
    scope.Complete();
}

Well, this quickly started throwing an error I had never seen before when disposing the TransactionScope. The message was something like “The state of this transaction is in doubt”, with methods like PhaseOneCommit in the stack trace and an inner timeout exception. I’ll admit that I have not dug into the NHibernate source to see exactly what is happening here, but it sounds like a distributed transaction gone bad. My guess is that the NHibernate activity is treated like a distributed transaction even though it is operating on the same database as the non NHibernate code. When NHibernate saves the data, that operation enlists in the TransactionScope, but NHibernate has no awareness that it is involved, so it never commits its end of the distributed transaction and the TransactionScope times out.

The Final Solution

So the final solution involved a nesting pattern like the one shown at the beginning of this post. However, to make it easier to implement, I created a wrapper around both the TransactionScope and the NHibernate transaction:

public interface IOrmOuterTransactionScope : IDisposable
{
    void Complete();
}

public class NhOuterTransactionScope : IOrmOuterTransactionScope
{
    private readonly ISession session;
    private readonly TransactionScope scope;
    private readonly ITransaction transaction;

    public NhOuterTransactionScope(ISession session)
        : this(session, TransactionScopeOption.Required,
               new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead })
    {
    }

    public NhOuterTransactionScope(ISession session,
                                   TransactionScopeOption transactionScopeOption,
                                   TransactionOptions transactionOptions)
    {
        scope = new TransactionScope(transactionScopeOption, transactionOptions);
        this.session = session;
        transaction = session.BeginTransaction();
    }

    public void Complete()
    {
        session.Flush();
        transaction.Commit();
        scope.Complete();
    }

    public void Dispose()
    {
        try
        {
            transaction.Dispose();
        }
        finally
        {
            scope.Dispose();
        }
    }
}

Using an NhOuterTransactionScope avoids the need for developers to create and commit a separate NHibernate transaction and, more importantly, it enforces the appropriate nesting order.
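
Usage then reduces to a single nested block. A sketch, where session is the unit of work’s current ISession:

using (IOrmOuterTransactionScope scope = new NhOuterTransactionScope(session))
{
    SaveEntitiesNotManagedByNH(); // plain ADO work enlists in the ambient transaction
    session.SaveOrUpdate(entity); // NHibernate work rides the inner ITransaction
    scope.Complete();             // flushes, commits NHibernate, then completes the scope
}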

The Perfect Build Part 3: Continuous Integration with CruiseControl.net and NANT for Visual Studio Projects by Matt Wrock

A couple months after migrating to subversion, we took another significant step to improve our build process by setting up a continuous integration server using CruiseControl.net and NANT. As explained in the previous post in this blog series, our new SVN repository structure supported a clear separation of development, staging and production environments. Now we needed something to assist in making sure that commits and changes to the repository resulted in actual builds being generated on the appropriate server.

Prior to this time, this was always a manual process where files were often dragged and dropped via windows explorer for simple deployments; for more complex deployments, services were stopped, sql scripts were executed and multiple applications were deployed in precisely timed order. As you can imagine, this was error prone, time consuming, stressful and sometimes not easily repeatable or easy to roll back.

Enter Cruise Control and NANT. Now, code promotions and deployments are nearly effortless and self documenting. One of my goals was to ensure that our CI (continuous integration) system made code promotions as simple as possible. I knew that the more difficult or cumbersome the system was, the less likely it was that my team would use it correctly or use it at all.

This post is intended to offer a guided tour of our CI internals. I will go into some detail on our ccnet configuration and NANT scripting, but this is not intended to be a tutorial on the setup of cruise control and NANT scripting. It illustrates how my team has chosen to use these technologies, but every environment is different and I encourage you to dig into the documentation and other blogs to discover the full range of possibilities these tools provide.

In short, here is the flow of our CI system:

  1. All Visual Studio projects and solutions involved in an application are registered with CruiseControl via config file entries in ccnet.config.
  2. As code is committed to the trunk, ccnet automatically builds the project that contains the committed code. If there are errors, the dev team is alerted.
  3. When a project is successfully built, its owning solution(s) will be rebuilt and the build files will be deployed to a development server.
  4. When an application release is promoted to staging and the trunk is copied to the staging branch for that application, CruiseControl will build the solution in the SVN staging branch and deploy the build files to the staging servers.
  5. When a release is ready for production launch, an assigned team member will go to the CruiseControl dash board and “Force” the production build. This will rebuild the solution from the staging branch in SVN and deploy the build files to the production servers. It will then copy the staging branch to a production tag for that release.

Registering Projects and Solutions with CruiseControl

In order for Cruise Control to be aware of our Visual Studio projects and solutions and therefore build and deploy them, they must be registered in Cruise Control’s configuration file ccnet.config.

Originally we added all configuration settings (or <project>s) to this file directly. However, as we added multiple ccnet projects, this config file became quite large and unmanageable. To bring some order to this, we use XML entities to define external files that contain configuration for a particular application or team. This external file is treated like an “include” file. So if you are adding projects for a new application or team, create a new entity in the ccnet.config file. This is a two step process:

1. Define the entity within the <!DOCTYPE> of ccnet.config. Here is an example:

<!DOCTYPE cruisecontrol [
    <!ENTITY app1 SYSTEM "file:app1.xml">
    <!ENTITY app2 SYSTEM "file:app2.xml">
    <!ENTITY app3 SYSTEM "file:app3.xml">
]>

The entity defines a name and then points to a file which should be located in the same directory as ccnet.config.

2. Reference the entity name in ccnet.config where you want the content of the entity file to be included. All of the entities are referenced immediately after the opening <cruisecontrol> element:

    &app1;
    &app2;
    &app3;

After setting up the entity, you begin to add projects to the entity file. A “project” is an XML element in the ccnet.config that defines a branch in SVN that CruiseControl should watch for changes and then take action if changes are detected. There are several kinds of actions that CruiseControl can take. Typically, these actions include executing an NANT script, possibly running unit tests and sending an email.

We create a separate project element for each of the following:

  1. Every project in the VS solution.
  2. The VS Solution on trunk to be deployed to dev servers
  3. The VS Solution on the staging branch to be deployed to staging servers
  4. The VS Solution on staging to be deployed to production servers

CCNet Project Naming Conventions

We will dive into the details of a Cruise Control project in a moment, but let’s first cover project naming conventions. Every project is given a name and this is the same name that is displayed in the Cruise Control dashboard and desktop tray. Projects are listed in both of those areas in alphabetical order. The convention we use is as follows:

<app or team>_<TRUNK | STAGE | PROD>_<PROJ | SOLN | DPLY>_<project or solution name>

An Example name is: APP1_STAGE_SOLN_admin. This is part of the App1 team, from the SVN staging branch, it’s a solution and it’s the admin solution.

This makes looking at the cruise control dashboard much more user friendly especially when you have lots of projects like we do. Furthermore, cruise control allows you to group projects into logical “categories.” As you will see when we look at a project config later, cruise control allows you to give each project configuration a category label. You can then filter your projects in the Cruise Control dashboard easily using these categories.

Anatomy of <project>

Here is a typical Project configuration in ccnet.config:

<project name="App1_TRUNK_PROJ_AdminPanel" queue="trunk” >
    <workingDirectory>D:\BuildSys\CCNetProjects\Libraries\AdminPanel.Core</workingDirectory>
    <artifactDirectory>D:\BuildSys\CCNetProjects\Libraries\AdminPanel.Core\artifacts</artifactDirectory>
    <category>App1</category>
    <webURL>http://devweb01.com:8082</webURL>
    <modificationDelaySeconds>15</modificationDelaySeconds>
    <triggers>
        <intervalTrigger name="continuous" seconds="30" buildCondition="IfModificationExists"/>
    </triggers>
    <sourcecontrol type="svn">
        <trunkUrl>http://svnsrv01.com/repos/search/trunk/library/AdminPanel</trunkUrl>
        <workingDirectory>..\..\trunk\library\AdminPanel</workingDirectory>
        <executable>C:\Program Files (x86)\CollabNet Subversion Server\svn.exe</executable>
        <autoGetSource>false</autoGetSource>
        <tagOnSuccess>false</tagOnSuccess>
    </sourcecontrol>
    <tasks>
        <nant>
            <executable>D:\BuildSys\Executables\nant-0.86-beta1\bin\nant.exe</executable>
            <buildFile>D:\BuildSys\CCNetProjects\genericlibrary.build</buildFile>
            <buildArgs>-debug- -D:Project=AdminPanel -D:solutionorprojectfilename=AdminPanel.csproj -D:solutionlist=AdminPanel -D:branch=trunk -D:type=library -D:stagebase=</buildArgs>
            <targetList>
                    <target>TriggerSolutionBuild</target>
            </targetList>
        </nant>
    </tasks>
    <cb:include href="ccnetpublisherschange.xml"/>
</project>

Consult the CruiseControl documentation if you are interested in a detailed explanation of each element. This config tells ccnet to check SVN at http://svnsrv01.com/repos/search/trunk/library/AdminPanel every 30 seconds and to execute the genericlibrary.build NANT script if it detects any change in the repository.

Almost all of our <project> configs look like this. The configs that build VS solutions call a different NANT script and I’ll explain that in detail later. One element that should be removed from the config for production deployments is the <triggers> tag. Production project configs should just have an empty <triggers /> element. This is because, at least in our environment, you never want an automated process to kick off a deployment to live servers. We want to launch these builds manually. I realize that the word “manual” has certain negative connotations, but in this case it means clicking on the cruise control “Force” button. I don’t think that’s too much to ask.

Building Visual Studio Projects and Solutions

The way we structured the repository monitoring strategy of our CI system was to have separate cruise control projects that watch for changes in a single Visual Studio project. If the project build is successful, then the project build script (genericlibrary.build) creates a dummy text file and adds it to a special SVN branch for each VS solution that depends on it. In our environment, a Visual Studio solution is essentially an application. We have another set of cruise control projects that monitor the above SVN branches and build the entire app (VS solution) when they see a modification to the branch which is triggered by the addition of the dummy text files.

A closer look at NANT build script genericlibrary.build

NANT is a powerful build scripting framework based on ANT. It provides functionality for source control and file system operations along with many other features. It has all of the flow control capabilities supported by most programming languages along with the ability to declare, inspect and manipulate variables. In our environment, the NANT script is the work horse of the build process.

Our genericlibrary.build script handles all commits on individual VS Projects. It simply checks out all source code in the project, builds it and then creates the dummy text file discussed above which triggers a solution build.

Here is our genericlibrary.build script:
<project>
        <description>${Project} build</description>
    <!-- debug means tons of output so watch it -->
        <property name="debug" value="true" overwrite="false" />
    <!-- which framework -->
    <property name="nant.settings.currentframework" value="net-3.5" />
    <!-- path to project or solution file -->
    <property name="projectpath" value="${branch}\${stagebase}\${type}\${Project}\${solutionorprojectfilename}" />

    <target name="update" >
        <exec program="svn.exe" workingdir="${branch}" commandline="update" />
    </target>

    <target name="compile" depends="update">
         <!-- REBUILD -->
            <loadtasks assembly="..\Executables\nantcontrib-0.85\bin\NAnt.Contrib.Tasks.dll" />
         <msbuild project="${projectpath}" target="Rebuild" verbosity="Minimal">
               <property name="Configuration" value="Debug" />   
            <property name="Platform" value="AnyCPU" />  
               <arg value="/nologo"/>                          
         </msbuild>
    </target>

    <target name="TriggerSolutionBuild" depends="compile">
        <foreach item="String" in="${solutionlist}" delim=" ," property="solution">
            <exec program="..\Executables\MakeUniqueFile.bat" commandline="uniquefile.txt" workingdir="libraries/${Project}"  />
            <exec program="svn.exe" workingdir="${branch}" failonerror="false">
                <arg line="delete -m "/>
                <arg value="trigger solution build"/>
                <arg line="http://svnsrv01. com/repos/search/ccnet_solutions/${branch}/${solution}/uniquefile.txt"/>
            </exec>
            <exec program="svn.exe" workingdir="${branch}" >
                <arg line="import -m "/>
                <arg value="trigger solution build"/>
                <arg line="${project::get-base-directory()}\libraries\${Project}\uniquefile.txt http://svnsrv01.com/repos/search/ccnet_solutions/${branch}/${solution}/uniquefile.txt"/>
            </exec>
        </foreach>
    </target>
</project>

NANT scripts contain one to many <target> nodes. Each of these represents a distinct unit of work. They may also depend on other target nodes. If a target node depends on another, the target specified in the depends attribute is executed first. When the ccnet.config <project> executes a NANT script, the config can specify a TargetList of targets to be called in specific order.

The solution NANT script

Here is the script that builds the solution.

    <project>
    <description>${foldername} build</description>
    <property name="debug" value="true" overwrite="false" />
    <property name="nant.settings.currentframework" value="net-3.5" />
    <property name="publish.dir" value="Apps\${Project}\artifacts\buildarchive\${branch}\${CCNetLabel}" />
    <property name="publish.dir.current" value="Apps\${Project}\artifacts\buildarchive\${branch}\${foldername}" />
    <property name="projectfile" value="${branch}\${stagebase}\${solutionorprojectfilename}" />
    <property name="executablepath" value="${branch}\${stagebase}\${type}\${foldername}${execsuffix}" />
    <property name="projectpath" value="${branch}\${stagebase}\${type}\${foldername}" />
    <property name="deploy_to" value="" unless="${property::exists('deploy_to')}"  />
    <if test="${type=='web'}">
        <property name="stage_config_file" value="${branch}\${stagebase}\${type}\${foldername}\Web${branch}.config" />
    </if>
    <if test="${type=='services'}">
        <property name="stage_config_file" value="${branch}\${stagebase}\${type}\${foldername}\App${branch}.config" />
    </if>
    <target name="echo" >
        <echo message="${executablepath} solution at ${projectfile}"  />
    </target>

    <target name="update" depends="echo">
        <if test="${string::contains(string::to-lower(branch), 'stage')}">
            <exec program="svn.exe" workingdir="D:\BuildSys\CCNetProjects\${branch}\${stagebase}" commandline="update" />
        </if>
        <if test="${string::contains(string::to-lower(branch), 'trunk')}">
            <exec program="svn.exe" workingdir="D:\BuildSys\CCNetProjects\${branch}" commandline="update" />
        </if>
    </target>
    <target name="compile" depends="update">
         <!-- REBUILD -->
            <loadtasks assembly="..\Executables\nantcontrib-0.85\bin\NAnt.Contrib.Tasks.dll" />
         <msbuild project="${projectfile}" target="Rebuild" verbosity="Minimal">
               <property name="Configuration" value="Debug" />   
            <property name="Platform" value="Any CPU" />  
               <arg value="/nologo"/>                          
               <arg value="/consoleloggerparameters:ErrorsOnly"/>
         </msbuild>
    </target>

    <target name="copy" depends="compile">
        <delete failonerror="false">
            <fileset>
                <include name="${publish.dir}\*.*"/>
                <include name="${publish.dir.current}\*.*" />
            </fileset>
        </delete>
        <mkdir dir="${publish.dir.current}" failonerror="false" />
        <copy todir="${publish.dir.current}">
            <fileset basedir="${executablepath}">
                <exclude name="*.csproj"/>
                <exclude name="*.csproj.user"/>
                <exclude name="*.sln"/>
                <exclude name="*.cs"/>
                <exclude name="*.resx"/>
                <exclude name="user*.config"/>
                <exclude name="Webtrunk.config"/>
                <exclude name="Webstage.config"/>
                <exclude name="obj"/>
                <include name="*.asax"/>
                <include name="*.aspx"/>
                <include name="*.asp"/>
                <include name="*.cab"/>
                <include name="*.config"/>
                <include name="*.css"/>
                <include name="*.dll"/>
                <include name="*.exe"/>
                <include name="*.gif"/>
                <include name="*.htm"/>
                <include name="*.html"/>
                <include name="*.jpg"/>
                <include name="*.js"/>
                <include name="*.pdb"/>
                <include name="*.php"/>
                <include name="*.swf"/>
                <include name="*.txt"/>
                <include name="**\**\*.*"/>
                <include name="*.xml"/>
                <include name="*.axd"/>
            </fileset>
        </copy>
    </target>
    <target name="publish" depends="copy">
        <if test="${not property::exists('CCNetLabel')}">
            <fail message="CCNetLabel property not set, so can't create labelled distribution files" />
        </if>   
        <mkdir dir="${publish.dir}" failonerror="false" />
        <copy todir="${publish.dir}">
            <fileset basedir="${publish.dir.current}">
                <include name="*.*"/>
                <include name="**\**\*"/>
            </fileset>
        </copy>       
    </target>
    <target name="deploy" depends="publish">
        <foreach item="String" in="${serversuffixlist}" delim=", " property="count">
            <if test="${servicename != ''}">
                <servicecontroller action="Stop" service="${servicename}" machine="${serverpfx}${count}.com" timeout="120000" />
            </if>
            <copy todir="\\${serverpfx}${count}\${destinationpath}" overwrite="true">
                <fileset basedir="${publish.dir}" >
                    <include name="*" />
                    <include name="**\*" />
                    <include name="**\**\*" />
                </fileset>
            </copy>
            <if test="${branch!='prod'}">
                <copy file="${projectpath}\user${branch}.config" tofile="\\${serverpfx}${count}. com\${destinationpath}\user.config" overwrite="true" />
            </if>
            <if test="${string::contains(string::to-lower(deploy_to), 'prod')}">
                <delete file="\\${serverpfx}${count}.com\${destinationpath}\user.config" />
            </if>

            <if test="${not string::contains(string::to-lower(deploy_to), 'prod')}">
                    <echo message="Property deploy_to is not set for production deployement, copying ${stage_config_file} to application config file" />
                <if test="${type=='web'}">
                    <copy file="${projectpath}\Web${branch}.config" tofile="\\${serverpfx}${count}.com\${destinationpath}\Web.config" overwrite="true" failonerror="false" if="${file::exists(stage_config_file)}"/>
                </if>
                <if test="${type=='services'}">
                    <copy file="${projectpath}\App${branch}.config" tofile="\\${serverpfx}${count}.com\${destinationpath}\${project::get-name()}.exe.config" overwrite="true"  failonerror="false" if="${file::exists(stage_config_file)}"/>
                </if>
            </if>
            <if test="${servicename != ''}">
                <echo message="${servicename} successfully deployed to ${serverpfx}${count}.com"  />
                <servicecontroller action="Start" service="${servicename}" machine="${serverpfx}${count}.com" timeout="120000" />
                <sleep minutes="1" />
            </if>
        </foreach>
       
        <if test="${string::contains(string::to-lower(deploy_to), 'prod')}">
            <call target="svnprodtag"/>
        </if>
    </target>
    <target name="svnprodtag">
        <tstamp property="build.date" pattern="yyyyMMdd" verbose="true" />
        <exec program="svn.exe" failonerror="false">
            <arg value="delete" />
            <arg value="http://svnsrv01.com/repos/search/prod/${stagebase}/${build.date}" />
            <arg value="-m &quot;Nant : Delete production branch before creating new one.&quot;" />
        </exec>
        <exec program="svn.exe" >
            <arg value="copy" />
            <arg value="http://svnsrv01.com/repos/search/staging/${stagebase}" />
            <arg value="http://svnsrv01.com/repos/search/prod/${stagebase}/${build.date}" />
            <arg value="-m &quot;Nant : Create production branch with automated deployment.&quot;" />
        </exec>
    </target>
    </project>

This script pulls down the latest source code from SVN, builds it, then on each target server, the appropriate service is stopped, all binary output files and web content files are copied over, and the service is started. Finally, if this is a production deployment, a tag is created on the SVN production branch which serves as a snapshot of all source code at that moment in time.

Another important task which this script handles is making sure that the correct config file is copied to the deployment servers depending on the deployment environment. We will discuss this in more detail a little later.

The build script parameters

Both of our NANT scripts expect a number of input parameters that tell them where the code exists in SVN, where it should be copied to, what service to restart, what solution builds to trigger, etc. These parameters are specified in the <buildArgs> tag inside of the ccnet.config <project> and will differ depending on whether you are creating a ccnet project for a VS project or a VS solution. Here are examples:

VS Project:
<buildArgs>-debug- -D:Project=AdminPanel -D:solutionorprojectfilename=AdminPanel.csproj -D:solutionlist=AdminPanel -D:branch=trunk -D:type=library -D:stagebase=</buildArgs>

VS Solution:
<buildArgs>-debug- -D:serverpfx=devweb -D:execsuffix= -D:type=web -D:destinationpath=inetpub\wwwroot\adminpanel -D:solutionName=AdminPanel -D:solutionorprojectfilename=AdminPanel.sln -D:foldername=AdminPanel -D:servicename="W3SVC" -D:branch=trunk -D:serversuffixlist=01 -D:stagebase= -D:deploy_to=prod</buildArgs>

A VS Project build is much simpler than a Solution build. A project build simply builds the project file and then creates a dummy file that gets committed to SVN for every dependent solution in the solutionlist param, which triggers a solution build. A Solution build builds the entire solution, stops the servicename on every target server, copies the output files to the target servers and then starts the servicename. If the solution build is targeting production, it takes the additional step of creating a build tag in SVN.

The values in the buildArgs element parameters are name-value pairs:

  • -debug- turns debug off. If this were on all the time we would quickly run out of disk space due to output volume.
  • -D:type is the name of the trunk parent folder that contains the project folder of the project in question. It will typically be “library” for basic class library projects or “web” for web applications.
  • -D:Project is the name of the project folder in the SVN trunk as trunk\${type}\${Project}.
  • -D:solutionorprojectfilename is the name of the project file. This would be a .csproj file for projects and a .sln file for solutions. The file is relative to the folder given by the Project param for project builds. In our repository, all solution (.sln) files are located in the root of the trunk branch.
  • -D:solutionlist is a comma separated list of the solutions in which the project in question occurs. The solution name is actually the name of the folder that the solution build watches for changes. The project build will write a text file into this folder to trigger the solution build.
  • -D:branch is the name of the target branch to build, either “trunk” or “stage”.
  • -D:stagebase is the name of the subfolder of the staging SVN branch that contains the application being built and deployed. It must be empty for trunk builds.
  • -D:serverpfx is the first part of the server name, without the numeric suffix, that code should be deployed to (e.g. devweb, web).
  • -D:serversuffixlist is a comma separated list of suffixes for the servers to which deployment should be done. The first segment of the server name is formed from serverpfx and a suffix; for example, web and 01,02,03 will deploy to servers named web01, web02 and web03.
  • -D:destinationpath is the path on the target servers that receives the build files. For example, a service would use Services\<servicefoldername> and a web project might use inetpub\wwwroot\<webfoldername>.
  • -D:execsuffix is the path segment, relative to destinationpath, needed to access the directory on the target servers that contains the build binaries. This is usually empty for web projects but “\bin\debug” for windows service projects.
  • -D:foldername is the name of the solution’s application project folder. This would be the project that hosts the application, typically the web or service project.
  • -D:servicename is W3SVC for a web project and the actual service name for a windows service project. During deployment this service is stopped and started.
  • -D:deploy_to should be equal to “prod” for production deployment projects and otherwise empty.

An important note on app/web.config management

An application usually needs different configuration settings depending on its environment (development, staging, production). Our generic NANT scripts helps us manage this, but we must follow a convention in order for this to work.

  1. In the <appSettings> element of the web/app.config file, we add the attribute file="user.config". If a user.config file exists, any settings in that file will override those in the web/app.config.
  2. A typical app will have up to 3 extra config files: user.config, userstage.config and usertrunk.config. user.config contains settings specific to a developer’s personal machine, userstage.config is specific to the staging server(s) and usertrunk.config is specific to the shared development environment. The settings in the actual web/app.config are the production settings.
  3. The generic NANT scripts dynamically rename userstage.config to user.config on staging promotions and usertrunk.config to user.config on dev promotions. They do not deploy any of the user.configs on production promotions.

This convention ensures that the configuration settings unique to a specific environment can be kept under source control but that only the applicable settings are actually used in the appropriate environment.

What’s Missing?

Two things are embarrassingly deficient here:

  1. Where are the nUnit tests in the NANT scripts? Yes, we have got to incorporate our nUnit tests into the CI system. Individual developers run the tests locally but that is not good enough. NANT has the capability to run nUnit tests and fail a build if the tests do not pass.
  2. If you look at the parameters being passed into the NANT scripts, you will notice that the strings look very similar. We should be following the principle of convention over configuration here to make this much simpler to set up and easier to read. Rather than having several parameters with values of app1, app1.sln, d:/inetpub/wwwroot/app1, etc., we should collapse all of these parameters into a single parameter that takes an application name like app1 and then embed that into the other configuration strings.

Learning More

There is lots of good information out there on continuous integration generally and Cruise Control and NANT specifically. The docs for Cruise Control and NANT are a good starting point.

Cruise Control documentation: http://confluence.public.thoughtworks.org/display/CCNET/Documentation

NANT documentation: http://nant.sourceforge.net/release/latest/help/

While there is room for improvement, this system has worked great for us and has made our lives much easier. I can’t stress enough the importance of being able to deploy code with the press of a button. It took us a long time to make the implementation of this system a priority, but considering the time it frees up for feature development, we should have built it much earlier.

Implementing custom Membership Provider and Role Provider for Authenticating ASP.NET MVC Applications by Matt Wrock

The .NET framework provides a provider model that allows developers to implement common user management and authentication functionality in ASP.NET applications. Out of the box, the framework ships with a few providers that allow you to easily wire up a user management system with very little to zero back end code. For instance, you can use the SqlMembershipProvider and the SqlRoleProvider which will create a database for you with tables for maintaining users and their roles. There are also providers available that use Active Directory as the user and role data store.

This is great in many cases and can save you from writing a lot of tedious plumbing code. However, there are many instances where this may not be appropriate:

  • I personally don’t feel comfortable having third party code create my user schema. Perhaps I’m being overly sensitive here.
  • You don’t use Sql Server or Active Directory. If you are using MySql to manage your user data, neither of the out of the box providers will work for you.
  • You already have your own user data and schema which you want to continue using. You will need to implement your own providers to work with your schema.

I fall into the first and third scenarios listed above. I have multiple applications that all run on top of a common user/role management database. I want to be able to use Forms Authentication to provide my web security and I want all user validation and role checking to leverage the database schema that I already have. Fortunately, creating custom Membership and Role providers proved to be rather easy and this post will walk you through the necessary steps to get up and running.

Defining Users, Roles and Rights

My schema has the following key tables:

  • A users table which defines individual users and their user names, human names, passwords and roles.
  • A role table that defines individual roles. It’s a simple lookup table with role_id and role_name columns.
  • A right table which is also a simple lookup table with right_id and right_name columns.
  • A role_right table which defines many to many relationships between roles and rights. It has role_id and right_id columns. Individual roles contain a collection of one to N rights.

These tables are mapped to classes via nHibernate.
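
For reference, a mapping for the User class might look roughly like the hbm.xml sketch below. The table and column names are assumptions based on the schema described above; the Role and Right mappings would follow the same pattern:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="UserAdmin.DomainEntities" assembly="UserAdmin">
  <class name="User" table="users">
    <id name="UserId" column="user_id">
      <generator class="native" />
    </id>
    <property name="UserName" column="user_name" />
    <property name="FullName" column="full_name" />
    <property name="Password" column="password" />
    <many-to-one name="Role" column="role_id" />
  </class>
</hibernate-mapping>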

Here is the User class:

public class User : IPrincipal
{
  protected User() { }
  public User(int userId, string userName, string fullName, string password)
  {
    UserId = userId;
    UserName = userName;
    FullName = fullName;
    Password = password;
  }
  public virtual int UserId { get; set; }
  public virtual string UserName { get; set; }
  public virtual string FullName { get; set; }
  public virtual string Password { get; set; }
  // Each user has exactly one Role; IsInRole below checks it and its rights.
  public virtual Role Role { get; set; }
  public virtual IIdentity Identity { get; set; }
  public virtual bool IsInRole(string role)
  {
    if (Role.Description.ToLower() == role.ToLower())
      return true;
    foreach (Right right in Role.Rights)
    {
      if (right.Description.ToLower() == role.ToLower())
        return true;
    }
    return false;
  }
}

You will notice that User implements IPrincipal. This is not necessary for User to work with my MembershipProvider implementation, but it gives me the option to use the User class with other frameworks that deal in IPrincipal. For example, I may want to interact with User directly through the Controller base class's User property. All it requires is implementing the Identity property and the bool IsInRole(string roleName) method. When we get to the RoleProvider implementation, you will see that IsInRole is used there as well. You may also notice something peculiar: a "role" here can be either a role or a right. This is a bit of a hack to shim my finer grained, right based schema into the ASP.NET role based framework, but it works for me.
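
To make that role-or-right behavior concrete, here is a minimal sketch with hypothetical names and data showing that IsInRole answers true for both the role's own name and for any of its rights:

var rights = new List<Right> { new Right(1, "CanPublish") };
var user = new User(1, "mattw", "Matt Wrock", "passwordhash");
user.Role = new Role(1, "Editor") { Rights = rights };
user.IsInRole("Editor");      // true: matches the role's description
user.IsInRole("CanPublish");  // true: matches one of the role's rights
user.IsInRole("Admin");       // false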

Here are the Role and Right classes. They are simple data objects:

public class Role
{
  protected Role() { }

  public Role(int roleid, string roleDescription)
  {
    RoleId = roleid;
    Description = roleDescription;
  }

  public virtual int RoleId { get; set; }
  public virtual string Description { get; set; }
  public virtual IList<Right> Rights { get; set; }
}

public class Right
{
  protected Right() { }
  public Right(int rightId, string description)
  {
    RightId = rightId;
    Description = description;
  }

  public virtual int RightId { get; set; }
  public virtual string Description { get; set; }
}

Implementing the providers

So now, with the basic data classes behind us, we can implement the membership and role providers. These implementations must derive from MembershipProvider and RoleProvider. Note that these base classes contain a lot of methods that I have no use for. The provider model was designed to handle all sorts of user related functionality like creating users, changing passwords, etc. I just need basic logon and role checking, so a lot of my methods throw NotImplementedExceptions. However, all methods required to handle authentication and role checking are implemented.

Here is the Membership Provider:

public class AdminMemberProvider : MembershipProvider
{
  public override string ApplicationName
  {
    get { throw new NotImplementedException(); }
    set { throw new NotImplementedException(); }
  }
  public override bool ChangePassword(string username, string oldPassword, string newPassword)
  {
    throw new NotImplementedException();
  }
  public override bool ChangePasswordQuestionAndAnswer(string username, string password, string newPasswordQuestion, string newPasswordAnswer)
  {
    throw new NotImplementedException();
  }
  public override MembershipUser CreateUser(string username, string password, string email, string passwordQuestion, string passwordAnswer, bool isApproved, object providerUserKey, out MembershipCreateStatus status)
  {
    throw new NotImplementedException();
  }      
  public override bool DeleteUser(string username, bool deleteAllRelatedData)
  {
    throw new NotImplementedException();
  }
  public override bool EnablePasswordReset
  {
    get { throw new NotImplementedException(); }
  }
  public override bool EnablePasswordRetrieval
  {
    get { throw new NotImplementedException(); }
  }
  public override MembershipUserCollection FindUsersByEmail(string emailToMatch, int pageIndex, int pageSize, out int totalRecords)
  {
    throw new NotImplementedException();
  }
  public override MembershipUserCollection FindUsersByName(string usernameToMatch, int pageIndex, int pageSize, out int totalRecords)
  {
    throw new NotImplementedException();
  }
  public override MembershipUserCollection GetAllUsers(int pageIndex, int pageSize, out int totalRecords)
  {
    throw new NotImplementedException();
  }
  public override int GetNumberOfUsersOnline()
  {
    throw new NotImplementedException();
  }
  public override string GetPassword(string username, string answer) 
  {
    throw new NotImplementedException();
  }
  public override MembershipUser GetUser(string username, bool userIsOnline)
  {
    throw new NotImplementedException();
  }
  public override MembershipUser GetUser(object providerUserKey, bool userIsOnline)
  {
    throw new NotImplementedException();
  }
  public override string GetUserNameByEmail(string email)
  {
    throw new NotImplementedException();
  }
  public override int MaxInvalidPasswordAttempts
  {
    get { throw new NotImplementedException(); }
  }
  public override int MinRequiredNonAlphanumericCharacters
  {
    get { throw new NotImplementedException(); }
  }
  public override int MinRequiredPasswordLength
  {
    get { throw new NotImplementedException(); }
  }
  public override int PasswordAttemptWindow
  {
    get { throw new NotImplementedException(); }
  }
  public override MembershipPasswordFormat PasswordFormat
  {
    get { throw new NotImplementedException(); }
  }
  public override string PasswordStrengthRegularExpression
  {
    get { throw new NotImplementedException(); }
  }
  public override bool RequiresQuestionAndAnswer
  {
    get { throw new NotImplementedException(); }
  }
  public override bool RequiresUniqueEmail
  {
    get { throw new NotImplementedException(); }
  }
  public override string ResetPassword(string username, string answer)
  {
    throw new NotImplementedException();
  }
  public override bool UnlockUser(string userName)
  {
    throw new NotImplementedException();
  }
  public override void UpdateUser(MembershipUser user)
  {
    throw new NotImplementedException();
  }
  IUserRepository _repository;
  public AdminMemberProvider() : this(null) { }
  public AdminMemberProvider(IUserRepository repository) : base()
  {
    _repository = repository ?? UserRepositoryFactory.GetRepository();
  }
  public User User { get; private set; }
  public UserAdmin.DataEntities.User CreateUser(string fullName, string passWord, string email)
  {
    return null;
  }
  public override bool ValidateUser(string username, string password)
  {
    if (password == null || password.Trim().Length == 0) return false;
    string hash = EncryptPassword(password);
    User user = _repository.GetByUserName(username);
    if (user == null) return false;
    if (user.Password == hash)
    {
      User = user;
      return true;
    }
    return false;
  }
  
  /// <summary>
  /// Produces an MD5 hash string of the password
  /// </summary>
  /// <param name="password">password to hash</param>
  /// <returns>MD5 Hash string</returns>
  protected string EncryptPassword(string password)
  {
    // We use code page 1252 because that is what SQL Server uses.
    byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password);
    byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes);
    return Encoding.GetEncoding(1252).GetString(hashBytes);
  }
}

The key method implemented here is ValidateUser. It uses the Repository pattern to look the user up by name and then compares the stored password with the one passed to the method. My default repository is backed by nHibernate, but it could just as easily be plain ADO.NET or a fake repository for testing purposes. You will need to implement your own UserRepositoryFactory based on your data store. Note that I hash the supplied password before the comparison, because the passwords are stored hashed in the database.
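
The exact shape of IUserRepository is up to you; the providers only need to look a user up by name. Here is a minimal sketch of that contract along with a hypothetical in-memory implementation that is handy for unit testing the providers:

public interface IUserRepository
{
  User GetByUserName(string userName);
}

// Hypothetical test double; not part of the original implementation.
public class FakeUserRepository : IUserRepository
{
  private readonly IDictionary<string, User> _users =
    new Dictionary<string, User>(StringComparer.OrdinalIgnoreCase);

  public void Add(User user)
  {
    _users[user.UserName] = user;
  }

  public User GetByUserName(string userName)
  {
    User user;
    return userName != null && _users.TryGetValue(userName, out user) ? user : null;
  }
}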

Here is the Role Provider:

public class AdminRoleProvider : RoleProvider
{
  IUserRepository _repository;
  public AdminRoleProvider(): this(UserRepositoryFactory.GetRepository()) { }
  public AdminRoleProvider(IUserRepository repository) : base()
  {
    _repository = repository ?? UserRepositoryFactory.GetRepository();
  }
  public override bool IsUserInRole(string username, string roleName)
  {
    User user = _repository.GetByUserName(username);
    if(user!=null)
      return user.IsInRole(roleName);
    else
      return false;
  }
  public override string ApplicationName
  {
    get { throw new NotImplementedException(); }
    set { throw new NotImplementedException(); }
  }
  public override void AddUsersToRoles(string[] usernames, string[] roleNames)
  {
    throw new NotImplementedException();
  }
  public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames)
  {
    throw new NotImplementedException();
  }
  public override void CreateRole(string roleName)
  {
    throw new NotImplementedException();
  }
  public override bool DeleteRole(string roleName, bool throwOnPopulatedRole)
  {
    throw new NotImplementedException();
  }
  public override bool RoleExists(string roleName)
  {
    throw new NotImplementedException();
  }
  public override string[] GetRolesForUser(string username)
  {
    User user = _repository.GetByUserName(username);
    // Guard against unknown users, consistent with IsUserInRole above.
    if (user == null) return new string[0];
    string[] roles = new string[user.Role.Rights.Count + 1];
    roles[0] = user.Role.Description;
    int idx = 0;
    foreach (Right right in user.Role.Rights)
      roles[++idx] = right.Description;
    return roles;
  }
  public override string[] GetUsersInRole(string roleName)
  {
    throw new NotImplementedException();
  }
  public override string[] FindUsersInRole(string roleName, string usernameToMatch)
  {
    throw new NotImplementedException();
  }
  public override string[] GetAllRoles()
  {
    throw new NotImplementedException();
  }
}

Again, many methods of the base class are unimplemented because I did not need the functionality.
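
Because both providers accept an IUserRepository, they can also be exercised directly in a unit test, with no web.config wiring at all. A quick sketch using the hypothetical FakeUserRepository from above:

var repository = new FakeUserRepository();
// Seed the repository here with a user named "mattw" whose role is
// Administrator (omitted for brevity).
var roleProvider = new AdminRoleProvider(repository);
bool isAdmin = roleProvider.IsUserInRole("mattw", "Administrator");
string[] roles = roleProvider.GetRolesForUser("mattw");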

Wiring the providers in web.config

This is all the code we need at the “model” level for user authentication and role checking. Next we have to wire these classes up in web.config so that the application knows to use them. The following should be inside <system.web>:

<authentication mode="Forms" >
  <forms loginUrl="~/LoginAccount/LogOn" path="/" />
</authentication>
<authorization>
  <deny users="?"/>
</authorization>
<membership defaultProvider="AdminMemberProvider" userIsOnlineTimeWindow="15">
  <providers>
    <clear/>
    <add name="AdminMemberProvider" type="UserAdmin.DomainEntities.AdminMemberProvider, UserAdmin" />
  </providers>
</membership>
<roleManager defaultProvider="AdminRoleProvider" enabled="true" cacheRolesInCookie="true">
  <providers>
    <clear/>
    <add name="AdminRoleProvider" type="UserAdmin.DomainEntities.AdminRoleProvider, UserAdmin" />
  </providers>
</roleManager>
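
With the role manager enabled, ASP.NET wraps the authenticated user in a RolePrincipal whose IsInRole calls route through AdminRoleProvider, which means MVC's [Authorize] attribute can now filter on our roles and rights. "CanEditUsers" is a hypothetical right name:

[Authorize(Roles = "CanEditUsers")]
public class UserAdminController : Controller
{
  public ActionResult Index() { return View(); }
}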

Bringing in the Controller and View

The only thing left now is to code the controller and the view. First, our LoginAccountController needs to be able to log users in and out of the application:

public class LoginAccountController : Controller
{
  AdminMemberProvider provider = (AdminMemberProvider)Membership.Provider;
  public LoginAccountController() {}
  public ActionResult LogOn() { return View(); }

  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult LogOn(string userName, string password, string returnUrl)
  {
    if (!ValidateLogOn(userName, password))
      return View();
    // ValidateLogOn has already called provider.ValidateUser, which stores
    // the authenticated user on the provider's User property.
    User user = provider.User;
    FormsAuthentication.SetAuthCookie(user.UserName, false);
    if (!String.IsNullOrEmpty(returnUrl) && returnUrl != "/")
      return Redirect(returnUrl);
    else
      return RedirectToAction("Index", "Home");
  }

  public ActionResult LogOff()
  {
    FormsAuthentication.SignOut();
    return RedirectToAction("Index", "Home");
  }

  private bool ValidateLogOn(string userName, string password)
  {
    if (String.IsNullOrEmpty(userName))
    {
      ModelState.AddModelError("username", "You must specify a username.");
    }
    if (String.IsNullOrEmpty(password))
    {
      ModelState.AddModelError("password", "You must specify a password.");
    }
    if (!provider.ValidateUser(userName, password))
    {
      ModelState.AddModelError("_FORM", "The username or password provided is incorrect.");
    }
    return ModelState.IsValid;
  }
}
<div id="errorpanel">
    <%= Html.ValidationSummary() %></div>

<div class="signinbox">
   <% using (Html.BeginForm()) { %>
     <table>
       <tr>
         <td>Email:</td>
         <td>
           <%= Html.TextBox("username", null, new { @class = "userbox"}) %>
         </td>
        </tr>
        <tr>
          <td>Password:</td>
          <td>
            <%= Html.Password("password", null, new { @class = "password"}) %>
          </td>
        </tr>
        <tr>
          <td>
            <input type="image" src="/Content/images/buttonLogin.gif"
              alt="Login" width="80" height="20" border="0" id="Image1"
              name="Image1">
          </td>
          <td align="right" valign="bottom"></td>
        </tr>
      </table>
   <% } %>
</div>

That’s it. It’s really pretty simple.