How Do I? – AJAX


I have been seeing a lot of people asking how to get AJAX working in their sites. They can get a sample site up and running, but when they begin dropping AJAX into their own site, it fails.

The error encountered here concerns the AJAX controls not being recognized, and it can be fixed rather easily by opening your test project and doing a bit of copying. I am first going to focus on getting the site working; we can then look at other issues.

Migrating Code to Make AJAX Work

The first task is getting your existing site working with the AJAX controls.

Copy the <configSections> section

<configSections> contains the information necessary to wire up your AJAX application. You will need most, if not all, of these to get things working; which ones you can safely drop is covered below. NOTE: This is a 3.5 AJAX site. The version numbers are different in 2.0.

<configSections>
  <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
    <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
      <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
        <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
        <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
        <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
        <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
      </sectionGroup>
    </sectionGroup>
  </sectionGroup>
</configSections>

Nearly all of these are needed to get AJAX working. If you are not using JSON, you can drop the jsonSerialization element, but I see no reason to do so. You can drop a few others as well, like profileService and roleService. Note, however, that dropping them gains you nothing and will force you to add them back if you ever decide to use those features.

Copy the <assemblies> section

In this section, you are setting up the working bits for the pages by importing the assemblies. There are ways around this (declaring page by page), but this is the easiest.

<add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

If you are using additional AJAX bits, you may have other declarations here to copy over.

Copy the <pages> section

This is what allows you to add the AJAX controls to your pages with the asp: tag prefix. You can also register on a page by page basis and avoid copying this (a sample directive follows the snippet below), but why?

<pages>
  <controls>
    <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </controls>
</pages>
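
If you would rather register on a page by page basis instead of copying this section, the equivalent directive at the top of an individual ASPX page looks something like this (the ajax prefix here is arbitrary; repeat the directive for the WebControls namespace if you use those controls):

<%@ Register TagPrefix="ajax" Namespace="System.Web.UI" Assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" %>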

Copy the <httpHandlers> and <httpModules> sections

These sections make it possible for AJAX calls to be handled by your application. You will not get a compile error if you do not copy them, but you sure will not get a working application either.

<httpHandlers>
  <remove verb="*" path="*.asmx"/>
  <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
</httpHandlers>
<httpModules>
  <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</httpModules>

Set up for IIS 7?

There are two additional sections you will need if your site is going to IIS 7. These sections do very little for development and can, therefore, be considered optional. If you are developing in .NET 3.5, I would copy them anyway.

<system.webServer>
  <validation validateIntegratedModeConfiguration="false"/>
  <modules>
    <remove name="ScriptModule" />
    <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </modules>
  <handlers>
    <remove name="WebServiceHandlerFactory-Integrated"/>
    <remove name="ScriptHandlerFactory" />
    <remove name="ScriptHandlerFactoryAppServices" />
    <remove name="ScriptResource" />
    <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode"
         type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode"
         type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  </handlers>
</system.webServer>

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
      <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
    </dependentAssembly>
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/>
      <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
    </dependentAssembly>
  </assemblyBinding>
</runtime>
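
With all of these sections in place, a quick sanity check is a page containing a ScriptManager and an UpdatePanel, the two controls most AJAX pages start with (the control IDs here are just placeholders). If this markup renders and the button posts back without a full page refresh, the controls are registered correctly:

<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:Label ID="TimeLabel" runat="server" Text="(waiting)" />
    <asp:Button ID="RefreshButton" runat="server" Text="Refresh" />
  </ContentTemplate>
</asp:UpdatePanel>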

Older Versions of ASP.NET AJAX

What if you are using an older version of ASP.NET AJAX? Simple: copy from a sample site that uses the older version. In general, the version number and possibly the PublicKeyToken are the main things that will change. There may also be a declaration or two not present in the older version. If you copy from a sample site using the older version, however, you should be fine.

Additional AJAX Bits?

Since the 3.5 release, Microsoft has been working on newer extensions, including some AJAX bits. If you are using them, you will have to change the version numbers from 3.5.0.0 to 3.6.0.0. There is also a new section, but you only need it if you are using Dynamic Data:

<section name="dynamicData" type="System.Web.DynamicData.DynamicDataControlsSection" requirePermission="false" allowDefinition="MachineToApplication" />

Under <assemblies>, it is the same story: change the System.Web.Extensions version number, and add the following only if you are using Dynamic Data:

<add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>

With the extensions, there are additional tags in the <pages> section, but you only need to add them if you are using the corresponding features (Dynamic Data or Silverlight):

<add tagPrefix="asp" namespace="System.Web.UI.SilverlightControls" assembly="System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add tagPrefix="asp" namespace="System.Web.DynamicData" assembly="System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add tagPrefix="asp" tagName="DynamicFilter" src="~/App_Shared/DynamicDataFields/FilterUserControl.ascx" />

Under <httpHandlers>, you only need to change the version numbers. Under <httpModules>, there are additional tags, but only if you are using Dynamic Data or MVC:

<add name="DynamicDataModule" type="System.Web.DynamicData.DynamicDataHttpModule, System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add name="UrlRoutingModule" type="System.Web.Mvc.UrlRoutingModule, System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

Under <system.web.extensions>, there are bits for Dynamic Data; you do not have to add these for AJAX alone. Finally, <system.webServer> has additional bits if you are using either Dynamic Data or MVC. Again, the only thing you will change for AJAX alone is the version number.

If you are using the AJAX Control Toolkit, there is also a designer tag in the <assemblies> section. Other than that, you are golden.

Summary

It is quite easy to prep an older site for AJAX if you simply move the web.config bits over from a site created as an AJAX site. After the copy and paste, the controls should register fine.

I meant to find Scott Guthrie’s older blog entry on this, but have not had the chance. Maybe later? It was in the Atlas timeframe (Atlas being the code name for ASP.NET AJAX), so the bits are older (primarily different version numbers).

Peace and Grace,
Greg


James Tembo, Detective – A Perspective on Zambia


A few weeks ago, I wrote a review of the movie The Great Debaters. In response, I was contacted by Kevin Hansston to look at a film called James Tembo, Detective. I ordered a copy off their site that night (http://www.jamestembo.com) and received it a couple of days ago. While many might have asked for a review copy, I did not want anyone to think I was writing a "for hire" review.

The film is a re-imagining of the Resurrection story in modern Zambia. In the story, a prophet named Joseph has been killed by the religious leaders, who hire a mercenary "detective" named James Tembo. His job is to find the body, or at least write a report that glosses it over.

If you are looking for a slick Hollywood film with professional acting, this is not your vehicle. The production values are a bit low, as most of the film was shot with a small video camera inside local establishments with very obvious lights. The camera work is occasionally shaky, especially in the opening sequence, and there is often rather loud ambient sound (bar patrons). In many ways, it is like a public access program with a better story line and better music. It is not the greatest movie as far as editing goes, and the script has points where you have to catch up with the plot twists. Now that we have the bad out of the way, I can talk about random thoughts on the film. 🙂

Regardless of the acting, sound, lighting and script, I still found myself intrigued. A bit of my history might reveal why that statement is "profound": I have a degree in film and video and generally pay attention to all aspects of a film. If a film breaks the illusion, I am generally all over it. My wife has asked me to stop predicting the endings of films, as most use the standard three-act model perfected by Syd Field. This film should have been fertile ground for criticism, but I found myself going deeper instead.

Perhaps it is the rawness that had me looking for shining moments rather than tearing the film apart. It may also be a change in perspective due to my own adversity (Miranda’s battle with cancer). I am not sure. Either way, the values that would usually have me heavily criticizing had me compelled.

One interesting thing is seeing the differences in the salvation message. I am not sure whether these differences are completely cultural or at least partially due to the views of the missionaries involved. Either way, a Christian who looks deeper at the film can learn something new about how God works. There is something refreshing in a simple display of the message, especially one presented so genuinely (unlike the kabuki-theater-style presentations on Christian television in the United States).

I also found the film an interesting way to view another culture. In the bar scenes, there are signs with numbers like 16000 on the beer bottles. These are prices; it will cost you 16000 Zambian kwacha to get a beer, or about $4.50 US at the current rate of exchange. At the time the film was shot, the exchange rate was about 30000 to 1 (it is currently around 3500 to 1), which would have made a beer roughly 50 US cents. As most things are negotiable in Zambia, the price is not necessarily what you would actually pay.

One of the reasons I was intrigued by the prices was a blog entry Scott Hanselman posted on Zimbabwe (Scott’s wife is from Zimbabwe). Zambia is just north of Zimbabwe and is experiencing the same type of runaway inflation, only to a far lesser extent; it is one of the countries refugees from Zimbabwe are fleeing to. As you watch the film, you see both something modern (the clothing) and something very old. You also see the ravages of poverty in the few scenes shot around the city.

The filmmakers hope that this film will help raise money for Zambia, or more precisely to aid their country. The film is being sold exclusively through the site. I am not sure they will reach this goal, at least not through human means.

Who is the best audience for this film? I can see two groups:

  1. Christians who wish to see a different perspective on the gospel message.
  2. Anyone interested in seeing life in Zambia, as revealed through the background of the film.

I must say this is the first film I have ever viewed from Zambia, much less anywhere in Africa other than South Africa. We simply do not see many films from this region on our video shelves. It is possibly this fact that explains why we know so little of the poverty that wracks this part of the world.

Peace and Grace,
Greg

Twittering Away My Time


I have been using Twitter for about four months now and decided to give my feedback. For those not familiar, Twitter is a "social networking" tool, but not in the traditional sense. Twitter allows you to follow people, which means you get messages when they post what they are doing. When an individual posts a message, it is called a tweet (how tweet ;->).

The good: I often find out about the latest Microsoft releases through my Twitter client. I also get reminders of events I have forgotten, as someone tweets the event and I end up thinking "oh crap, I forgot another one" (forgetting is easy now, with the focus on cancer; we lost one of the kids we know on Sunday). It is also useful for sending messages back and forth to friends, as long as you do not mind that the world is potentially listening in.

The bad: Because many twitterers over-tweet (some of whom are twidiots; sorry, a word invented as part of a tweetfest), you can be inundated with way more messages than you can possibly digest. This can be cured by cutting down on notifications for the people you are following. Now, you might not find this to be a bad thing; but you might also live in a basement and never see sunlight. 🙂

The ugly: I am really not interested when someone a world away is stuck in traffic. And the tweet about tweeting from the bathroom was just a bit TMI for my tastes. Good thing there are only 140 characters allowed. 🙂

DISCLAIMER: Some people want to know when {Name removed to protect the innocent} is {activity removed for sanity}, not that there is anything wrong with that 😉

I have switched to passive tweeting, for the most part, and turned off the push subscription, as I was getting a really bad signal-to-noise ratio for my tastes. Unfortunately, this puts the onus on me to go out and actively seek messages, which got me to add Tiny Twitter to my phone.

One feature that might be neat would be the ability to add a "category" to a tweet, so "I am just writing nonsense" could be flagged and I could decide to ignore it. Of course, tweeters could ignore the category bits, so I guess this is shaky at best. Plus, some actually do tweet via SMS (and not through a client; shudder).

Currently, Twitter is one of the best ways to follow Microsoft employees, and if you listen, you often get wind of a new product before the general public. There is not much news in 140 characters, but enough that you can keep an eye out for the release. Since Twitter has spread like wildfire through the MS community, it is getting close to the point where you cannot live without it.

Peace and Grace,
Greg

Microsoft FINALLY Doing Something About SPAM


I write this as a backhanded compliment. This morning, Microsoft is FINALLY doing something about the SPAM that has been invading its NNTP server for so many weeks. Today alone, 21 SPAM headers (all from global-replica-watch.com) got to my client. Fortunately, the messages were deleted a bit later and are no longer available on the server.

Over the past few weeks, this company has sent 20-30 SPAM messages daily to most of the Microsoft groups, and nothing had been done about it. It was hypothesized that Microsoft was dumping NNTP for web support. I wonder if enough stink was raised at last week's MVP Summit to finally get someone on the problem, or if it was just a misplaced filter. As the message headers are not all gone yet, I would assume that someone is manually deleting the SPAM.

I am glad that something is finally being done about the problem. I hate to see that it had to go on for so long before there was any action.

Peace and Grace,
Greg

CaseComplete


Entry Updated 4/17/2008 at 12:26 AM

It is rare that I crow about a product the day I install it; I am generally a bit too cautious for that. But this is a case where I will take the chance, even at the risk of finding something negative about the product later.

Background

I am currently in the midst of writing use cases for my organization. I downloaded the MFC templates and started using them for usage scenarios. I did not like the way the usage scenarios were set up, so I added my own bits to the mix. Then I slogged through the pain of setting up specific usage scenarios from a list, going up and down. I also spent a lot of time setting up actor descriptions in one document and a glossary in another. The system is complex enough that I had only a small number of the many usage scenarios (MS use cases) written over many pages of documents.

This exercise was compounded when I had to refactor (there is a word you do not often hear alongside use cases). I ended up completely reorganizing the document, wasting precious time. What a pain.

CaseComplete

I then noticed an offer for an NFR license for CaseComplete through the MVP program. I mention this primarily for full disclosure, as I detest blog writers who write bogus reviews to get free products. If you look back to last month, you will see how much I detest it when these jerkwads SPAM up UseNet to bolster their sales. Damn, why do people keep leaving these soapboxes lying around everywhere?

I am not sure I would have sought out CaseComplete if it were not for the offer, which would have been to my detriment. I installed it today and wish I had installed it the day I got the key.

The first thing I did was play with the software, and I was rather unimpressed. I saw the potential, but it seemed I was treading a lot of water. We organize use cases by business desire, usage and development time. It can get complex, but this allows you to escalate features (perhaps less-used, complex-to-develop features) that an executive absolutely loves. This was not present in the product, or so I thought (more on this in a minute).

The general rule when I feel this way (treading water) is RTFM (Read The Funny Manual for those with sensibilities; you can make the F mean whatever you want). The manual, in this case, was the Tutorial.

After reading through a printed copy (I had to take it with me somewhere and did not want to tote the laptop) and making some notes, I began to see the value in the product. One half hour after returning to the computer, I had created a number of users and use cases, organized in packages (and tied to users). I then began to click on the individual use cases and fill them in with details. Along the way, I created Glossary entries to define terms used in the company (glossaries are mandatory, trust me).

Now, this is not the suggested order in the tutorial, but I was already past the define actor/define goal stage in the process. On the next assignment, where I have not already started in Word, I will probably try the tutorial route and do actors >> goals >> Generate Use Cases >> etc.

Back to treading water: when I created my first use case, I noticed I was altering my use case priority to fit their scheme. Then I RTFMed and found the ability to add my own custom fields to the use case definition. Shazam!

Organizing use cases, and being able to easily refactor them, add additional users, link use cases together, etc., is great. What really jazzed me and got me writing was the ability to create a wide variety of documentation from the tool. And, if that is not enough, there is the ability to create my own document templates to slice and dice the use cases I have created. Just before I started this article, I also found an export-to-Microsoft-Project option. I do not have enough information entered yet to get a good Project file, but the export feature will certainly save me a lot of time when I do. Woo hoo!!!

Now, I was bummed for a bit when I realized I had a use case in the wrong package. Double-clicking, I found the use case number was still under the other package's numbering scheme. Bummer. So I recreated it; after all, it was only a name. I RTFMed again and found a renumber feature. With one quick dialog box, I can renumber any number of packages. And, since it is essentially a database, it renumbers all of the links as well. Oh, and I can turn on a track-changes feature before turning this loose on another "business analyst" (the hat I am wearing right now).

My Findings

What started as a painful week has now turned into fun. So much so that I am here at 10 PM getting ready to finish up a few more use cases. I had been dreading getting back into this process, as it was taking WAY too much time.

The main sticking point, for many, will be the price. At $595 for a single developer, it may be a bit steep, especially for smaller shops. On the other hand, if it saves you a couple of days of work (especially at consultant's rates), you have already paid for the product with the requirements for a single project. And, compared to the Word document route, I think it can easily save that many hours at an advanced developer's pay grade.

I have now googled and not found a decent alternative, so I cannot definitively state that CaseComplete is the best of breed. I do see a wide variety of UML tools and templates, however, which would lead me back to writing in Word. There is one called Use Case Studio, which sounds promising, but there are no screenshots. Perhaps someone has some other links?

After playing for a couple of hours, I am even more jazzed. While I did find that you can make the product error out by playing with the preferred field definitions (ouch), I absolutely love the documentation and the fact that I can customize its look. What was taking me days is now taking only hours. In addition, as I play more, I am finding missing requirements just from the way the product is organized.

Peace and Grace,
Greg

Rules of Safe Coding


This is a generic post about some things you should avoid in your code and how you can do them better: the rules. I do not have a particular topic in mind, and this is not an exhaustive list; these are just a few things I have thought about as I deal with problems in the code base I am maintaining (and refactoring).

Test Your Strings

There is a temptation, in coding, to assume that today’s state is the norm for all time and that things will not deviate. We end up, therefore, writing code like this:

private string CleanAddress(string street)
{
    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

The idea is that 0 Any Street is a bogus address, so the leading "0 " gets stripped. The problem is that one day you might switch from a reverse geocoder that places a 0 in an empty street field to one that returns an empty string. The line with Substring(0,2) now throws an exception, as the string does not have two characters.

If we adopt an assume nothing approach (safe coding), we would test the length of the string before attempting to substring it. Without any real refactoring, this can be as simple as:

private string CleanAddress(string street)
{
    if(street.Length < 2)
         return street;

    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

I am not here to argue the intent, only that this simple test helps us avoid tons of unnecessary pain. Even safer, we would also test that the string is not null. As we are probably appending this to another string somewhere, we will return an empty string in that case.

private string CleanAddress(string street)
{
    if(street == null)
         return string.Empty;

    if(street.Length < 2)
         return street;

    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

This is not the best code in the world, so we should refactor (a sketch follows below), but it does check the two conditions that could error out this particular routine (a null string or a short string).

The choice of what to do with strings that do not adhere to this rule (raise your own exception, ignore it, return another value) is up to you, but do not assume that strings will always adhere to your expectations.
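
As a sketch of the refactoring hinted at above, the whole routine can collapse to a couple of guard clauses using string.IsNullOrEmpty and StartsWith (both standard framework methods):

private string CleanAddress(string street)
{
    // Guard against null or short strings before inspecting them
    if (string.IsNullOrEmpty(street) || street.Length < 2)
        return street ?? string.Empty;

    // Strip the bogus "0 " prefix some reverse geocoders emit
    return street.StartsWith("0 ") ? street.Substring(2) : street;
}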

Input Checking

Okay, so I am cheating a bit here, as this is what you just did when you checked a string for null before attempting to do something with it, or checked its length before you ripped it apart. But the concept applies to more than just strings.

For example, what happens when you divide an integer by zero? You end up with an exception. While you can certainly handle this exception, and there are times you should, you can avoid the overhead of an exception thrown from deep in your code by checking the value first. In fact, assuming we did want to throw an exception, we could do something like this:

private int DivideApples(int apples, int people)
{
     if(people == 0)
         throw new DivideByZeroException();

     return apples / people;
}

This is not extremely elegant, and I would certainly consider creating my own exception to give a better message than "divide by zero", something like "the number of people dividing apples must be greater than zero".
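
A minimal sketch of such an exception (the class name and message are mine, invented for the example):

public class NoPeopleToShareApplesException : Exception
{
    public NoPeopleToShareApplesException()
        : base("The number of people dividing apples must be greater than zero.")
    {
    }
}

The guard clause then becomes throw new NoPeopleToShareApplesException(); and the caller gets a message that actually explains the rule.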

Output Checking

The opposite of input checking is output checking: checking a value before it goes out and adjusting the output (for example, throwing an exception) when the value is out of range. Many books suggest output checking every routine, but the practice is not very common.
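
As an illustration, output checking amounts to a post-condition guard just before the return (the discount rule here is invented for the example):

private decimal CalculateDiscount(decimal orderTotal)
{
    decimal discount = orderTotal * 0.1m;

    // Output check: a discount should never be negative or exceed the order total
    if (discount < 0 || discount > orderTotal)
        throw new InvalidOperationException("Calculated discount is out of range.");

    return discount;
}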

Never Trust User Input

This one should go without saying. First, a user might not understand the rules of input and enter something that is invalid. If you are going to perform a string operation on a user's input, you can end up with an empty value that causes problems; this is what was covered in the first section.

Another issue is invalid data. Suppose you have an email field that is required. If you do not check the user input, you can end up with an email address like "joe". It does not blow up the application, but it gives you invalid data in your database and makes it impossible to send your user an email message.

Why talk about user input in a coding blog entry? After all, you have the ability to do checking on the user interface through validation controls, right? Yes, but there are ways to circumvent user-input validation. It is not as easy in ASP.NET as it was in earlier frameworks, but a clever enough hacker might find a way.

To sum this up: validate input on the server, even if it is checked on the client. It is better to spin a few cycles checking something that is correct than to throw an unfriendly exception at a user or let corrupt data make it into your database.
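
As a sketch of that server side check for the email example (System.Net.Mail.MailAddress is a standard framework class, so this needs a using for System.Net.Mail; the helper name is mine):

private bool IsValidEmail(string email)
{
    if (string.IsNullOrEmpty(email))
        return false;

    try
    {
        // MailAddress throws a FormatException for values like "joe"
        new MailAddress(email);
        return true;
    }
    catch (FormatException)
    {
        return false;
    }
}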

The Point

All of these are essentially the same rule. Any time data crosses a boundary, whether from the user to your application or from layer to layer within your application, you should assume it might not adhere to your application's rules. While issues are less likely when you are getting information out of your own database, they are not impossible, unless you are the only one who can EVER put data into the database … AND you remember your rules, no matter how far in the future you are still using it. If the data was not created in your class, assume it could be incorrect. Even then, you should consider being safe and coding a check.

Peace and Grace,
Greg

TDD and the Separation of Concerns


I have seen a lot of posts, primarily in the forums, from people stating that TDD does not work for web applications. The typical post goes something like this:

So I’m starting a new ASP.NET project and want to proceed with a test-driven approach as much as possible. I have decided that I will certainly NOT use the ASP.NET MVC framework, and instead go with the traditional web forms approach.

The first issue I ran into is that, in order to create my unit tests, I’d need mocks for Application, Session, Context etc. Thinking that through, these mocks would have to do practically *everything* offered by the Application, Session etc. objects. Suddenly I’m *duplicating* much of what ASP.NET is – all for the sake of unit testing my code. That seems like a ridiculous amount of work to get automated unit tests.

Am I missing something? Or is it simply not possible to use TDD for ASP.NET Web forms application development?

The questions here come from ignorance, both of what ASP.NET should be and of what TDD is. To understand this better, let's look at separation of concerns in a "traditional" (i.e., non-MVC) ASP.NET page.

Separation of Concerns

The separation of concerns, for this article, means breaking your application down into its constituent parts. You have a UI (the ASPX page), some business logic (a business assembly) and data. We should all be familiar with this model, as Microsoft has been pushing n-tier development for ages.

So, you start with an ASPX page. In this case, we will assume it wants to be a blog page. We create the page (blog.aspx) and we end up with code that looks like this (providing, of course, we have not whacked the templates):

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public partial class blog : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {

    }
}

As this is a blog page, we need to fill it in with the blog information, so we alter the Page_Load to read:

    protected void Page_Load(object sender, EventArgs e)
    {
          BindBlogRepeater();
    }

And we create a routine that binds the blog to the Repeater control (the actual contents and how we might display this are inconsequential).

private void BindBlogRepeater()
{
    //First we need some data
    BlogRepeater.DataSource = GetBlogData().Tables[0].DefaultView;
    BlogRepeater.DataBind();
}

Then, we know we need some data so we create a routine to pull the data.

private DataSet GetBlogData()
{
    //Note: requires a using for System.Data.SqlClient
    DataSet ds = new DataSet();
    SqlConnection connection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["myConnString"].ConnectionString);
    //More code to fetch data here

    return ds;
}

Then we write a forum post on how this is completely untestable using TDD. To which I say, "you are absolutely right." But we have also created a single executable with the UI, business and data tiers smashed together. No wonder we cannot use TDD.

The problem, if we examine it more deeply, is that we were looking at the application from the UI end. The same problem exists if we look at the application from the database, BTW. Our storage, UI and application (or applications, if we are using an SOA approach) are different concerns. They should be designed separately.

A Better Way

In a better world, we would start with our facade layer (the moving parts that get the data to bind) and place it in a library. We would write a test that ensures the proper data is pulled in our test environment. The UI page could then be as simple as:

public partial class blog : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        BlogRepeater.DataSource = BlogFacade.GetBlogData();
        BlogRepeater.DataBind();
    }
}

The blog facade class would then contact the database tier object and get the data. This leaves me two classes I can start testing. The first is my blog facade.

[TestMethod]
public void TestBlogFacade_GetBlogDataSet()
{
     //From set up
     BlogDataSet expected = GetBlogDataExpected();
     BlogDataSet actual = BlogFacade.GetBlogData();

     //Example elaborate test: compare every row and column
     Assert.AreEqual(expected.Blog.Rows.Count, actual.Blog.Rows.Count);

     for(int i = 0; i < expected.Blog.Rows.Count; i++)
     {
          DataRow expectedRow = expected.Blog.Rows[i];
          DataRow actualRow = actual.Blog.Rows[i];

          for(int j = 0; j < expected.Blog.Columns.Count; j++)
          {
               Assert.AreEqual(expectedRow[j], actualRow[j]);
          }
     }
}

I would add some helper methods if this were a real test. The point, however, is not the test, but exercising the code in the blog facade. We can do the same for the data tier classes.

"But I can’t test the ASPX page!" is one complaint I hear to this method. The answer is that you can. One option is nUnitASP, although it is no longer a supported project. But, the other side of the coin is "what do you really have to test on your UI?" Think about this for a second.

  • The submit button might not work – that can be caught in your informal tests. In addition, the Microsoft delegates for event handlers rarely fail unless you do something strange in your code. If you have almost no code in your page, the likelihood of the CLR code failing is extremely slim.
  • It might not bind properly – once again, we are dealing with something that rarely happens without improper code in your UI.

If 99% of your code lies in the middle tier and back end, how much is likely to fail in your user interface? You might say 1%, but since the majority of that code simply uses Microsoft-built methods, the likelihood of any of those methods failing is fairly slim.

Am I saying you should not test your UI? Not at all. But if the majority of your code is on the back end, most UI testing will be either manual UI testing or acceptance testing, and neither of those is completely automated in most shops.

Improving Our Tests

Thus far, the tests we have written would most likely exercise the entire stack. While I do not have time to write a good example, it is far better to add mocks for the middle tier and data components to supply the data for you. I will have to blog on mocks later, as it is a big topic by itself.

But then I am not testing my data access? No, you are not. But the majority of that is handled by your database server, which reliably hands out data every time you ask; the likelihood of that code going bad is slim, just like the event handlers in the UI. If you have stored procedures, you do have a chance of bad code, but unit tests on .NET code are not the best way to test stored procedures.

The mock framework you use is up to you. I have had good luck with Rhino Mocks, but there is even a lightweight mock "engine" included in NUnit these days.
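
Pending that mocks post, a hand-rolled fake gives the flavor without any framework at all (the interface and class names here are mine, invented for illustration):

public interface IBlogRepository
{
    BlogDataSet GetBlogData();
}

// A fake that returns canned data so the facade test never touches the database
public class FakeBlogRepository : IBlogRepository
{
    private readonly BlogDataSet _canned;

    public FakeBlogRepository(BlogDataSet canned)
    {
        _canned = canned;
    }

    public BlogDataSet GetBlogData()
    {
        return _canned;
    }
}

The facade takes an IBlogRepository (constructor injection), the site hands it the SQL-backed implementation, and the test hands it the fake loaded with expected data.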

Improving Our Application

On the library side, we have a blog facade that knows too much. The signature is like this:

public BlogDataSet GetBlogData()
{
}

This means that somewhere in the blog facade component, there is a line like this:

string connString = ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString;

Ouch! We should change our signature to something more like this:

public BlogDataSet GetBlogData(string connectionString)
{
}

Note that I am not suggesting we should never have configuration data below the UI. I am just stating that we should feed the business layer components rather than have them psychically determine how to contact other tiers. If this is UI-specific configuration data, it should be passed in, and in most web application scenarios, that is precisely what it is. If you move from one instance of a web application to another instance of the same application (perhaps co-branded with segregated data), the connection string changes, but it is UI dependent. I am getting off on a tangent. 🙂
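
In the page, that ends up looking something like this (assuming the connection string name from earlier; ConfigurationManager requires a reference to System.Configuration):

protected void Page_Load(object sender, EventArgs e)
{
    // The UI owns the configuration and feeds it to the facade
    string connectionString =
        ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString;

    BlogRepeater.DataSource = BlogFacade.GetBlogData(connectionString);
    BlogRepeater.DataBind();
}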

The final goal you should keep in mind is this: what if my business owners wanted the same functionality in an XXX app, where XXX could mean a Windows Forms application, WPF application, Silverlight application or even a console application? Anything in my code that ties me to a particular kind of UI is bad. Enough said.

MVC Framework

One of the neatest things about the MVC Framework, at least for me, is it forces developers to separate concerns. You cannot place code in your UI elements, which forces you to design UIs that act as UIs and business/data logic that acts as business/data logic.

On the downside, it does not force separation to the level it needs to be; you can still write WAY too much code in your controller. As the controller is essentially a facade that finds the model and marries it to a view (yes, this is an oversimplified explanation), it should be very thin. To Microsoft's credit, they are showing how thin the controller should be in their examples (unlike old ASP.NET examples that showed you how to jam all of your code either into the page or the code-behind).
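
As a rough sketch of "thin," using the shape the MVC bits eventually shipped with (the controller and model names here are mine for illustration):

public class BlogController : Controller
{
    // The controller only locates the model and hands it to the view
    public ActionResult Index()
    {
        BlogDataSet blogData = BlogFacade.GetBlogData();
        return View(blogData);
    }
}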

Does this mean I think you should stay away from MVC? Certainly not. It moves you one step closer to the UI, shaving away the inability to test "event handlers": in MVC, you can test the controller methods, which are called when your user clicks buttons. I am saying that you can get some of the same benefits MVC gives you (better yet, forces you into) if you consciously build your own applications as libraries.

For the record, I think the MVC Framework is a great direction for MS to head in. Once the tools catch up with the framework, it will be dynamite.

End Bits

While I rambled a bit, here are the two things you should take from this to make TDD work for your web applications:

  1. Keep your code-behind VERY thin. The second you start writing code that does something more than bind (outgoing) or gather (incoming), you are beginning to cross the line toward a monolithic application.
  2. Think of your application as a set of libraries called by the UI. Design the working parts of your applications as methods on your back end rather than as extensions of event handlers on your front end.

Hope this helps.

Peace and Grace,
Greg