How Do I? – AJAX


I have been seeing a lot of people asking how they get AJAX working in their site. They can get a sample site up and running, but when they begin dropping AJAX in their own site, it fails.

The error encountered here concerns the AJAX controls not being recognized, and it can be fixed rather easily by opening your test project and doing a bit of copying. I am first going to focus on getting the site working. We can then look at other issues.

Migrating Code to Make AJAX work

The first task is getting a current site working.

Copy the <configSections> section

<configSections> contains the information necessary to wire up your AJAX application. You will need most, if not all, of these entries to get things working. NOTE: This is a 3.5 AJAX site; the version numbers are different in 2.0.

<configSections>
  <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
    <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
      <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
      <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
        <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
        <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
        <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
        <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
      </sectionGroup>
    </sectionGroup>
  </sectionGroup>
</configSections>

Nearly all of these are needed to get AJAX working. If you do not use JSON, you can drop the jsonSerialization element, but I see no reason to do so. There are some others you can drop as well, like profileService and roleService. Note, however, that dropping them gains you nothing and will force you to add them back if you ever decide to use those features.

Copy the <assemblies> section

In this section, you are setting up the working bits for the pages by importing the assemblies. There are ways around this (declaring page by page), but this is the easiest.

<add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

If you are using additional AJAX bits, you may have other declarations here to copy over.

Copy the <pages> section

This is what allows you to add the AJAX controls to your pages. You can also register the controls on a page-by-page basis and avoid copying this section, but why bother?

<pages>
  <controls>
    <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </controls>
</pages>

Copy the <httpHandlers> and <httpModules> sections

These sections make it possible for AJAX calls to be handled by your application. You will not get a compile error if you do not copy them, but you sure will not get a working application either.

<httpHandlers>
  <remove verb="*" path="*.asmx"/>
  <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
</httpHandlers>
<httpModules>
  <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</httpModules>

Set up for IIS 7?

There are two additional sections you will need if your site is going to IIS 7. These sections do very little during development and can, therefore, be considered optional there. If you are developing in .NET 3.5, I would copy them anyway.

<system.webServer>
  <validation validateIntegratedModeConfiguration="false"/>
  <modules>
    <remove name="ScriptModule" />
    <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
  </modules>
  <handlers>
    <remove name="WebServiceHandlerFactory-Integrated"/>
    <remove name="ScriptHandlerFactory" />
    <remove name="ScriptHandlerFactoryAppServices" />
    <remove name="ScriptResource" />
    <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode"
         type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode"
         type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  </handlers>
</system.webServer>

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
      <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
    </dependentAssembly>
    <dependentAssembly>
      <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/>
      <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
    </dependentAssembly>
  </assemblyBinding>
</runtime>

Older Versions of ASP.NET AJAX

What if you are using an older version of ASP.NET AJAX? Simple: copy from a sample site that uses that older version. In general, the version number (and possibly the PublicKeyToken) is the main thing that will change, and there may be a declaration or two that is not present in the older version. If you copy from a sample site using the older version, however, you should be fine.

Additional AJAX Bits?

Since the 3.5 release, Microsoft has been working on newer extensions, including some AJAX bits. If you are using them, you will have to change the version numbers from 3.5.0.0 to 3.6.0.0. There is also a new section name, but you only need it if you are using dynamic data:

<section name="dynamicData" type="System.Web.DynamicData.DynamicDataControlsSection" requirePermission="false" allowDefinition="MachineToApplication" />

Under <assemblies>, it is the same story: change the version number, and only add the following if you are using Dynamic Data:

<add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>

With the extensions, there are additional tags in the <pages> section, but you only need to add these if you are using the corresponding features (Dynamic Data or Silverlight):

<add tagPrefix="asp" namespace="System.Web.UI.SilverlightControls" assembly="System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add tagPrefix="asp" namespace="System.Web.DynamicData" assembly="System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add tagPrefix="asp" tagName="DynamicFilter" src="~/App_Shared/DynamicDataFields/FilterUserControl.ascx" />

Under <httpHandlers> you only need to change the version numbers. Under <httpModules> you have additional tags, but only if you are using Dynamic Data or MVC:

<add name="DynamicDataModule" type="System.Web.DynamicData.DynamicDataHttpModule, System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
<add name="UrlRoutingModule" type="System.Web.Mvc.UrlRoutingModule, System.Web.Extensions, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />

Under <system.web.extensions>, there are bits for Dynamic Data; you do not have to add these for AJAX alone. Finally, <system.webServer> has additional bits if you are using either Dynamic Data or MVC. The only thing you will change for AJAX alone is the version number.

If you are using the AJAX Control Toolkit, there is a designer tag in the <assemblies> section. Other than that, you are golden.

Summary

It is quite easy to prep an older site for AJAX, if you simply move web.config bits over from a site created as an AJAX site. After copy and paste, the controls should register fine.

I meant to find Scott Guthrie’s older blog entry on this, but have not had the chance. Maybe later? It was from the Atlas timeframe (Atlas being the code name for ASP.NET AJAX), so the bits are older (primarily different version numbers).

Peace and Grace,
Greg

James Tembo, Detective – A Perspective on Zambia


A few weeks ago, I wrote a review for the movie The Great Debaters. In response, I was contacted by Kevin Hansston to look at a film called James Tembo, Detective. I ordered a copy off their site that night (http://www.jamestembo.com) and received it a couple of days ago. While many might have asked for a review copy, I did not want to give any impression that I might be writing a "for hire" review.

The film is a re-imagining of the Resurrection story in modern Zambia. In the story, a prophet named Joseph has been killed by the religious leaders, who hire a mercenary "detective" named James Tembo. His job is to find the body, or at least write a report that glosses it over.

If you are looking for a slick Hollywood film with professional acting, this is not your vehicle. The production values are a bit low, as most of the film has been shot with a small video camera inside local establishments with very obvious lights. The camera work is occasionally shaky, especially in the opening sequence, and there is often rather loud ambient sound (bar patrons). In many ways, it is like a public access program with a better story line and better music. It is not the greatest movie as far as editing goes, and the script has points where you have to catch up with the plot twists. Now that the bad is out of the way, I can talk about some random thoughts on the film. 🙂

Regardless of the acting, sound, lighting and script, I still found myself intrigued. A bit of my history might reveal why this statement is "profound". I have a degree in film and video and generally pay attention to all aspects of a film. If a film breaks the curtain, I am generally all over it. My wife has asked me to stop predicting the end of films, as most films use the standard three act model perfected by Syd Field. This film should have been a fertile ground for criticism, but I found myself going deeper instead.

Perhaps it is the rawness that had me looking for shining moments rather than tearing the film apart. It may also be a change in perspective due to my own adversity (Miranda’s battle with cancer). I am not sure. Either way, the values that would usually have me heavily criticizing had me compelled.

One interesting thing is to see the differences in the salvation message. I am not sure whether these differences are completely cultural or at least partially due to the views of the missionaries involved. Either way, a Christian who is looking deeper at the film can learn some new information about how God works. There is something refreshing in a simplistic display of the message, especially one presented so genuinely (unlike our kabuki theater style presentations on Christian television in the United States).

I also found the film an interesting way to view another culture. In the bar scenes, there are signs with numbers like 16000 on the beer bottles. These are prices; it will cost you 16,000 Zambian kwacha to get a beer, or about $4.50 US at the current rate of exchange. At the time the film was shot, the exchange rate was about 30,000 to 1 (it is currently around 3,500 to 1), which would have meant roughly 45 cents for a beer in US dollars. As most things are negotiable in Zambia, the price is not necessarily what you would actually pay.

One of the reasons I was intrigued with the prices was a blog entry Scott Hanselman posted on Zimbabwe (Scott’s wife is from Zimbabwe). Zambia is just north of Zimbabwe and experiencing the same type of runaway inflation as Zimbabwe, only to a far lesser extent. Zambia is one of the countries the refugees from Zimbabwe are fleeing to. As you watch the film, you see both something modern (clothing) and something very old. You also see the ravages of poverty in the few scenes shot around the city.

The filmmakers hope that this film will help raise money for Zambia, or more precisely to aid their country. The film is being sold exclusively through the site. I am not sure they will reach this goal, at least not through human means.

Who is the best audience for this film? I can see two groups:

  1. Christians who wish to see a different perspective on the gospel message.
  2. Anyone interested in seeing life in Zambia, as revealed through the background of the film.

I must say this is the first film I have ever viewed from Zambia, indeed from anywhere in Africa other than South Africa. We simply do not see many films from this region on our video shelves. It is possibly this fact that explains why we know so little of the poverty that wracks this part of the world.

Peace and Grace,
Greg

Twittering Away My Time


I have been using Twitter for about four months now and decided to give my feedback. For those not familiar, Twitter is a "social networking" tool, but not in the traditional sense. Twitter allows you to follow people, which means you get messages when they tell the world what they are doing. When an individual posts a message, it is called a tweet (how tweet ;->).

The good: I often find out about the latest Microsoft releases through my twitter client. I also get reminders of events I have forgotten, as someone tweets the event and I end up thinking "oh crap, I forgot another one" (forgetting is easy now, with the focus on cancer — lost one of the kids we know on Sunday). It is also useful to send messages back and forth to friends, as long as you do not mind the fact that the world is potentially listening in.

The bad: Many twitterers over-tweet (some of whom are twidiots? — sorry, a word invented as part of a tweetfest), so you can be inundated by way more messages than you can possibly digest. This can be cured by cutting down on notifications for the people you are following. Now, you might not find this to be a bad thing; but you might also live in a basement and never see sunlight. 🙂

The Ugly: I am really not interested when someone a world away is stuck in traffic. And, the tweet about tweeting from the bathroom was just a bit TMI for my tastes. Good thing there are only 140 characters allowed. 🙂

DISCLAIMER: Some people want to know when {Name removed to protect the innocent} is {activity removed for sanity}, not that there is anything wrong with that 😉

I have switched to passive tweeting, for the most part, and turned off the push subscription. I was getting a really bad signal-to-noise ratio for my tastes. Unfortunately, this puts the onus on me to go out and actively seek messages, which got me to add Tiny Twitter to my phone.

One thing that might be a neat feature is to be able to add a "category" to a tweet, so "I am just writing nonsense" could be flagged and I could decide to ignore it? Of course, tweeters could ignore the category bits, so I guess this is shaky, at best. Plus some actually do tweet via SMS (and not a client, shudder).

Currently, Twitter is one of the best ways to follow Microsoft employees, and you often get wind of a new product before the general public if you listen. There is not much news in 140 characters, but enough that you can keep an eye out for the release. Since Twitter has spread like wildfire in the MS community, you are getting close to the point where you cannot live without it.

Peace and Grace,
Greg

Microsoft FINALLY Doing Something About SPAM


I write this as a backhanded compliment. This morning, Microsoft is FINALLY doing something about the SPAM that has been invading its NNTP server for so many weeks. Today alone, 21 SPAM headers (all from global-replica-watch.com) got to my client. Fortunately, the messages were deleted a bit later and are no longer available on the server.

Over the past few weeks, this company has sent 20-30 SPAM messages daily to most of the Microsoft groups, and nothing has been done about it. It was hypothesized that Microsoft was dumping NNTP for web support. I wonder if enough of a stink was raised at last week's MVP Summit to finally get someone on the problem, or if it was just a misplaced filter. As the message headers are not gone yet, I would assume that someone is manually deleting the SPAM.

I am glad that something is finally being done about the problem. I hate to see that it had to go on for so long before there was any action.

Peace and Grace,
Greg

CaseComplete


Entry Updated 4/17/2008 at 12:26 AM

It is rare that I crow about a product the day I install it; I am generally a bit too cautious for that. But this is a case where I will take a chance, even at the risk that I will find something negative about the product later.

Background

I am currently in the midst of writing use cases for my organization. I downloaded the MFC templates and started using them for Usage Scenarios. I did not like the way the usage scenarios were set up, so I added my own bits to the mix. Then, I slogged through the pain of setting up specific usage scenarios from a list, going up and down. I also spent a lot of time setting up actor descriptions in one document and a glossary in yet another. The system is complex enough that I had only a small number of the many usage scenarios (MS use cases) written across many pages of documents.

This exercise was compounded when I had to refactor (there is a word you do not often hear alongside the words "use cases"). I ended up completely reorganizing the document, wasting precious time. What a pain.

CaseComplete

I then noticed an offer for an NFR license for CaseComplete through the MVP program. I mention this primarily for full disclosure, as I detest blog writers who write bogus reviews to get free products. If you look back to last month, you will see how much I detest it when these jerkwads SPAM up UseNet to bolster their sales. Damn, why do people keep leaving these soapboxes lying around everywhere?

I am not sure I would have sought out CaseComplete if it were not for the offer, which would have been to my detriment. I installed it today and wish I had installed it the day I got the key.

The first thing I did was play with the software, and I was rather unimpressed. I saw the potential, but it seemed I was treading a lot of water. We organize use cases by business desire, usage and development time. It can get complex, but it allows you to escalate features (perhaps less-used, complex-to-develop features) that an executive absolutely loves. This was not present in the product – or so I thought (more about this in a minute).

The general rule when I feel this way (treading water) is RTFM (Read The Funny Manual for those with sensibilities; you can make the F mean whatever you want). The manual, in this case, was the Tutorial.

After reading through a printed copy (had to take it with me somewhere and did not want to tote the laptop), and making some notes, I began to see the value in the product. One half hour after returning to the computer, I had created a number of users and use cases, organized in packages (and tied to users). I then began to click on the individual use cases and fill them in with details. Along the way, I created Glossary entries to define terms used in the company (Glossaries are mandatory, trust me).

Now, this is not the suggested order in the tutorial, but I was already past the define actor/define goal stage in the process. On the next assignment, where I have not already started in Word, I will probably try the tutorial route and do actors >> goals >> Generate Use Cases >> etc.

Back to treading water: When I created my first use case I noticed I was altering my use case priority to fit their scheme. Then, I RTFMed and found the ability to add my own custom fields to the Use Case definition. Shazam!

Organizing Use Cases and being able to easily refactor cases, add additional users, link to other Use Cases, etc. is great. What really jazzed me and got me writing was the ability to create a wide variety of documentation from the tool. And, if that is not enough, I can create my own document templates to slice and dice the use cases I have created. Just before I started this article, I also found an export to Microsoft Project option. I do not have enough information entered yet to get a good Project file, but the export feature will certainly save me a lot of time when I do. Woo Hoo!!!

Now, I was bummed for a bit when I realized I had a use case in the wrong package. Double-clicking, I found the use case number was still under the other package's numbering scheme. Bummer. So I recreated it; after all, it was only a name. Then I RTFMed again and found a renumber feature. One quick dialog box and I can renumber any number of packages. And, since it is essentially a database, it renumbers all of the links as well. Oh, and I can turn on a track-changes feature before turning this loose on another "business analyst" (a hat I am wearing right now).

My Findings

What started as a painful week has now turned into fun. So much so that I am here at 10 PM getting ready to finish up a few more use cases. I was dreading getting back into this process, as it was taking WAY too much time.

The main sticking point, for many, will be the price. At $595 for a single developer, it may be a bit steep, especially for smaller shops. On the other hand, if it saves you a couple of days' work (especially at consultant’s rates), you have already paid for the product with the requirements for a single project. And, compared to the Word document route, I think it can easily save that many hours at an advanced developer’s pay grade.

I have now googled and not found a decent alternative, so I cannot definitively state that CaseComplete is the best of breed. I do see a wide variety of UML tools and templates, however, which would lead me back to writing in Word. There is one tool called Use Case Studio, which sounds promising, but there are no screenshots. Perhaps someone has some other links?

After playing for a couple of hours, I am even more jazzed. While I did find that you can error out the product by attempting to play with the preferred field definitions (ouch), I absolutely love the documentation and the fact I can customize the look of the documentation. What was taking me days is only taking hours. In addition, as I play more, I am finding missing requirements just by the way the product is organized.

Peace and Grace,
Greg

Rules of Safe Coding


This is a generic kind of post about some things you should avoid in your code and how you can do them better: the rules. I do not have a particular topic in mind, and this is not an exhaustive list, just a few things I have thought about as I deal with problems in the code base I am maintaining (and refactoring).

Test Your Strings

There is a temptation, in coding, to assume that today’s state is the norm for all time and that things will not deviate. We end up, therefore, writing code like this:

private string CleanAddress(string street)
{
    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

The idea is that "0 Any Street" is a bogus address. The problem is that one day you might switch from a reverse geocoder that places a 0 in an empty street to one that returns an empty string. The call to Substring(0,2) is now invalid, as the string no longer has two characters to take.

If we adopt an assume nothing approach (safe coding), we would test the length of the string before attempting to substring it. Without any real refactoring, this can be as simple as:

private string CleanAddress(string street)
{
    if(street.Length < 2)
         return street;

    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

I am not here to argue the intent, only to point out that this simple test helps us avoid a ton of unnecessary pain. To be even safer, we would also want to test that the string is not null. As we are probably appending this to another string somewhere, we will return an empty string in that case.

private string CleanAddress(string street)
{
    if(street == null)
         return string.Empty;

    if(street.Length < 2)
         return street;

    if(street.Substring(0,2) == "0 ")
        return street.Substring(2);
    else
        return street;
}

This is not the best code in the world, so we should refactor, but it does check the two possible conditions that could error out this particular routine (null string or short string).

The choice of what to do with strings that do not adhere to this rule (raise own exception, ignore, return another value) is up to you, but do not assume that strings will always adhere to your expectations.

Input Checking

Okay, so I am cheating a bit here, as this is what you just did when you checked a string for null values before attempting to do something with it, or checked for length before you ripped it apart. But, the concept applies to more than just strings.

For example, what happens when you divide an integer by zero? You end up with an exception. While you can certainly handle this exception, and there are times you should, you can avoid the overhead of an exception thrown from bad input by checking the value first. In fact, assuming we did want to throw an exception, we could do something like this:

private int DivideApples(int apples, int people)
{
     if(people == 0)
         throw new DivideByZeroException();

     return apples/people;
}

This is not extremely elegant, and I would certainly consider creating my own exception type to give a better message than "attempted to divide by zero", something like a "number of people dividing apples must be greater than zero" exception.
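As a rough sketch of what that might look like (the exception name here is my own invention for illustration, not anything from a library):

//A minimal sketch of a custom exception with a friendlier message (hypothetical name)
public class NoPeopleToShareApplesException : ArgumentException
{
    public NoPeopleToShareApplesException()
        : base("The number of people dividing apples must be greater than zero.")
    {
    }
}

private int DivideApples(int apples, int people)
{
    //Throw the more descriptive exception instead of the generic DivideByZeroException
    if (people == 0)
        throw new NoPeopleToShareApplesException();

    return apples / people;
}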

Output Checking

The opposite of input checking is output checking. This means checking a value before it goes out and adjusting the behavior (for example, throwing an exception) when the value is out of range. Many books suggest output checking every routine, but the practice is not terribly common.
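To illustrate the idea, here is one way a routine might check its own output before returning it (a sketch of my own; the price calculation is just an example):

private decimal CalculateDiscountedPrice(decimal price, decimal discountPercent)
{
    decimal result = price - (price * discountPercent / 100);

    //Output check: this routine should never hand back a negative price
    if (result < 0)
        throw new InvalidOperationException("A discounted price should never be negative.");

    return result;
}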

Never Trust User Input

This one should go without saying. First, a user might not understand the rules of input and input something that is invalid. If you are going to perform a string operation on a user’s input, you can end up with an empty value that causes problems. This is what was covered in the first section.

Another issue is invalid data. Suppose you have an email field that is required. If you do not check the user input, you can end up with an email address like "joe". That does not blow up the application, but it gives you invalid data in your database and makes it impossible to send your user an email message.

Why talk about user input in a code blog entry? After all, you have the ability to do checking in the user interface through validation controls, right? Yes, but there are ways to circumvent client-side validation. It is not as easy in ASP.NET as it was in earlier frameworks, but a clever enough hacker might find a way.

To sum this up, validate input on the server, even if it is checked on the client. It is better to spin a few cycles checking something that is correct than it is to throw an unfriendly exception for a user or let corrupt data make it to your database.
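As a sketch of what that looks like in practice (the control names and the regular expression here are just examples of mine, not from any particular project), the server-side handler re-checks the email even though the page may also have validators:

//Requires using System.Text.RegularExpressions;
private bool IsValidEmail(string email)
{
    if (string.IsNullOrEmpty(email))
        return false;

    //Deliberately simple pattern for illustration; real email rules are messier
    return Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
}

protected void SubmitButton_Click(object sender, EventArgs e)
{
    //Validate on the server even if the page has client-side validators
    if (!IsValidEmail(EmailTextBox.Text))
    {
        ErrorLabel.Text = "Please enter a valid email address.";
        return;
    }

    //Safe to save the value to the database at this point
}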

The Point

All of these are the same rule, essentially. Any time you have data that crosses a boundary, whether from user to your application or from layer to layer within your application, you should assume that it might not adhere to the rules your application adheres to. While it is less likely you will end up with issues when you are getting information out of your own database, it is not impossible, unless you are the only one who can EVER put data into the database … AND you remember your rules, no matter how far in the future you are still using the database. If the data was not created in your class, assume it could be incorrect. And, even then, you should consider being safe and coding a check in your code.

Peace and Grace,
Greg

TDD and the Separation of Concerns


I have seen a lot of posts, primarily in the forums, with people who are stating that TDD does not work for web applications. The typical post is something like this:

So I’m starting a new ASP.NET project and want to proceed with a test-driven
approach as much as possible. I have decided that I will certainly NOT use
the ASP.NET MVC framework, and instead go with the traditional web forms
approach.

The first issue I ran into is that, in order to create my unit tests, I’d
need mocks for Application, Session, Context etc. Thinking that through,
these mocks would have to do practically *everything* offered by the
Application, Session etc objects. Suddenly I’m *duplicating* much of what
ASP.NET is – all for the sake of unit testing my code. That seems like a
ridiculous amount of work to get automated unit tests.

Am I missing something? Or is it simply not possible to use TDD for ASP.NET
Web forms application development?

The questions here come from ignorance, both of what ASP.NET should be and of what TDD is. To understand this better, let’s look at separation of concerns in a "traditional" (i.e., non-MVC) ASP.NET page.

Separation of Concerns

Separation of concerns, for this article, means breaking your application down into its constituent parts. You have a UI (the ASPX page), some business logic (a business assembly) and data. We should all be familiar with this model, as Microsoft has been pushing n-tier development for ages.

So, you start with an ASPX page. In this case, we will assume it wants to be a blog page. We create the page (blog.aspx) and we end up with code that looks like this (providing, of course, we have not whacked the templates):

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public partial class blog : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {

    }
}

As this is a blog page, we need to fill it in with the blog information, so we alter the Page_Load to read:

    protected void Page_Load(object sender, EventArgs e)
    {
          BindBlogRepeater();
    }

And we create a routine that binds the blog to the Repeater control (the actual contents and how we might display this are inconsequential).

private void BindBlogRepeater()
{
    //First we need some data
    BlogRepeater.DataSource = GetBlogData().Tables[0].DefaultView;
    BlogRepeater.DataBind();
}

Then, we know we need some data so we create a routine to pull the data.

private DataSet GetBlogData()
{
    //Requires a using System.Data.SqlClient; directive at the top of the file
    DataSet ds = new DataSet();
    SqlConnection connection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["myConnString"].ConnectionString);
    //More code to fetch data here

    return ds;
}

Then, we write a forum post on how this is completely untestable using TDD. To which I say, "you are absolutely right." But we have also created a single executable, with the UI, business and data tiers smashed together. No wonder we cannot use TDD.

The problem, if we examine it more deeply, is that we were looking at the application from the UI end. The same problem exists if we look at the application from the database, BTW. Our storage, UI and application (or applications, if we are using a SOA approach) are different concepts. They should be designed differently.

A Better Way

In a better world, we would start with our facade layer (the moving parts that get the data to bind) and place it in a library. We would write a test that ensures the proper data is pulled in our test environment. The UI page could then be as simple as:

public partial class blog : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        BlogRepeater.DataSource = BlogFacade.GetBlogData();
        BlogRepeater.DataBind();
    }
}

The blog facade class would then contact the database tier object and get the data. This leaves me two classes I can start testing. The first is my blog facade.

[TestMethod]
public void TestBlogFacade_GetBlogDataSet()
{
     //From set up
     BlogDataSet expected = GetBlogDataExpected();
     BlogDataSet actual = BlogFacade.GetBlogData();

     //Example elaborate test
     for(int i = 0; i < expected.Blog.Rows.Count; i++)
     {
          BlogDataSet.BlogRow expectedRow = (BlogDataSet.BlogRow)expected.Blog.Rows[i];
          BlogDataSet.BlogRow actualRow = (BlogDataSet.BlogRow)actual.Blog.Rows[i];

          for(int j = 0; j < expected.Blog.Columns.Count; j++)
          {
               object expectedValue = expectedRow[j];
               object actualValue = actualRow[j];

               Assert.AreEqual(expectedValue, actualValue);
          }
     }
}

I would add some helper methods if this were a real test. The point, however, is not the test, but exercising the code in the blog facade. We can do the same for the data tier classes.

"But I can’t test the ASPX page!" is one complaint I hear to this method. The answer is that you can. One option is nUnitASP, although it is no longer a supported project. But, the other side of the coin is "what do you really have to test on your UI?" Think about this for a second.

  • The submit button might not work – that can be caught in your informal tests. In addition, the Microsoft event-handler delegates rarely fail unless you are doing something strange in your code. If you have almost no code in your page, the likelihood of the CLR code failing is extremely slim.
  • It might not bind properly. Once again we are dealing with something that rarely happens without improper code in your UI.

If 99% of your code lies in the middle tier and back end, how much is likely to fail in your user interface? You might say 1%, but since the majority of the UI code simply uses Microsoft-built methods, the likelihood of any of those methods failing is fairly slim.

Are you saying I should not test my UI? Not at all. But if the majority of your code is on the back end, most UI tests will be either UI testing or acceptance testing, and neither of those types of testing is completely automated in most shops.

Improving Our Tests

Thus far, the tests we have done would call the entire stack (most likely). While I do not have time to write a good example, it is far better to add mocks to the middle tier and data components to supply the data for you. I will have to blog on mocks later, as it is a big topic by itself.

But then I am not testing my data access? No, you are not. But the majority of that is handled by your database server, which reliably hands out data every time you ask. The likelihood of this code going bad is slim, just like event handlers on the UI. If you have stored procedures, you do have a chance of bad code, but unit tests on .NET code are not the best way to test stored procedures.

The mock framework to use is up to you. I have had good luck with Rhino Mocks, but there is even a lightweight mock "engine" included in nUnit these days.
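To give a rough idea of the shape this takes (this is my own sketch, not code from the post, and it assumes an instance-based facade that takes a hypothetical IBlogData dependency), a hand-rolled stub keeps the test away from the database entirely:

//Hypothetical interface the facade depends on for its data
public interface IBlogData
{
    BlogDataSet GetBlogData();
}

//Hand-rolled stub that returns canned data instead of hitting SQL Server
public class StubBlogData : IBlogData
{
    private readonly BlogDataSet _canned;

    public StubBlogData(BlogDataSet canned)
    {
        _canned = canned;
    }

    public BlogDataSet GetBlogData()
    {
        return _canned;
    }
}

[TestMethod]
public void TestBlogFacade_ReturnsDataFromDataComponent()
{
    BlogDataSet expected = GetBlogDataExpected();
    BlogFacade facade = new BlogFacade(new StubBlogData(expected));

    BlogDataSet actual = facade.GetBlogData();

    Assert.AreEqual(expected.Blog.Rows.Count, actual.Blog.Rows.Count);
}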

Improving Our Application

On the library side, we have a blog facade that knows too much. The signature is like this:

public BlogDataSet GetBlogData()
{
}

This means that somewhere in the blog facade component, there is a line like this:

string connString = ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString;

Ouch! We should change our signature to something more like this:

public BlogDataSet GetBlogData(string connectionString)
{
}

Note that I am not suggesting that we should not have configuration data lower than the UI. I am just stating that we should feed business layer components their configuration and not have them psychically determine how to contact other tiers. If this is UI-specific configuration data, it should be passed in, and in most web application scenarios that is precisely what it is. If you move from one instance of a web application to another instance of the same application (perhaps co-branded with segregated data), the connection string changes, but it is UI dependent. I am going off on a tangent. 🙂
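In the page, that might look something like the following (a sketch; the connection string key is just an example):

protected void Page_Load(object sender, EventArgs e)
{
    //The UI owns its configuration and hands it to the facade
    string connectionString =
        ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString;

    BlogRepeater.DataSource = BlogFacade.GetBlogData(connectionString);
    BlogRepeater.DataBind();
}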

The final goal you should keep in mind is this: what if my business owners wanted an XXX app with the same functionality, where XXX could mean a Windows Forms application, WPF application, Silverlight application or even a console application? Anything in my code that ties me to a particular kind of UI is bad. Enough said.

MVC Framework

One of the neatest things about the MVC Framework, at least for me, is that it forces developers to separate concerns. You cannot place code in your UI elements, which forces you to design UIs that act as UIs and business/data logic that acts as business/data logic.

On the downside, it does not force separation to the level it needs to go. You can still write WAY too much code in your controller. As the controller is essentially a facade that finds the model and marries it to a view (yes, this is an oversimplified explanation), it should be very thin. To Microsoft’s credit, they are showing how thin the controller should be in their examples (unlike the old ASP.NET examples, which showed you how to jam all of your code either into the page or the code behind).

Does this mean I think you should stay away from MVC? Certainly not. It allows you to get one step closer to the UI, shaving away the inability to test "event handlers". In MVC, you can test the controller methods, which are called when your user clicks buttons. I am saying that you can get some of the same benefits that MVC gives you (better yet, forces you into) if you consciously create your own applications as libraries.

For the record, I think the MVC Framework is a great direction for MS to head in. Once the tools catch up with the framework, it will be dynamite.

End Bits

While I rambled a bit, here are two things you should get out of this to make TDD work for your web applications:

  1. Keep your code behind VERY thin. The second you start writing code that does something more than bind (outgoing) or gather (incoming), you are beginning to cross the line towards a monolithic application.
  2. Think of your application as a set of libraries called by the UI. Design the working parts of your applications as methods on your back end rather than as extensions of event handlers on your front end.

Hope this helps.

Peace and Grace,
Greg

Geocaching and Waymarking


As many of you know, I got a new GPS device in February. I had been introduced to geocaching by a friend, Rebel Bailey (nickelpickle on geocaching.com), and I have been hooked ever since. I found out last week that I could get a free one-month premium membership ($3 value) by registering my unit, even though it is not a Colorado (shhhh!!!). I am going to buy a premium membership when it expires, as it gives me so many more features.

Here are my geocaching stats:

View my Profile

Just recently, I found out that Groundspeak has another site called waymarking.com, where you go and take pictures of different types of marks. As this activity is not as widely advertised, I have found plenty of sites that have not been waymarked yet. In many ways, this is more fun than geocaching. Nah! I take that back, as searching for something hidden adds an element.

Here are my waymarking stats:

Peace and Grace,
Greg

Understanding Lambda Expressions


Examining the forums lately, I have noticed that a lot of people have a problem getting their heads wrapped around lambda expressions. It is my belief that understanding the context and history is the key to understanding them. So, let’s see if we can illuminate this subject.

Delegates

The first step to understanding Lambda expressions is understanding delegates. A delegate is a pointer to a function. In C# 1.x, you most often saw delegates expressed this way:

//Delegate definition
delegate void MyDelegate(int i, string s);

//method that adheres to the delegate
public void MyMethod(int i, string s) { /* More here */ }

//Instantiating the delegate
MyDelegate a = new MyDelegate(MyMethod);

Since we are hooking the pointer up at run time, rather than calling the method straight out, you have the ability to dynamically hook up delegates. This is not really important for our talk, but it is why delegates are useful for events and multicasting of information. Back to our talk: by C# 2.0, Microsoft was still using delegates but had simplified the syntax a bit, as we can now do this:

//Simple instantiation
MyDelegate a = MyMethod;

By 2.0, we could also use delegates to "inject code" into a method. Or, if you want to look at it from another direction, we are creating a callback to delegated code. Before doing that, let’s look at a common refactoring:

//Before refactoring
public void EnumeratedStrings()
{
    foreach(string s in MyStringArray)
    {
         //Do the processing here
    }
}

//After refactoring
public void EnumeratedStrings()
{
    foreach(string s in MyStringArray)
    {
         ProcessString(s)
    }
}

private void ProcessString(string s)
{
    //Do the processing here

}

In this example, we started with some code in one, or more, methods and moved it into its own routine. This is done for reuse or because of a code smell (repeated code). Now, suppose we had multiple ways of processing the string. We can either write multiple enumeration methods that call the different processing methods … or we can use a delegate as input to our method and sort this out at run time.

public delegate void MyDelegate(string s);

public void EnumeratedStrings(MyDelegate callback)
{
    foreach(string s in MyStringArray)
    {
         callback(s);
    }
}

Now, I have the ability to call the ProcessString method by doing this:

MyDelegate a = ProcessString;

But, I can also do something more like this:

MyDelegate b = ProcessStringDifferently;

I can also do both, if I am so inclined, but that is a tangent.
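Either delegate can then be handed to the enumeration method; as a quick sketch (ProcessStringDifferently standing in for a second processing routine):

MyDelegate a = ProcessString;
MyDelegate b = ProcessStringDifferently;

//The same loop now runs whichever processing routine was passed in
EnumeratedStrings(a);
EnumeratedStrings(b);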

Anonymous Methods

The next step in the process is anonymous methods. With anonymous methods, we declare the delegate along with its code as we pass it into a method. It is anonymous because the delegate is determined by context rather than by explicitly wiring it up. This allows us to determine what code we need to run based on our needs at the time of development.

Say, for example, we know we want to iterate through a string array and do work on it, but the work can vary depending on which code is using the method. We might do something like this:

public class AnonMethodExample
{
    public delegate void MyDelegate(string text);
    public string[] names;

    public AnonMethodExample()
    {
        names = new string[3]{"Greg", "Henry", "Alan"};
        DoWorkOnStringArray(delegate(string name) { Console.WriteLine(name); });
    }

    void DoWorkOnStringArray(MyDelegate callback)
    {
        for(int i = 0;i<names.Length;i++)
        {
            callback(names[i]);
        }
    }
}

The output here is:

Greg
Henry
Alan

When the class is instantiated, each name is printed. But let’s say some other code just wants the length of each name along with the name. I do not have to change the method; I simply change the call.

public class AnonMethodExample2
{
    public delegate void MyDelegate(string text);
    public string[] names;

    public AnonMethodExample2()
    {
        names = new string[3] { "Greg", "Henry", "Alan" };
        DoWorkOnStringArray(delegate(string name)
        { Console.WriteLine("{0} has {1} characters", name, name.Length); });
    }

    void DoWorkOnStringArray(MyDelegate callback)
    {
        for (int i = 0; i < names.Length; i++)
        {
            callback(names[i]);
        }
    }
}

This one now outputs

Greg has 4 characters
Henry has 5 characters
Alan has 4 characters

The real power here is that DoWorkOnStringArray() can exist in a different class or even a different library, allowing the calling application to determine what it wants to happen at runtime. Of course, in the real world, the DoWorkOnStringArray() method would do something more than simply call the delegate code from the anonymous method.

The method is obviously not anonymous from the calling application's point of view, but it is called anonymous because the called method does not know what code it is running until it runs it (essentially). I can still explicitly declare the delegate and pass it, but anonymous method syntax allows me to determine the code as I write the call. If you find you are injecting the same code from two places, you should refactor to a more traditional delegate syntax, as that encourages reuse, but an anonymous method allows you to have the same routine perform work in different ways. If you want a more real-world implementation, you should read Dan Wahlin’s blog entry on the subject, called The Power of Anonymous Methods.

Generics

Before moving to lambda expressions, we have one more concept that also comes from C# 2.0: generics. Let’s go back a few years to .NET 1.x. In order to express something "generically", we had to box values as objects. For example:

object[] os = new object[2];
int i = 1;
string s = "one";

os[0] = i;
os[1] = s;

As all "objects" in .NET stem from System.Object, we can generically refer to every object as System.Object. That is why the above code works. This lead us to creating lists and dictionaries, etc. that boxed to an object, even when we had a specific type. For example, suppose we use an integer for a key and a double for the value. In .NET 1.x, both would be objects. The danger here comes when we do something like so:

dict.Add(1, "this should be a number");

It blows up on the way out, rather than firing an exception when the value is loaded, as it should:

double val = (double)dict[1];  //InvalidCastException at runtime

With 2.0, we have the ability to use Dictionary<TKey, TValue> to set our types. This saves us, as it will catch the error earlier in the cycle:

Dictionary<int, double> dict = new Dictionary<int, double>();

//Will not compile because of this line
dict.Add(1, "this should be a number");

The benefit here is that I can create multiple strongly typed implementations off a single generic, rather than repeating the same code for each type that could possibly need it, whether it be a list, dictionary or otherwise.
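For completeness, here is the same lookup with the generic dictionary; nothing is boxed, and there is no cast (and no runtime surprise) on the way out:

Dictionary<int, double> dict = new Dictionary<int, double>();
dict.Add(1, 19.95);

//Strongly typed on the way in and on the way out
double val = dict[1];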

Lambda Expressions

A Lambda expression is a shortened syntax for an anonymous method. Since I am running out of time, I will use an example from the book Introducing LINQ (MSPress), which you can download for free. It starts with the concept of an Aggregate generic and a generic delegate.

public class AggDelegate
{
    public List<int> Values;
    delegate T Func<T>(T a, T b);
    static T Aggregate<T>(List<T> l, Func<T> f)
    {
        T result = default(T);
        bool firstLoop = true;
        foreach (T value in l)
        {
            if (firstLoop)
            {
                result = value;
                firstLoop = false;
            }
            else
            {
                result = f(result, value);
            }
        }
        return result;
    }
}

The example then uses an anonymous method to sum the values of the list, by passing the anonymous method delegate(int a, int b) { return a + b; } to Aggregate. Here is the example:

public static void Demo()
{
    AggDelegate l = new AggDelegate();
    int sum;
    sum = Aggregate(
        l.Values,
        delegate(int a, int b) { return a + b; }
    );
    Console.WriteLine("Sum = {0}", sum);
}

If you look back up at the AggDelegate definition, you will see that it takes the entire list and aggregates it. On the first loop, it sets the result to the first value; from there, it keeps adding each value using the injected sum method. To move this to a lambda expression, we simply drop the delegate keyword and add the => operator, which points the parameters to the code being run. Let’s compare:

//Original
delegate(int a, int b) { return a + b; }

//Explicitly typed variables in a lambda expression
(int a, int b) => { return a + b; }

We can further simplify this by implicitly typing the input variables:

//Implicitly type variables in a lambda expression
(a, b) => { return a + b; }

Now, since we know that a lambda expression returns a value, we can take this a bit farther and drop both the braces around the body and the return keyword. That looks like this:

//Lambda Expression
(a, b) => a + b

Here is the full journey. Each of these statements declares the same thing:

//Original
delegate(int a, int b) { return a + b; }

//Explicitly typed variables in a lambda expression
(int a, int b) => { return a + b; }

//Implicitly type variables in a lambda expression
(a, b) => { return a + b; }

//Lambda Expression
(a, b) => a + b

Pretty kewl, eh? Hopefully you now see that lambda expressions are not really that complex after all. They are just shorthand for things you already know how to use, at least on a theoretical level. Here is one more snippet from the book Introducing LINQ. This one shows all of the ways of expressing yourself in lambda (cute play on words):

( int a, int b ) => { return a + b; }       // Explicitly typed, statement body
( int a, int b ) => a + b;                  // Explicitly typed, expression body
( a, b ) => { return a + b; }               // Implicitly typed, statement body
( a, b ) => a + b                           // Implicitly typed, expression body
( x ) => sum += x                           // Single parameter with parentheses
x => sum += x                               // Single parameter no parentheses
() => sum + 1                               // No parameters
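To close the loop, the Demo method from earlier could pass the lambda form straight into Aggregate (assuming, as in the book's example, that Demo can see the Aggregate method and that Values has been populated):

public static void Demo()
{
    AggDelegate l = new AggDelegate();

    //Same call as the anonymous method version, just shorter
    int sum = Aggregate(l.Values, (a, b) => a + b);

    Console.WriteLine("Sum = {0}", sum);
}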

In my next blog entry in this series, I will take this a step farther and go into extension methods, as delegates, in the form of anonymous methods and lambda expressions, are the key to understanding the hows and whys of extension methods.

Peace and Grace,
Greg

Home schooling – lessons from the first year


I am a bit apprehensive about posting this message, as I run a great risk of ticking off a lot of people by disagreeing with their world view. This one is especially dangerous, as I sit somewhere in the middle when it comes to my thoughts and beliefs on science. Rather than pontificate to the point of a disclaimer, I might as well get on with it.

Why We Home school

I am not sure how many people know this, but my wife and I are home schooling our children. This is a bit of an unfair statement, as others have stepped in to help us home school while we go through Miranda’s treatments (Miranda was diagnosed with Ewing’s Sarcoma, a rare childhood cancer, on September 6, 2007 – read more here).

Overall, many home school parents made the decision to home school based on schools teaching their children a world view other than their own. This is primarily based on the fact that schools are secular and the parents are religious. As an aside, I believe schools should be secular, as I do not believe we should use schooling to indoctrinate people to any one religious viewpoint; on the other hand, I believe schools should be more forthcoming with the facts (and "facts") of science, as there are many sketchy "facts" being presented as facts. More on that later.

While this [the religious versus secular issue] had a bit of influence on us, our decision was based on two primary criteria:

  1. We felt the school was no longer teaching our children after they were learning above their grade level. There were two reasons for this:
    1. The classrooms are segregated by grade, so it is hard for teachers to get materials from a higher grade.
    2. Teachers are being forced to bring all children to a minimum level to pass them, requiring a greater focus on those who are "not getting it", often to the detriment of those that are. I could rant on the pluses and minuses of "No Child Left Behind", but will leave that for another day.
  2. The average child was sent to school loaded down with candy and other garbage food. This was a problem for two reasons:
    1. My children were often trading and sneaking sweets, which we both feel are detrimental to their health. Please understand, we are not nazis about this, but feel that candy should, at most, be a special treat rather than a daily food.
    2. Rebecca has an allergy to peanuts, and we noticed there were times when foods that could contain peanuts were handed out (generally by children). This is not a stab at the teachers, as it is extremely hard to avoid peanuts (and other dangerous allergens) these days, as they are found in foods you would never suspect.

When Rebecca left school (end of first grade), she was reading on a beginning third grade level, a year to a year and a half ahead of where she should have been. As of December, she was reading on a fifth grade level, nearly three years ahead of grade. To us, this is good evidence, albeit anecdotal, that our decision was correct. We see similar gains with Emily.

First signs of disagreement with prevailing thought

Over the past year, Rebecca has gained a huge fascination with space and I often find her [permanently] borrowing my Astronomy magazines. A few months ago, there was a cover story about the Big Bang. She said, "you know, there are some people who actually believed this happened." Yes, and I am one of them.

The evidence for a 13.7 billion year old universe that began as a fiery speck which "exploded" outward to "create" space and matter is overwhelming. I have yet to see a convincing argument that debunks the science of the Big Bang. When I say this, I mean that I have not seen a convincing argument for a young Universe, but I also mean I have not seen a convincing argument for a static Universe either. Both extremes are grasping at straws when one examines their arguments. And each has a "religious dogma" motivating it to continue to argue its point.

Today, I got the MTHEA (Middle Tennessee Home Education Association) newsletter, Jonathan’s Arrow. At this year’s fair, Dr. Tommy Mitchell is going to speak in four sessions. For those not wanting to follow the link, Dr. Mitchell works for Answers in Genesis, Ken Ham’s group, who are the creator of the Creation Museum. He is teaching six sessions (you can skip over the italicized text if you do not want to read the entire crux of the talks):

Why Genesis Matters
This workshop explains why a literal acceptance of the book of Genesis from the pastor to the pew is foundational for the modern church to fulfill its missions of winning souls and raising up strong Christians.

Why Can’t A Day Mean A Day
This workshop exposes the danger of inserting millions of years into the Bible. Millions of years require millions of compromises on the part of the Christian and the church, watering down the Gospel message and hindering the ability to provide answers to those seeking the truth they can trust.

Noah’s Ark and the Global Flood
This workshop answers the questions raised by the skeptical world and the skeptical Christian whose views have been often shaped by unrealistic cartoons. Denial of the worldwide flood and the real ark has overthrown the faith of many. The global flood, well-supported by science, is as vital a symbol of God’s judgment as the ark of Noah is a symbol of His mercy in Christ.

Jurassic Park: A Dinosaur Tale
This workshop uses video clips from the movie “Jurassic Park” to show how the world indoctrinates us with evolutionary theories and suppositions. This presentation shows how dinosaurs really fit into history while debunking many of the popular fallacies about these creatures.

After Their Kind
This workshop illustrates the harmony of the Bible with real science and the real world we live in. Particular emphasis is given to an explanation of the created kinds and development of variety in the animal world as well as the application of this information to the origin of differences in people groups.

Are You Intimidated?
The workshop equips the Christian to be able to take a stand when confronted by things that contradict the truth in Scripture. The well-equipped Christian should not be shamed into silently surrendering the truth.

The "Evidence"

As an objective person, I have to accept that the evidence for a 13.7 billion year old Universe and a 4.65 billion year old earth is not 100% certain. There is a minute chance that the evidence is misinterpreted and that the Universe, and all that is within it, is only thousands of years old, as "dictated" by a literal reading of Genesis 1 (Why can’t a day mean a day?). There is also a minute chance, if I take all of my savings and put it into lottery tickets, that I will be rich tomorrow. I would argue that there is a better chance gambling on the Lotto than betting on a young Universe, but that comes from my examination of the evidence.

First, scripture states that God is not the author of confusion. To me, it would be very confusing if the Universe were so young yet had so many signs pointing to an old age. I accept that this could be a misinterpretation of the evidence, but much of the evidence came after the predictions. For example, George Smoot predicted that we would find "wrinkles" in the Cosmic Background Radiation if Inflation (a part of the Big Bang) occurred. This was an important "proof" that the theory was sound. So much so that adherents to the static Universe view crowed when joint Japanese/US observations detected uniformity in the background radiation. It turned out that their method (sending rockets into the stratosphere) was flawed, as it was picking up "noise" from the atmosphere; a few years later (in the early 90s) the Cosmic Background Explorer (COBE) project found the "wrinkles". The results were further refined with the Wilkinson Microwave Anisotropy Probe (WMAP).

As for an old earth, we have dating methods. Objectively, I accept that the assumption that no daughter elements existed early on is not 100% certain, but it is not dating alone that leads to this assumption (the "circular reasoning" counter argument). When we add the evidence from sciences other than geology, it is a very sound assumption. Too bad I cannot write a post long enough to cover all of the evidence. 🙂

Once again, taken objectively, one must note that both secular science and creation science use the same types of assumptions to prove their points. For secular science, the assumptions are that there were no daughter elements and that radioactive decay occurs at a steady rate through time; for creation science, the assumptions are that atmospheric conditions have remained steady and that there were no "daughter" elements to begin with. As an aside, creation science’s criticism of steady rates appears to be valid on a certain level, as we are now seeing that conditions we once thought to be static (like the rate of expansion of the Universe) are not static at all, but I had best leave that topic for later.

Evolution

The age of the earth is an issue, but the real problem with creation science is the topic of evolution. Before tackling this sticky wicket, I want to state a few things. First, even if there is evolution, I do not believe it destroys the story in the bible. The bible, taken holistically, presents a story of God creating spiritual man, man’s unwillingness/inability to live up to God’s righteousness, and God’s plan for salvation. It details the balancing act between justice and fairness, on one side, and grace and mercy, on the other. Even if man evolved from lower species (I will get there, just hold on), the spiritual breath of God could be the creation spoken of in Genesis 2. Second, even if the earth is billions of years old, it does not necessitate that all creatures evolved from a common ancestor without any intervention.

Evolution is fact! I agree with this statement, but not necessarily with the allusions and dogma that surround it. I see ample evidence that species have adapted. We have the breeding of dogs (and other domestic animals) to accentuate certain characteristics. We have the observable fact that man is taller today. We know that many conditions, both positive and negative, are genetically inherited. Based on this evidence, we can safely say that adaptation over time occurs and that the strong are the ones that survive.

From this comes the assumption that enough changes could create new species. And, depending on how you define species, there is evidence this assumption is true. If you look at Darwin’s finch observations, and consider each finch a different species, then you have evidence that the statement "evolution ‘creates’ new species" is correct. The problem is that the word "species" has been so watered down that many believe accepting that birds have evolved into other birds means that reptiles HAVE evolved into birds, and so on. The evidence here is tenuous at best.

To learn, we all rely on analogies at some point. For those who believe that evolution explains all life, the analogy is one of similarity. Taking "Lamarckian" reasoning down to the gene, they explain that the similarities between chimp and human DNA "prove" that we have a common ancestor.

I, as a computer programmer, see a different analogy here. I am currently writing a web site, a Windows service and a desktop application (leading to a bit of schizoid behavior?). If I examine the entire code base, including the bits that make up the platform, my three applications share 90%+ identical code. If I further look for patterns in the code, I find the same patterns sprinkled throughout the remaining code. If I applied a "Lamarckian" analogy, I would have to state that the later code evolved from the earlier. The problem is that the only evolution present is in my thought processes. Each piece of code I have written, along with the code written to create the frameworks and platforms I code for, was purposefully designed. If I examine the code again, with a different analogy, I can plainly see the work of creation, not evolution.

If I apply the same computer programming analogy to DNA, I see a progression that works. The earlier code, that of the simplest life forms, is very simple. Much like the code of a "beginning" programmer, there is little or no error correction present and 100% of the code is exercised during the day to day operation of the program. As I move up the ladder, I see more and more complex operations. By the time I get to humans, we have a small amount of directly functional code with a large set of code generating code, some templates, pieces of code that dynamically run in the cells and even pieces of framework which only run in exceptional conditions.

Is my analogy flawed? Certainly. But all analogies are flawed, including the analogy of similarity proving a common ancestor. Yet many hinge their entire belief system on their analogy and refuse to examine any other way of looking at the evidence.

Back to the traditional school versus home school argument. In traditional school today, many of the "facts" presented about evolution are shaky, at best. There is also great resistance to teaching criticisms of evolution due to a slippery slope argument that letting in any criticism will lead to teaching that the earth is only 12,000 years old. While there certainly is a danger that allowing contrary thinking will lead to stupid thinking, there is the counter danger that not allowing valid criticism into the classroom leads to faulty beliefs and reasoning.

When valid criticism of evolution is not allowed into the classroom and known falsehoods are taught as fact, evolution becomes a dogma. In addition, evolutionary teaching becomes more of a philosophy than a science. And, if we are teaching philosophy in our science classrooms, the argument that creation should be taught in the classroom is also valid. Theology is a philosophy. If science is no longer science, why can’t competing philosophies be taught?

Please note I am not arguing that Genesis should be taught in biology classes. While I see nothing wrong with religious classes as electives, I do not subscribe to the belief that creation thought is a science. My belief is that creation is a supernatural event that cannot easily be "proven" through experimentation, as it falls outside of the realm of our view, locked, as we are, inside four-dimensional time and space. As science deals with observation within these four dimensions, one can never use science as firm evidence that there is a God or that there is no God.

I am arguing, however, that if we continue to parade philosophy in our science classrooms, we should allow philosophical dissent. I think the best way to do this is to allow valid criticism of evolution and to highlight the underlying unproven assumptions behind the science. I have read that this will only serve to confuse children, who are not ready to learn that science is based on levels of certainty and not absolute proof, but I say hogwash, as lying is the alternative in many, if not most, cases.

Now, to the home school side. Whether or not evolution is pure bunk (one extreme), I feel it is detrimental not to teach your children about evolution. First, it robs your children of the opportunity for a career in science, as they will enter college unprepared to pass most science classes. Second, they will encounter evolutionary thinking at some point and may come to a crisis of belief. This will most likely happen at a juncture where they are most susceptible to flipping in a dangerous direction, and being unprepared can cause a severe conflict in their belief structure. Third, one cannot argue against something one knows nothing about. If you, like me, have noticed that there are some "unproven", perhaps faulty, assumptions, then exposing them is better than putting a bag over your child’s head.

Summary

I am sold that home schooling works. Despite a great number of parents teaching creation in lieu of, rather than in addition to, evolution, home schooled children are doing better than the average school child. Some of this can be attributed to the demographics of the home schooled child, but not necessarily all. And, overall, home schooling opens up doors for children who are excelling, doors that are often not present in the traditional school.

Even though I disagree with the young earth teachings, I find that most of the children schooled with this thought are normal and well adjusted. Thus, while I find that the evidence points to an old earth and Universe, I cannot concur with those who state that a belief in creation, or the teaching of creation, is causing permanent damage to either these children … or society.

As I write, I am reminded that I am often attacked for my defense of Christianity or criticisms of evolution. But, I find I am also chastised by some Christians who find my belief in an old Universe to border on heresy (the "inserting billions of years into the bible" argument). Thus, I find myself "attacked" from both sides.

When I objectively examine Christianity, I find the Conservative viewpoint better fits the evidence, but this does not mean that Genesis needs to be literal. When I objectively examine evolution, I find that many of the core assumptions might be flawed, but this does not mean I should not continually read its findings. As Jesus stated, "I am THE truth", I feel it is both an honor and a duty to continue to search for the truth, no matter where it takes me. So much so that if one could conclusively prove that God does not exist, I would have to drop my belief. Thus far, every "proof" I have seen relies on faulty logic and reasoning, but this is not surprising, as science was never meant to be a philosophy.

Peace and Grace,
Greg