Fitna removed from LiveLeak

Under threats from radicals, the site has removed the film Fitna. You can read their statement by clicking this link. Google still has the film posted here, although TechCrunch is stating that Google might be under threat themselves for allowing the film to be aired.

The film is largely a screed against Islam by Geert Wilders, an anti-immigration politician in the Netherlands. In the film, he uses images of 9/11, etc., juxtaposed with surahs from the Koran. His message is that Islam teaches violence and wants to subjugate the entire world.

Geert’s message is a caricature of the complexity that is the Muslim world. As with other caricatures, it is partially true, especially when one views some members of the community, but it is also partially false.

The saddest thing is that the threats against the site, most likely made by radical members of the Islamic community, add fuel to the fire for those who wish to produce such epithets. It is essentially saying, "we are peaceful, and if you say we are violent, we will kill you." It is unnecessary fuel added to an already growing fire.

Being objective, Fitna does expose a raw side of Islam that exists. We cannot sweep it under the rug and ignore it. Also objective, it is not the only side of Islam. It is for this reason we must fight both against fundamentalist Islam, which aims to destroy those who speak against it AND caricatures which paint all muslims as fundamentalists. But we must wage this battle with words, not threats of violence.

Free speech is designed to allow voices we disagree with to have their say. This is why it has protected pornography, the printing of plans for the atomic bomb, and even sharp criticism of various ideologies, religious or otherwise. Even those who are raving mad should have the right to have their voices heard. When violence, or the threat of violence, silences dissenting voices, even those we strongly disagree with, it is a dark day for freedom. I may not stand with Geert Wilders on his message, but I certainly stand behind his right to be heard. I do not denigrate LiveLeak for capitulating to the threats against them, but I am glad other venues have stood up for freedom of speech. Let’s pray the radicals do not follow through with their threats of violence, as it would be a setback for the very religion they feel they are protecting and would fuel the fire for further screeds against them.

Peace and Grace,

Adding Items to the Visual Studio Toolbox

As most of you already know, I am currently running the Visual Studio 2008 beta. I know it says it is a release product, but occasionally I find that expectation is unrealistic, so I will call it beta and be content. 🙂

My current issue is with the toolbox: I cannot add custom items to it. I have cleared temporary files, used devenv /setup and devenv /resetskippkgs, and even reset the toolbox (this shows my desperation). Still the IDE crashes.

Finally, after dinking around a bit, inspiration came to me: something is loading in the toolbox that is messing things up. If this is true, then the proper way to clear things out, when reset does not work, is to use safe mode. So, here is what I have done to solve this problem.

  1. Open a Visual Studio command prompt
  2. Type in devenv /safemode
  3. Right click on the toolbox and choose the option Choose Items
  4. Click each tab. Somewhere in the process, you will notice an exception loading one of the controls
  5. Restart Visual Studio normally
  6. Add controls

Suddenly, I am in coding "heaven" again. Until next time …

Peace and Grace,

The Great Debaters

I watched the film "The Great Debaters" yesterday. Lovely film. It tells the story of Melvin B. Tolson and his debate team at Wiley College. The team is best known for debating the national champion Harvard debate team. I won’t give any spoilers, in case you do not know the entire history.

After watching the film, I did a bit of research and found that this "drama based on the true story of Melvin B. Tolson" had a lot less "true" than story.

Wiley College did have a debate team. It was run by Melvin B. Tolson. James Farmer Jr., who later became a famous civil rights activist, was on the team. And they did beat a major white college debate team, although there is no evidence that it was Harvard. They were also threatened with lynching at one point in time. Beyond that, the majority of the story takes the years 1923–1939 and smashes events from different years into a one-year debate journey to beat Harvard, the national debate champions.

The first problem is there is no historical evidence that Wiley ever beat Harvard. They did beat numerous "black colleges" of the time and were so good at debating that they found it hard to find "black colleges" to debate. The big debate of 1935 was with the University of Southern California (USC) debate team, the national champions. Wiley did not win the national trophy, however, as "black debate teams" were not recognized until after World War II. Also, Farmer was not placed in the debate that night; he was an observer. And Hamilton Boswell, who, as Hamilton Burgess in the film, quits the team over Tolson’s "extracurricular activities", was a high school graduate in the audience that night, not a former member of the team.

As for Samantha Booke, she never existed. She is modeled after Henrietta Bell Wells, who was the first female debater on the team. She died on February 27, 2008 in Baytown, Texas at the age of 96. Wells was on the 1930 debate team, so she never had the opportunity to debate USC or stand opposite Farmer in a debate.

Henry Lowe is a composite character, but seems to fit the character of Henry Heights, who apparently did have a problem with drinking and womanizing. It is extremely unlikely he was ever romantically involved with Bell (Booke). He was the anchor man on the night they beat USC. Furthermore, Henry Heights did not become a minister; that was Hamilton Boswell.

While Tolson did have "leftist" leanings, there is no evidence he was arrested for trying to start a sharecroppers’ union. The evidence also indicates it was "black colleges", not "white colleges", that did not want to debate Wiley, as the team was too good; there is no evidence of blacklisting by "white colleges" due to an arrest. This also means that Tolson would not have been on parole while his team debated Harvard, er, USC (in fact, he was there with his team and told them to stay in their dorm rooms so they would not be intimidated by the size of the USC speech department).

The lynching described in the film did happen, just not as shown. They were warned of a lynching in progress in Carthage. Initially, they decided to skirt the town, but eventually went in with Boswell driving (his skin color was lighter). The rest of the team, including Tolson, stayed down and never encountered the mob as portrayed in the film. We should also note that it is unlikely that Tolson recited the story of Willie Lynch as there is no evidence that a Willie Lynch letter ever existed prior to its mention by Farrakhan in the Million Man March, nor is there any evidence the act was named after a slave owner. It is pure fantasy.

The term lynching comes from Charles Lynch, a Virginia justice of the peace. Lynch’s law was instituted around the time of the American Revolution and was designed to punish Tories, or colonists loyal to the British Crown. This does not deny that lynching was, in fact, used on African Americans in the South, just that the film’s version of the story is designed more for its emotional appeal than its truthfulness.

What about my feelings about the film? On an emotional level, the film is very satisfying. It tells a good story and gives some perspectives on the realities of African Americans before the Civil Rights movement. It is sad seeing how people were treated simply because of the melanin content of their skin.

At times, I feel the film is a bit too preachy. There are numerous modern allusions in the film and they are not well blended into the characters (or character, as it is primarily Tolson who preaches). At times I was jarred out of the alternative reality that film presents by moments that sounded like modern day political drivel.

The sad thing is the debate team story is a great story without alteration. If the producers had stuck to the facts, they would still have ended up with a great movie. This point is made in Eleanor Boswell-Raines’s article ‘The Great Debaters: Why Wasn’t History Good Enough?‘.

Peace and Grace,

SPAMs and Scams

>>>Edited due to threat of lawsuit.

Posted under the title "Web 2.0 Secrets Revealed".

Almost everybody seems to be talking about Web 2.0 these days. Never
mind the fact that many people throw around the term without knowing
too much about it.
Do you do that too? Don’t worry. You’re not alone. But it’s time you
explored what it was.  Free information available here

This then takes you to a page with a video, designed to hook you into giving up your email address. After giving up your email, you get to a page that shows some info about the product, which is really just a hook to get you to click through to another site.

How the Scheme Works

Person 1 creates a product, or buys the rights to a product, that he wishes to sell. He puts the product up on ClickBank, a site dedicated to people peddling crap. Now there are multiple ways it can be sold from ClickBank.

Person A will set up a blog site with enough content to get hits, especially from search engines that are not up on this scam yet. He will have the site up for a few days, posting content he either writes or pays for. It is interesting enough to rise in the rankings on some search engines. Then Person A writes a review for the product he is hawking, a product for which he will earn a 50%+ commission on each sale, minus commissions paid to ClickBank. Eventually he will get tired of this racket and sell a book detailing how he made a few dollars writing blogs, although the reality is he will not make much of anything. He will make more off the book than he ever did off the blogging.

Person B is a company like Affiliate Silver Bullet. This company makes video squeeze pages (their term, not mine), like the one advertised in the SPAM. You use them to set up the squeeze page and "keyword rich" mini site and pay $37/month to keep the two pages active. Consider that you can build one of your own for less than $10 a month.

Person C is the moron who actually SPAMmed the newsgroup. He bought from Affiliate Silver Bullet to sell this product.

Now, let’s look at the product. It looks pretty nice. Even has two case studies:

  1. In the first case study, monthly visits supposedly jumped from 638 to 2,774 in one month by using this product (which sells for $97, but you can get it today for $47). Looking at the adult stem cell site, it is a WordPress blog page (more on this in a second).
  2. The second case study tells the same basic story. Only this site does not create its own content, as it states the articles are from Friends HomeHealth Care Guide.

At least one of these sites appears to be designed to generate AdSense revenue. I am not sure about the other, as I have not delved far enough to find whether it has a "review" of a product he is touting. Either way, both sites appear a bit shady.

Digging Further

Using WHOIS, we find that the web2submitter site (click for whois info) is owned by a Brandon Hall from Johnson City, Tennessee. It has been up for one year. He is listed on ClickBank with a sale of $21.22 and a commission of 50% (the prices have changed, which means either he is trying to screw ClickBank, the seller, or ClickBank is not up to date). The software was probably purchased by Brandon for the purpose of setting up these garbage Internet pages.

One of the case-study sites is owned by Wild West Domains, Inc., who has it registered to a Matt Canham, whose own site, amongst other things, has info on how to make money on the Internet using viral marketing and blog marketing. So, that pretty much confirms his site is part of the Internet selling machine that breeds these bogus sites like crap breeds flies.

The other case-study site is owned by Rowell Bulan, the editor of a site that also contains a wide variety of links to money-making ideas, although these are all just Lorem ipsum, so I would assume he is aiming at Google AdSense. A search shows that Rowell Bulan is an online marketer.

So, you market with two case studies from two marketers? But there are enough fish who do not investigate (who perhaps do not even know what WHOIS is).

Affiliate Silver Bullet is owned by [another Internet marketer. Due to threats of a lawsuit for libel, I am removing the info (you can look it up). I feel this is fair, as people using your product to aid in their nefarious schemes to get wealthy selling crap does not make you guilty. The majority of these types of Internet promotion applications are generally used by people who are not above board, so I will let you make up your own mind].

I am still of the opinion that the entire business of Internet marketing built to sell Internet marketing, building blogs just to bring traffic to your marketing site (i.e., no real value), cybersquatting, and the like, are bad practices that degrade the value of the web.

What about our original SPAM site? It is owned by Frank Pasinski of Chester, Cheshire, GB, who, according to this site, is also trying to sell how to make money on the Internet. He SPAMs from one email address, but you can find his real email by doing a search.

Want to Make Money on the Internet for Free?

Find a product. Then start a blog on that topic. Be sure to post a few blog entries. Then start other blogs pointing to blog one. Sure, this is unethical as all hell, but you are just trying to get ranked high, blotting out with your crap any really useful information people are normally trying to find on the Internet. Then you post a review for the product, as an "unbiased" third party.

To make sure you really make money, you need to add some Google AdSense, of course, as well as dozens of other ads that would keep SMART people away. But you are trying to hook dumb people, so why should you care?

After all, why should it matter you are defecating all over the WWW and turning the Internet into a sewer as long as YOU are making bucks.

Next step: you find your site is really not generating the millions the book you bought promised, so you decide to peddle a book of your own. You use the same blogs, email and usenet SPAM, but you also seed peer-to-peer networks with PDFs that are a copy of your hook site.

In the end, you still don’t make a huge amount of money, although you are advertising with a picture of a Ferrari you found via a Google image search. But what does it really hurt, as long as YOU are making money?

You probably think this system is too stupid to actually work, but if you search enough you will find this is precisely what people are hawking. But perhaps you believe it is really going to work for YOU. Kewl, but I also suggest you give your bank account number to someone in Uganda to guarantee your wealth.

How to really make money

This one is simple. It is called GET A JOB. Yeah, I know it is not as fun as spending your time online trying to bilk others out of their hard earned cash for pure garbage, but it is one of the surest ways to actually … get this … MAKE MONEY.

Peace and Grace,

Auto Obfuscation on Build

This should actually be two blog posts: the first on obfuscation (and whether or not it works) and another on command-line arguments for build. But I am being lazy. 😉


I got into a discussion in the Microsoft groups the other day about obfuscation. The OP (Original Poster) stated:

I’ve got some code contained in a component in a dll that I want to protect
against reverse engineering – but I’ve never done anything like that before,
so I’m wondering a few things:
1) What brand of obfuscation software provides best protection at a
reasonable cost? Have you got any recommendations?
2) If I obfuscate the dll containing the component, will it still be
possible to add the component to the VS toolbox and use it in projects that
are not obfuscated or perhaps obfuscated using a different tool?
3) Does adding the "DebuggerHidden" attribute to the critical code sections
protect anything at all?

One of the posters responded:

The only way to stop people reverse engineering your assemblies is not to
let them have them.  Put the critical stuff on a webservice or something.
If this is not possible then just accept that it is entirely possible to
read your source code using something like Reflector.

For the record, there are good and bad obfuscators out there. True, you cannot completely protect your code from a serious hacker, but you can make it very hard. True, the built-in obfuscator will not stop anyone from using Reflector and getting your code, as it is just a symbol-renaming tool. There are, however, some good obfuscators.
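To illustrate what "just a symbol-renaming tool" means, here is a hypothetical sketch (the class and names are mine, not from any real product). A renaming-only pass leaves the logic fully recoverable in Reflector; only the names are lost:

```csharp
// Before obfuscation: intent is obvious from the names.
public class LicenseValidator
{
    public bool IsKeyValid(string licenseKey)
    {
        return licenseKey.StartsWith("MT-") && licenseKey.Length == 19;
    }
}

// Roughly what a decompiler shows after a renaming-only pass:
// the exact logic survives, so a reader can still work out what
// the method does; only the descriptive names are gone.
public class a
{
    public bool b(string c)
    {
        return c.StartsWith("MT-") && c.Length == 19;
    }
}
```

A tool that encodes the IL itself, rather than just renaming, is what actually breaks decompilers.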

One I particularly like is CodeVeil, although what I describe here can be done with any obfuscator on the market that has a command-line version of the tool. I think that pretty much describes them all, although I could be wrong.


In Visual Studio, go to the properties for your project. One easy way is via the right click menu in Solution Explorer.


Click on the Build Events tab


and add the following to your post-build event command line box:

"C:\Program Files\XHEO\CodeVeil\v1.0\cve.exe" /ox+ /er+ /es+ /er+ $(TargetPath)

When you run this you should see the following in the output for your build (I have blanked out the licensing information for CodeVeil):

Compile complete — 0 errors, 0 warnings
MyCompany.ClassLibrary -> C:\projects\test\MyCompany.ClassLibrary\bin\Debug\MyCompany.ClassLibrary.dll
"C:\Program Files\XHEO\CodeVeil\v1.0\cve.exe" /ox+ /er+ /es+ /er+ C:\projects\test\MyCompany.ClassLibrary\bin\Debug\MyCompany.ClassLibrary.dll
copy C:\projects\test\MyCompany.ClassLibrary\bin\Debug\Veiled\MyCompany.ClassLibrary.dll C:\projects\test\MyCompany.ClassLibrary\bin\Debug\MyCompany.ClassLibrary.dll
rmdir /s /q C:\projects\test\MyCompany.ClassLibrary\bin\Debug\Veiled

XHEO|CodeVeil Assembly Encoder v1.0
Copyright (C) 2002-2006 XHEO INC. All rights reserved.

Licensed to:

Processing C:\projects\test\MyCompany.ClassLibrary\bin\Debug\MyCompany.ClassLibrary.dll.
Resolving Rules..
Saving modified assembly.
        1 file(s) copied.
========== Rebuild All: 1 succeeded, 0 failed, 0 skipped ==========

I am currently in the process of trying to get rid of the Veiled directory:

"C:\Program Files\XHEO\CodeVeil\v1.0\cve.exe" /ox+ /er+ /es+ /er+ $(TargetPath)
copy $(TargetDir)Veiled\$(TargetFileName) $(TargetPath)
rmdir /s /q $(TargetDir)Veiled

But I am having hit-or-miss results. Perhaps adding a little time for the compiler to let loose of the target file is in order?
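One workaround I am considering (a sketch only, not something I have verified with CodeVeil) is to insert a short delay before the copy so the build can release the file. Since the post-build box is just a batch script, the old ping trick works as a poor man’s sleep:

```bat
"C:\Program Files\XHEO\CodeVeil\v1.0\cve.exe" /ox+ /er+ /es+ /er+ $(TargetPath)
rem Wait roughly three seconds for the build to release the target file
ping -n 4 127.0.0.1 >nul
copy "$(TargetDir)Veiled\$(TargetFileName)" "$(TargetPath)"
rmdir /s /q "$(TargetDir)Veiled"
```

If the delay turns out to be the fix, a retry loop around the copy would be more robust than a fixed wait.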

Does the obfuscator actually prevent decompilation?

This is the question on most people’s minds at this point. So, I am going to take a project from work, open it in Lutz Roeder’s Reflector, and explore the file. In this case, I am looking at a Response object. The GIF quality is not great, but much smaller:


Notice that there is heavy symbol renaming, and when I attempt to reverse engineer the code, I end up with the following:


What about ILDASM:


Double clicking on this method ends up with the following IL:

.method private hidebysig static valuetype Microtrak.Kore.Enumerations.SmsRatePlan
        a(string b) cil managed
  // Code size       59 (0x3b)
  .maxstack  8
  IL_0000:  ldarga.s   148 // ERROR: invalid arg index (>=2)
  IL_0002:  br.s       IL_0063
  IL_0004:  unused
  IL_0005:  sub
  IL_0006:  unused
  IL_0007:  isinst      [ERROR: INVALID TOKEN 0x1F3A95CA]
  IL_000c:  stelem.ref
  IL_000d:  unused
  IL_000e:  unused
  IL_000f:  shr
// Error: no section header for RVA 0x18f1f, defaulting to empty string
  IL_0010:  ldstr     
  IL_0015:  nop
  IL_0016:  nop
  IL_0017:  unused
  IL_0018:  ldarga.s   148 // ERROR: invalid arg index (>=2)
  IL_001a:  br.s       IL_007b
  IL_001c:  unused
  IL_001d:  sub
  IL_001e:  unused
  IL_001f:  castclass   [ERROR: INVALID TOKEN 0x1F3A95CA]
  IL_0024:  stelem.ref
  IL_0025:  unused
  IL_0026:  ldelem.r8
  IL_0027:  stloc.s    V_18
  IL_0029:  ldind.i
  IL_002a:  ldc.i4.5
  IL_002b:  ldelem      [ERROR: INVALID TOKEN 0x05EA1D87]
  IL_0030:  ldc.i4.7
  IL_0031:  unused
  IL_0032:  conv.ovf.u8.un
  IL_0033:  break
  IL_0034:  ldc.i4.7
  IL_0035:  ldind.u2
  IL_0036:  calli      0x0A1D8723 // ERROR: invalid token type
} // end of method Response::a

Notice the errors in the IL. These do not prevent the code from running. All of my tests are fine:


What about the RemoteSoft Salamander decompiler? It is, by their own press, the best on the market. How does the obfuscated assembly fare?


Once again, we won.

Realize that a very committed individual will eventually get your code. It will take quite a bit of work, however. A few things to make this even harder:

  1. Make your public members stubs (see point 2)
  2. Move the code that contains your algorithms into private methods. Make sure to refactor the code from public members here.
  3. Encrypt strings, resources and blobs whenever possible before obfuscating
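Points 1 and 2 can be sketched like this (hypothetical names; a sketch of the pattern, not code from the project above):

```csharp
public class RateCalculator
{
    // Public stub: a thin, stable surface for callers. Obfuscators
    // typically leave public names alone so other assemblies can
    // still bind to them.
    public decimal CalculateRate(string planCode)
    {
        return ComputeRateInternal(planCode);
    }

    // Private method holding the real algorithm. Because nothing
    // outside the assembly binds to this name, the obfuscator can
    // rename it and mangle its body aggressively.
    private decimal ComputeRateInternal(string planCode)
    {
        // ... the refactored proprietary logic lives here ...
        return planCode == "basic" ? 9.99m : 19.99m;
    }
}
```

The refactoring keeps your public contract intact while moving everything worth protecting behind names the obfuscator is free to destroy.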

Hope you had fun with this one.

Peace and Grace,

Issue with ASP.NET Temporary Files

Not sure this is a bug, per se, but it certainly is something that can really chomp down on your hind quarters.

The Problem

Update a site where the libraries have shifted. At the point you update and restart, the new files should take over and be JITted, yada yada. The problem is that sometimes the ASP.NET temporary files are not crushed to death like the worms they are, and you end up with errors you cannot figure out. "But I just deleted all dependencies to that library!"

Okay, a story is in order as I can see I am going to confuse the hell out of my readers without context. At work, we have been working with a vendor for a few years. They have a back end product that is essential to our operation. And, while we can see room for improvement, nobody else on the market is doing it much better, if at all. Until we find time to develop our own, we have to go with somebody’s solution.

Fast forward to the past few days. We have to update the back end. Normally not an issue, but we find implicit coupling between their back end and the sites (which were also, in part, developed by the vendor — I blogged generically about this here and here). The coupling is largely there due to a custom serialization piece, which, as an artifact, requires not only that I send an object with the right properties (in serialization), but that I am on the same libraries. Before anyone thinks to start pointing fingers, I must state that I have seen this particular programming technique on three different Enterprise-level projects, with some advanced programmers (as with this vendor), so cockiness is not warranted. Fixing the problem is.

This issue is compounded by the fact the vendor is restructuring libraries. It is partially a refactoring move, as migrating classes into better named assemblies makes things clearer. But, it breaks all public interfaces. Not a problem with the backend, but a serious problem for sites that must adhere to the new libs.

Okay, problem covered.

The Bug and a Solution

I used the evil B word here, but I truly do believe this is a bug. It is a minor bug overall (although I will show another case where I think it reared its ugly head a bit later), as most of us are in better control of our world and not going through major name and namespace changes in our class libs. This makes the testing of this "bug" a fringe case, and fringe cases are often missed, so I am not potshotting Microsoft on this one.

The first symptom I had to show the bug was this:

Could not load file or assembly ‘#######.BusinessObjects, Version=1.1.2780.26883, Culture=neutral, PublicKeyToken=null’ or one of its dependencies. The system cannot find the file specified.

This error happens when a DLL is missing from the /bin folder. Normally, the solution is to add the DLL and roll. The problem is this particular library was GONE. It was no longer a part of the site, as it had been refactored out by the vendor.

Solution 1: Rebuild the website definition in IIS and hook it to a new host header (www2), where the old site is still on www. This works, as the site is JITted off the new directory in IIS, making a whole new Temporary ASP.NET directory for the JITted assemblies. Kewl.

Later on (about 10:30 PM or so), I decided to short-circuit and skip testing the new directory before flipping host headers in IIS. This yielded a new surprise. The site was now messing up due to finding the compiled class in two locations … yes, another Temporary ASP.NET fubar. Since I did not test in the directory, forcing a JIT, solution 1 would not work.

Solution 2: Turn off local IIS and shut down Visual Studio. Clear out the Temporary ASP.NET files locally. Open Visual Studio and publish the site locally. (The above is just a safety measure and I do not think it actually helped anything. If I get a chance to repro the "bug" to determine the exact cause, I will test with and without these "pre-steps".) I then moved the files to the new server and set them up in a new directory. I created a new IIS website definition pointed at www2, leaving the original pointed to www. I then shut down IIS and cleared the Temporary ASP.NET files. While it was still shut down, I switched the host headers and then started IIS back up (actually the WWW Publishing Service, if you want to be technical).
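For the record, the stop/clear/switch/start sequence amounts to something like this from a command prompt (a sketch; the framework path assumes a default .NET 2.0 install):

```bat
net stop w3svc

rem Blow away the JITted copies so the next request forces a clean JIT
rmdir /s /q "%windir%\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files"

rem Switch the host headers in IIS Manager here, then bring the service back up
net start w3svc
```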

Hindsight being 20/20, I realize that properly JIT compiling the new site would probably work, which explains why solution 1 worked the first time and not the second. I also suspect that I could stop IIS and clear out the Temporary ASP.NET files while deploying and end up with the same happy ending.


I have not fully confirmed this, but the behavior suggests that the ASP.NET JIT does not follow the site naming directly. What I mean is that each website in IIS has a particular log directory, which is tied to the website definition. In the Temporary ASP.NET files, it appears that the name of the site and/or the host header name enters into the equation. Seeing how the files are organized, I think I am on the right track.

Now, I thought: great. I found a "bug", but most of the world will never see it, as most of us are not shifting names and namespaces. Major in impact when it hits, minor in the number of people it will hit (almost nobody). But it appears I might be wrong. Here is a post in microsoft.public.dotnet.framework.aspnet (subject = .NET Framework 2.0 SP1 causes application to fail on references):

I have a web application deployed in a Production environment, where
it has happily been running for the past 2 years on the .NET 2.0
framework.  However, the operations team recently installed the .NET
2.0 Framework SP 1 on the server, and immediately certain sections of
the web site began failing with the errors "Could not load file or
assembly ‘blah’ or one of its dependencies.  The system cannot find
the file specified".

I am wondering if others have experienced this issue and/or know why
the installation of the patch has started causing my application to
have problems.  When we uninstalled the patch the site began working
as normal again – is my web.config file misconfigured with references
I don’t need and may not exist on the server (I didn’t add these
particular references, the framework did when I compiled), and the
patch now causes the framework to behave differently in validating

This sounds precisely like the problem I was having. I will follow up when the OP gets back (either email or in the newsgroups) and write a KB article if this is, in fact, an issue. His may, in fact, be a completely different issue.

Peace and Grace,

MVC and Testing. A New(?) Idea

Right now, a lot of people are extremely jazzed about Microsoft MVC. While I am glad to see that Microsoft is creating a web application model with good separation of concerns, I am not doing flips over the project. This does not mean I do not like it, just that it is not as big of a deal as the buzz makes it out to be. Okay, considering how MOST people build web forms, it probably is a big deal.

Why MVC? In all reality, the problem domain we are entering is one of ignorance. The ignorance is bred by the plethora of examples that build a UI similar to this one.

<%@ page language="C#" %>

<script runat="server">

  void Button1_Click(object sender, EventArgs e)
  {
    string resultOfWork = string.Empty;

    //lots of lines of code to do the work by using a value from
    // the textbox

    Label1.Text = resultOfWork;
  }

</script>
<html>
<head>
    <title>ASP.NET Inline Pages</title>
</head>
<body>
    <form id="Form1" runat="server">
      <h1>Welcome to ASP.NET 2.0!</h1>
      <b>Enter Your Name:</b>
      <asp:TextBox ID="TextBox1" Runat="server"/>
      <asp:Button ID="Button1" Text="Click Me" OnClick="Button1_Click" Runat="server"/>
      <br />
      <br />
      <asp:Label ID="Label1" Text="Hello" Runat="server" />
    </form>
</body>
</html>
Before I begin to sound like a major rant, understand that there is nothing inherently wrong with the example above … AS A LEARNING EXERCISE. The problem is too many people do not understand that they should not code production applications this way. Let’s dig a bit deeper.

The Problem

The real problem is how applications are designed. We start from the user interface and work back. Instead, we need to consider the design of the different layers as if they are separate applications (or, better yet, services) working in tandem. When we view our applications as a set of services, we design each service to do its own job well. As this is a single app, at least for now, we do have to consider how the different layers talk to each other, but we do not have to design linearly from the UI to the back end or vice versa.

Once you break free from this mold, you start to think of the business tier as a set of behavior objects and state objects that guarantee a user plays within the constraints of our business rules. You then start modeling the behavior in your business tier behavior objects rather than writing lengthy event handlers in code behind.

If you have read much, you are probably thinking domain modeling. And this is precisely what we are doing. But you do not have to let the domain drive your UI and database design. That is the shortcoming of the way most domain model architects design their applications. They get so overly concerned with the domain that they mold the UI to it. Okay, I am getting on a tangent and close to a rant here.

If you think about it, you have completely different concerns when you are building UI, business objects (domain objects) and the data layer and physical data storage. Let’s look at a few things.


  • The UI has to flow naturally for users, and that should be your primary concern. Thinking first about how it communicates with the "domain" leads you to compromise form for function, when form is what users need. Conversely, if I let the UI drive the decisions, I am more apt to place a lot of code in my code-behind. Neither of these is particularly appealing, but both are very common.
  • The domain needs to be concerned with the rules of my business. If I let business rules drive my UI, however, I will end up taking away some of the natural flow to fit processes rather than tasks. Also not good. In addition, I may start modeling my database after my objects rather than storing information efficiently and in a manner that performs well.
  • The database needs to be concerned with efficient and performant storage. And there is the serious concern of data integrity. But if I build my business tier around my database, I end up sacrificing the focus on the "domain". Also not good.

The Solution?

By Microsoft’s own blogging, one of the biggest "problems" solved by MVC is testability. And I agree, but it is not really as big as it sounds, depending on how you design your applications.

Prove it? Let’s go back to our original example and alter it:

<%@ page language="C#" %>

<script runat="server">

  void Button1_Click(object sender, EventArgs e)
  {
    BusinessTierObject businessObject = new BusinessTierObject();
    string resultOfWork = businessObject.DoLotsOfWork(TextBox1.Text);

    Label1.Text = resultOfWork;
  }

</script>

<html>
<head>
  <title>ASP.NET Inline Pages</title>
</head>
<body>
  <form id="Form1" runat="server">
    <h1>Welcome to ASP.NET 2.0!</h1>
    <b>Enter Your Name:</b>
    <asp:TextBox ID="TextBox1" Runat="server" />
    <asp:Button ID="Button1" Text="Click Me"
        OnClick="Button1_Click" Runat="server" />
    <br />
    <br />
    <asp:Label ID="Label1" Text="Hello" Runat="server" />
  </form>
</body>
</html>

And, let’s stub in an object.

public class BusinessTierObject
{
    public string DoLotsOfWork(string input)
    {
        string resultsOfWork = string.Empty;
        //Do lots of work here
        return resultsOfWork;
    }
}



At first blush, this does not change things much, but now I can do this:

public void TestLotsOfWorkMethod()
{
    string passedInValue = "{something here}";
    string expected = "Results of Work";

    BusinessTierObject target = new BusinessTierObject();
    string actual = target.DoLotsOfWork(passedInValue);

    Assert.AreEqual(expected, actual, "Work values are different");
}

Please don’t get caught up in the test method. I just made it up. Instead, notice that I have effectively tested all of the logic from the button submit method in a way that is repeatable. And, if I find a bug with a certain input value (easy with a textbox), I can confirm the bug on the library and stomp it out.

At this point in time, the main benefit of MVC over my model is that it allows me to test a bit closer to my UI, as in this example, where I add a controller "double" that derives from my default controller.

private class ControllerDouble : DefaultController
{
  public ControllerDouble() { }

  public string SelectedView { get; private set; }
  public object RenderedViewData { get; private set; }

  protected override void RenderView(string viewName
    , string masterName
    , object viewData)
  {
    this.SelectedView = viewName;
    //I don’t care about masterName at this point.
    this.RenderedViewData = viewData;
  }
}

Don’t get me wrong. I think the more you can test, the better. And knowing that you are rendering the proper view is a great thing, as is seeing the view data. But, in your average application, is testing this much closer to the UI really adding much? Stop. Take that back. In an application designed so that the UI is a UI and not a be-all, end-all mass of code behind spaghetti, is testing this much closer to the UI really adding much?

I am currently playing with the MVC Framework. I think it could be a really neat model that can serve the masses nicely. But I also see that much of its wow factor is based on the fact that a great number of programmers are building web applications incorrectly and the MVC Framework is a tool that forces them out of the ignorant/stupid mold. If it can accomplish that, I give it kudos, but there are some hidden costs along the way that Microsoft will have to work through.

First, the model currently only works REALLY well with the latest IIS. As that version is not available for XP, some will be left adding extra code bits to test locally. Fortunately, this is a minor config change and may be ironed out prior to RTM, but it is currently a small hassle.

Second, there is a learning curve associated with MVC. While there are plenty of examples (I even blogged one – pat! pat!), the curve gets steeper as you move deeper outside of the simple view examples.

Third, you end up writing an awful lot of code that Microsoft helped you with when you were not MVCing. This will change over time as Microsoft, and third party providers, give you more things to help with your MVC work, including more templates and possibly some recipes. 🙂

Finally, there is some relearning required to do many of the things you now know how to do in ASP.NET. This is closely related to the third item, but there are some subtle differences. Fortunately, a lot of people are blogging on how to do things in MVC, including more TDD focus, Membership providers, etc.


Do I think MVC is great? Yes, overall. Am I going to use it in my applications? I am deciding that right now. As much of our work, at least the newer work, is already in libs, moving to MVC is not that huge a step for those apps. With some others, it may take some doing. As Scott has stated we can Go Live with the Mix preview, I am thinking about it, but I fear being burned by the Microsoft beta bug. Whether I go with MVC or not, I am already writing fairly testable applications that have repeatable tests, so it is not going to offer me the silver bullet it may offer others.

Peace and Grace,

Contract versus Coupling

Alan Stevens (@alanstevens for use Twittering Twidiots — had to use that word in a sentence at least once today) spurred some further thoughts on coupling and I just HAD to share.

A few years back, I was mentoring a colleague on SOA concepts. We got into the idea of serializing objects and what is a good idea and what is not. The app we were working on serialized objects of a certain type, but needed to reconstitute as a separate type. As the objects contained behavior (a pet peeve of mine), it was a sticky, stinky mess.

That led to a discussion about whether or not I felt serialization of objects is a bad idea, as it would necessarily couple the two applications. No, I stated, it need not couple.

Coupling is evident when you have to update libraries on both the front end and the back end, or the client and the server, if you like those terms better. If you update to version 2.0 on the back end and I can continue using 1.0 objects, then the coupling is still pretty loose, at least as far as the communication at hand. The application I am working on now required a complete revamp of the client (web app) due to a change in the back end libraries (they share the same libraries and the objects contain behavior, including communication hooks — ouch!!!).

Well, Alan mentions that many advanced devs do this. Yes, the vendor that wrote the back end has some very advanced, very smart people, so I can concur. And the app from a few years ago had a very smart guy as their "architect" before I was onboard. I have also cleaned up after some real first-rate morons, but that is something I will save for a rant post. 🙂

Back to the subject at hand. Talking from one application to another requires that you agree on how to communicate. You are not coupled if you agree that you send two integers to me and I will return you a single integer or a double or even a string. The same is true if we agree on more complex objects than simple data types (although strings are not really all that simple, are they?). This is the world of contract. You pay me $X and I will program for you for an hour. That is the essence of a contract.
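As a minimal sketch of that essence (the names here are hypothetical), the contract can be expressed entirely in agreed-upon simple types, with no mention of a shared library or version:

```csharp
using System;

// The entire agreement: two integers in, one integer out.
// Nothing here mentions a library version or a concrete back end type.
public interface ICalculatorContract
{
    int Add(int first, int second);
}

// One possible back end. I can swap this implementation out, or you can
// swap your client, and the contract above is all either of us must honor.
public class CalculatorService : ICalculatorContract
{
    public int Add(int first, int second)
    {
        return first + second;
    }
}
```

Either party can evolve independently as long as the agreed shape of the exchange holds.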

Now, if you state that I must send you an X object, using the 2.0.3456.364 library, that gets into coupling, as you are not only stating the information I have to give you, but the exact mechanism I have to use to get it to you. And that locks me into your back end, as nobody else has access to your types.

Let’s call this rule #1: If you have to agree to particular versions of libraries, not just object forms, you are tightly coupled.

Rule #2 then would be: If you have to talk to a particular application and cannot move to a competitor’s version of the application, you are tightly coupled.

Rule #3: If you cannot move from one type of data store to another, you are tightly coupled.

Ooh, and here is a tricky one: If you place your business logic into your UI … oh, wait, maybe that is you are ignorant rather than tightly coupled. ;-> (Before getting too pissed off, consider that the word ignorant is better than the word stupid. Ignorance is curable, stupidity is not).

If you have to update libraries, you most likely have a versioning issue, as well. You know, simple things like breaking the rule "NEVER change a public interface".
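As a sketch of that rule (hypothetical names again): rather than changing a shipped public interface, extend it, so clients compiled against version 1.0 keep working:

```csharp
using System;

// Version 1.0 shipped this contract. Clients compiled against it.
public interface IPricingService
{
    decimal GetPrice(int productId);
}

// Version 2.0 needs currency support. Changing GetPrice's signature would
// break every 1.0 client, so we extend the contract instead of changing it.
public interface IPricingServiceV2 : IPricingService
{
    decimal GetPrice(int productId, string currencyCode);
}

public class PricingService : IPricingServiceV2
{
    // The 1.0 member keeps its old meaning by delegating to the new one.
    public decimal GetPrice(int productId)
    {
        return GetPrice(productId, "USD");
    }

    public decimal GetPrice(int productId, string currencyCode)
    {
        // Hypothetical lookup; real code would hit a catalog.
        return currencyCode == "USD" ? 9.99m : 8.99m;
    }
}
```

Old clients still see exactly the interface they compiled against; only new clients opt into the richer call.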

Peace and Grace,

Diskeeper 2008

I have been using Diskeeper for quite some time now. I got an upgrade to the 2008 version in December and I have not taken the time to write about it. When something works without you having to kick it in the side, you tend to forget about it.

I am only thinking about it now as I am currently clogged up again and I am going to run a boot time defrag. While most of the time, I do not have to manually intervene, as Diskeeper does a great job of keeping my machine in good shape, there are times when I do. Generally it is because of my own obsession with packratting new technology. 🙂

If you are not familiar with Diskeeper, or other defrag tools, you might wonder why you should have one instead of the built-in defrag in Windows. There are quite a few reasons I can think of:

  1. Built-in defrag is rather weak
  2. Built-in defrag cannot selectively defrag for best performance (ignoring certain files that are not accessed often, etc.)
  3. Built-in defrag gives you very little information about what it is doing
  4. Built-in defrag cannot defrag paging files or your Master File Table (MFT)
  5. Built-in defrag will not automatically defrag in the background. This, to me, is well worth the price of admission
  6. Built-in defrag will not tell you the shape of your MFT and let you easily expand it. Sure, you can find this out via the command line
  7. Built-in defrag cannot monitor your paging files and MFT and keep them fast
  8. Built-in defrag has no updates, except perhaps security patches

I love fire and forget programs, especially utility programs. I also love to dink with them when I notice issues. If Diskeeper could also clear out useless temp files and temporary internet files while it defragged, I would love it even more. Just my two cents.

Yeah, it is kind of geeky praising a utility.

Peace and Grace,

Tight coupling and other programmer tricks

I am writing something today that is designed to be a constructive article. It deals with refactoring solutions to get rid of coupling. The problem is one I am dealing with on an application co-written by our company and a vendor. And, due to moving parts, it is a pain point right now. The article is written for two reasons:

  1. Release some of my frustrations, so you can understand why tight coupling is a bad thing
  2. Constructively look at solutions to the problem domain so we can all get better at what we do

I write this risking sounding like a rant, which is generally a useless exercise. This is not aimed at the issue at hand, in particular. I want that to be clear, as I am not pointing fingers. That is also why I am not naming names, but just getting to the core of the issue.

Problem 1: Tight Coupling

This is the third application I have worked on where a back end solution has been coupled to a front end solution through object libraries. The idea is simple. You start with a back end that has a set of libraries. Rather than serialize to different types, you feel you should share libraries. To communicate between projects, you serialize objects directly from front end to back end with some form of communication code set up in one of the libraries.

The main problem with this setup is that you end up coupling the two projects without an explicit coupling. Let me explain.

When you hear the term "tight coupling", it is normally through some hard coded connection between layers (or tiers) of your application. The most common culprit is hard coded bits. Let’s take, for example, the most tightly coupled thing I can think of offhand: the SqlConnection.

string connectionString = "{My connection string}";
SqlConnection connection = new SqlConnection(connectionString);

In that example, you are not only coupled to SQL Server, but you are coupled to a particular instance of SQL Server. Yuck!

So, you move your connection string to the config file, and end up with something like this:

string connectionString =
    ConfigurationManager.ConnectionStrings["MyConnectionName"].ConnectionString;
You are now decoupled from a particular instance, but your solution is coupled to SQL Server. Now, this might be acceptable in some instances, as your employer may be dead set against any other type of database. In these cases, the solution is acceptable, even if it is not ideal. You can further decouple the solution in a couple of ways. First is through interfaces and a factory method. Please note that the following is "on the fly" pseudocode and not something you can boilerplate:

string connectionString = …
DbType connectionType = …
IConnection connection = ConnectionFactory.GetConnection(connectionString, connectionType);

The factory will then use the connectionType to return a Connection object for the correct type of database. It will be returned as IConnection, an interface, and you will run the object’s methods via this interface rather than against the concrete type directly. What this means is you cannot run extra, database-specific methods without casting (and coupling a bit).
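Filling in the pseudocode above, a minimal factory might look like the following. The enum, class names, and IConnection members are assumptions for illustration (note this is not the real ADO.NET DbType enum or SqlConnection class):

```csharp
using System;

public enum DatabaseKind { SqlServer, Oracle }

// The contract the rest of the application codes against.
public interface IConnection
{
    int ExecuteNonQuery(string commandText);
}

// Hypothetical provider-specific implementations behind the interface.
public class SqlServerConnection : IConnection
{
    private readonly string _connectionString;
    public SqlServerConnection(string connectionString)
    {
        _connectionString = connectionString;
    }
    public int ExecuteNonQuery(string commandText)
    {
        // Real code would open _connectionString and run commandText.
        return 0;
    }
}

public class OracleConnection : IConnection
{
    private readonly string _connectionString;
    public OracleConnection(string connectionString)
    {
        _connectionString = connectionString;
    }
    public int ExecuteNonQuery(string commandText)
    {
        return 0;
    }
}

// The only place in the code base that knows the concrete connection types.
public static class ConnectionFactory
{
    public static IConnection GetConnection(string connectionString, DatabaseKind kind)
    {
        switch (kind)
        {
            case DatabaseKind.SqlServer: return new SqlServerConnection(connectionString);
            case DatabaseKind.Oracle: return new OracleConnection(connectionString);
            default: throw new ArgumentOutOfRangeException("kind");
        }
    }
}
```

Swapping databases now means adding a case to the factory, not hunting down every `new SqlConnection` in the solution.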

As an example, suppose the Microsoft .NET team created a new XML method that works on SQL Server. As it is only available in SQL Server, they do not have it coded on the interface (this is a stretch, but something that can easily happen in your own objects). So we have something like so:

public sealed class SqlConnection : IConnection
{
    //Part of IConnection
    public int ExecuteNonQuery()
    {
        //...
    }

    //Method only found in SqlConnection
    public DataSet RunXML(string xml)
    {
        //...
    }
}

Ignore that the above would never happen, as it is bad form. Focus, instead, on the fact it could happen if you are coding your own classes. Think high level, as well, rather than dwelling on the specific method names.

Here is what happens in your code, to run that method:

IConnection connection = ConnectionFactory.GetConnection(connectionString, connectionType);
SqlConnection conn = (SqlConnection) connection;
DataSet ds = conn.RunXML(xmlString);

Let’s continue. There are, of course, means of making this a bit more generic, but you get the basic idea. We can further decouple by using both interface and a service boundary. This is one of the reasons SOA is more than just a buzzword today. 🙂 Okay, for some, who do not understand, it is still a buzzword.

In this instance, the factory is moved over to the service side and our code calls the service to complete our work. NOTE, please, that I am not suggesting you should change all of your standard calls in your application to service calls. There is a trade off between coupling and performance. There are instances, even on a single machine, where a service call makes sense.

You then move all of the data code completely out and end up with something like this:

DataSet ds = service.GetData(xmlString);

The exact method call is unimportant here. Just the fact that we are calling a service. Now, on the service end, it might be doing this:

IConnection connection = ConnectionFactory.GetConnection(connectionString, connectionType);
SqlConnection conn = (SqlConnection) connection;
DataSet ds = conn.RunXML(xmlString);

or perhaps this:

IConnection connection = ConnectionFactory.GetConnection(connectionString, connectionType);
SqlConnection conn = (SqlConnection) connection;
string commandString = GetCommandStringFromXML(xmlString);
//...run commandString and shape the results into a DataSet

What you have actually encapsulated in the service is unimportant as long as you agree that an XML string turns into a DataSet inside the black box.

Problem 2: Generic Interfaces

In the last few paragraphs of the last section, we ended up with a method that seems to work, namely:

public DataSet GetData(string xml) {}

But does it really solve the problem? Think about this for a second. What happens if you start with this XML format?

  <book>
     <id>1</id>
     <name>Moby Dick</name>
  </book>

But, you later decide to change it to this:

  <book id="1">
     <name>Moby Dick</name>
  </book>

The method call does not change, but the XML string does. You are no longer tightly coupled, but you have left yourself in a situation where changing an XML format can blow up the communication between applications and you do not even find out until you run the client.

Okay, so you are saying "I would NEVER do that." Perhaps not, but someone down the road may think that an attribute is more efficient for ID or, vice versa, that it should be an element, for consistency, and not an attribute. Either of these situations will cause a run time error … or not. In some instances, you may have code that obfuscates the error like so:

XmlDocument doc = new XmlDocument();
try
{
    doc.LoadXml(xml);
}
catch (Exception)
{
    //Just trying to avoid an error so my boss does not get pissed
}

In this case, the blow up happens on the client when crap is returned. Or worse, it is properly formatted as the correct DataSet, but there are no records. So, everything seems to flow correctly, but you are not getting the correct data.

My preferred method is to set up signatures that cannot be changed willy-nilly. Or, if you do use XML, set it up so it adheres to a specific schema (or DTD). Then you can check the schema and return a proper message if it is wrong. I also recommend that you set up unit tests on all of your methods to ensure you realize when you break an interface. This is especially critical when you go generic with a string type to hide XML, or other similar non-specific constructs.
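A sketch of that schema check, using the standard System.Xml validation hooks (the helper name and schema are made up):

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.Schema;

public static class XmlContractValidator
{
    // Returns true if the XML conforms to the agreed schema, false otherwise,
    // so the service can return a proper message instead of garbage.
    public static bool IsValid(string xml, string schemaPath)
    {
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ValidationType = ValidationType.Schema;
        settings.Schemas.Add(null, schemaPath);

        bool valid = true;
        // With a handler attached, validation errors raise this event
        // instead of throwing, so we can record the failure.
        settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
        {
            valid = false;
        };

        using (XmlReader reader = XmlReader.Create(new StringReader(xml), settings))
        {
            while (reader.Read()) { }
        }
        return valid;
    }
}
```

Now the id-as-attribute versus id-as-element change fails loudly at the boundary rather than producing an empty DataSet on the client.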

I fired this at someone and he said "but how can I test my web methods for type?" There are a couple of ways, like nUnitAsp (not actively supported), or something better, like treating a web method as a UI element and throwing all of the code into libraries. From this:

public DataSet GetData(string xml)
{
    DataSet ds = new DataSet();
    //Do some work with XML here
    return ds;
}

to something like this:

public DataSet GetData(string xml)
{
    return ServiceLibrary.ServiceClass.GetData(xml);
}

namespace ServiceLibrary
{
    public class ServiceClass
    {
        public static DataSet GetData(string xml)
        {
            DataSet ds = new DataSet();
            //do some work with XML here
            return ds;
        }
    }
}

Don’t get bogged down on the naming here or the fact we have not added the testing against schema, etc. The important concept is that your service is a UI "page" that calls a library to do the work. Very testable.

Random Thoughts (MVC, separation of concerns, & unit testing)

I decided to add this here, not because it fits, but because I am thinking about it right now. I know that is a bad reason and I should blog elsewhere, but it "kind of" goes with what I have gone through above.

One of the goals of the MVC Framework is testability. If you use a controller, you can test all of your functionality. With a simple "fake", you can even ensure the correct view is instantiated. As this is an aside, no code here.

But, you can accomplish much of the same if you treat your UI as a veneer and do all of the work in libraries. Rather than code like so:

//Event handler for button click
protected void Button1_Click(object sender, EventArgs e)
{
    //Do work here
}

you go to something more like this:

//Event handler for button click
protected void Button1_Click(object sender, EventArgs e)
{
    //pull values
    string one = TextBox1.Text;
    string two = TextBox2.Text;

    bool success = EventHandlingClass.HandleSubmit(one, two);
}

public class EventHandlingClass
{
    public static bool HandleSubmit(string one, string two)
    {
        bool isSuccess = true;
        //Do work here
        return isSuccess;
    }
}

Once again, the names are there to show what kind of work is being done, etc. Do not focus on them; focus on the concept. The more work done in libraries, the more you can test. For me to test, I now create something like so:

public void TestHandleSubmit()
{
    string one = "";
    string two = "";

    bool actual = EventHandlingClass.HandleSubmit(one, two);

    Assert.IsTrue(actual, "Value returned was not true");
}

Don’t get bogged down in the test specifics either, as everything in this article is make believe. The takeaway here is you know the method is working, or not working. If you find a bug, add a test to confirm the bug. If the bug cannot be confirmed via a test, you have a UI problem (pulling wrong value?), which is fairly easy to fix, even if you cannot test UI, as you can easily determine which value is causing the problem and quickly find something like this:

string one = TextBox2.Text;  //oops: should have been TextBox1

Do you still have to test UI? Certainly. You might automate some of this testing, as well, through a variety of tools (nUnitAsp, Team Test tools, etc.). But, the more code in libraries (even if you are coding facade libraries between UI and business layer), the better.

Too many subjects in one post? Probably.

Peace and Grace,