The Dangers of Refusing to Shift Paradigms and Other Development Mistakes

I was examining some old code I still had on a hard drive, and it got me thinking about this topic. It also applies to some recent assignments I have had, so it is still timely. Even with a nearly ten-year-old technology, there are still people writing code that would be better suited to its predecessor. In particular, I am focused on ASP.NET and traditional ASP.

The main thrust of this post is that you have to make paradigm shifts as new technologies come out. Here I want to focus on ASP.NET in particular, as it has been a hotbed of people who have refused to make the paradigm shift from script-based ASP code to compiled, function-based (or event-driven) ASP.NET code.

My first foray into ASP.NET was in 1999 or 2000. It was an early alpha build offered up to Microsoft MVPs. Shortly after, I got in touch with IDG, which became Hungry Minds, which was bought out by Wiley (who wanted the Dummies line), and wrote the book ADO.NET and XML: ASP.NET on the Edge. As I was writing, I quickly discovered I was writing my code in classic VB 6 style. To get into the .NET paradigm, I forced myself to learn C# (now my MVP designation) and then wrote the book in VB based on what I had learned.

Most of the first ASP.NET books were atrocious, as author after author was doing the same thing. Most of the code was written like so:

    Dim connection As New SqlConnection("string here")
    Dim command As New SqlCommand("command here", connection)

    For Each row As DataRow In ds.Tables(0).Rows
        '... work with the row here
    Next

Few books even covered code behind, and the few that did devoted only a single chapter to it. This includes books from authors who have written many great books about Microsoft web technologies.

Personally, I had to fight the editor on three points:

  1. The code should primarily be written in code behind – I lost this one overall, but had plenty of code behind samples in the later chapters, when less attention was being paid
  2. I wanted to code in Visual Studio .NET – My compromise was that the sample code was done both with a Visual Studio .NET solution and a make.bat file to compile the solution
  3. I wanted to write the main code body in class files – Marginal success here, despite the argument that I wrote most of my traditional ASP applications with the majority of the code in VB COM libraries hosted in MTS

You can probably still download the code at the Wiley extras site, if you are curious. But this is not the point, as most of the books actually had decent code. They just missed the paradigm shift to code behind. Let’s take this a step further.
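To make the shift concrete, here is a minimal sketch of the code-behind pattern those books glossed over (the file, class, and control names are mine, for illustration, and I am using ASP.NET 2.0 syntax; the 1.x books would have used Codebehind/Src rather than CodeFile). The markup page holds only tags, and the logic compiles into a class:

    <%-- Default.aspx: tags only, no script blocks --%>
    <%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="DefaultPage" %>
    <asp:Label ID="GreetingLabel" runat="server" />

    // Default.aspx.cs: the logic lives in a compiled class
    public partial class DefaultPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Set control properties instead of writing strings to the page
            GreetingLabel.Text = "Hello from code behind";
        }
    }

The separation, not the exact syntax, is the point: the page is declarative, and the behavior is compiled and testable.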

Missing the entire paradigm shift

As late as three years ago, I was fixing a website written for my company by a vendor with code like this:

    protected void Button1_Click(object sender, EventArgs e)
    {
        foreach (DataRow row in ds.Tables[0].Rows)
        {
            //... section writing header row here

            while (counter < columnOrder.Count)
            {
                //... repeat for each column
            }
        }
    }
This was an improvement over the previous version found in some pages, which used this format:

            outputstring = outputstring + "\t\t<td>";
            outputstring = outputstring + row[columnOrder[counter]];
            outputstring = outputstring + "</td>\r\n";

      //... finish entire grid here


This page had the extra special benefit of being paged with custom paging logic, all written out as concatenated strings. It was a nightmare to debug, as you had to examine the output in a browser and determine where it was wrong. In most cases, this meant running the HTML through a checker or an HTML editor to find where it was broken. And since the JavaScript was also output in this string, rather than emitted the standard ASP.NET way, you had to debug it by running it. Ouch!

The code above was actually not completely horrendous in traditional ASP, as you had to loop through Recordsets to get data out on the page, and Response.Write strWhatever was fairly standard code. The better developers would move the code into include files, but the model was interpreted, not compiled, so many of the kludges (from an ASP.NET standpoint) were dictated by the ASP model.
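For contrast, the standard classic ASP idiom looked something like this (a from-memory sketch, with rs standing in for an open Recordset and ColumnName for whatever field is being displayed):

    <%
    ' Classic ASP: interpreted script, walking a Recordset
    ' and writing strings, because that was the only model
    Do While Not rs.EOF
        Response.Write "<tr><td>" & rs("ColumnName") & "</td></tr>"
        rs.MoveNext
    Loop
    %>

In that world, string output was the model. Carrying the same habit into ASP.NET, where server controls and data binding exist, is where the paradigm mistake comes in.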

Not Invented Here (NIH)

I have seen this paradigm mistake taken to the nth degree by those who have Not Invented Here syndrome. This syndrome is present when you have the smartest developer in the world working at your location. In general, this guy is pretty good, but he has a serious phobia of anything that cannot be hand coded in Notepad and a strong disdain for anything drag and drop. Rather than assess what makes sense, he/she will rewrite everything. I am sure each of you has met this person at some time. Generally this person, through tenure, has risen to a high level in the company and is a strong silo of knowledge. As the current development efforts would tank without this person, he/she is normally treated with kid gloves. Clues that this person is present include at least one of these:

  1. Everything that is simple in .NET has been rewritten (due to a lack of trust of visual designers?)
  2. They have coded their own custom method of output (rewrote ASP.NET page delivery, for example)
  3. There are custom user or server controls duplicating all of the standard Microsoft controls that ship with the framework
  4. Every ASPX page is thin and contains at least one user control
  5. Most pages are rather simple, containing only the page directive and a single user control (yet none of the user controls is actually reused; everything is simply a user control)
  6. There is a complex hierarchy of container controls to help create the UI (in lieu of built-in containers like panels, et al, or divs with CSS for formatting, which gives true separation of layout and tags)
  7. The entire visual part of the page lives in C# code in a library that writes directly to the Response stream

In general, the code is very complex and advanced. It is quite obvious the person is very smart, at least as far as coding goes. And there is often a tendency to throw in every new technology, but not to trust any of the helpers available.

These are some things I have seen in different projects I have had to clean up, or have observed at client locations. Some are unique to a single client, while others have appeared time and again. The core takeaway is that any time you see someone rewriting a core technology (ASP.NET, BizTalk, web service transports, etc.), you should see red flags. In some rare instances, your caution might be unwarranted, but more often than not, rewriting a core system or technology from a major vendor indicates NIH and should be seen as a danger sign.

I will have to code a sample similar to some NIH implementations I have seen over the years. The code is most often NOT maintainable, and the founder of the code berates every newbie to the team. Their “not getting it” is not seen as an artifact of manufactured complexity, but rather as a sign of their inferiority to the initial coder.

NOTE: If you are in an environment where you see all of your co-workers as idiots, it means one of two things. Either they are idiots, and you should find a job with non-idiots. Or, more likely, you are over-engineering your code base to the point that it is becoming a detriment to your employer.

Note, if you are observing this person, that writing overly complex code is often an attempt to ensure job security, consciously or unconsciously, and is often compensation for a low view of self. Then again, sometimes you just have an egomaniac with sociopathic tendencies.

Is Complexity Bad?

Since I have strayed from the paradigm shift, I guess I have to state that complexity is not necessarily bad. When there are SLAs for performance, scalability, etc., you often have to create rather complex code to solve the problem. If the project is sane, however, this code is encapsulated in a library, and the rock stars in the organization control that code base. The rest of the team then consumes this complex code as a black box.

So, complexity is not bad, but overly complex systems that force a company to only hire rock stars are, in general, a sure sign that someone has grossly over-engineered the system.

In his book, C# in Depth, Jon Skeet gets into some rather complex topics. Or rather, topics that are not too complex as he presents them, but that definitely carry a learning curve: lambda expressions, using a Func as a constructor argument, or properly using generics.

One thing he does show, however, is how climbing that learning curve helps you write code that better shows intent. Using generics, or lambdas, can make code cleaner, more concise and even easier to understand. If you see these constructs in a project where the intent is still very hard to understand, the person coding it is … well, doing it wrong.
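To illustrate with my own example (not one from the book): compare filtering a list by hand to the lambda equivalent. Both assume a List<string> called names and require System.Collections.Generic and System.Linq.

    // Hand-rolled loop: the reader must infer what is being selected
    var longNames = new List<string>();
    foreach (string name in names)
    {
        if (name.Length > 10)
            longNames.Add(name);
    }

    // Lambda + generics: the intent is stated directly
    var longNamesLinq = names.Where(n => n.Length > 10).ToList();

Both produce the same list, but the second reads as a statement of intent rather than a recipe, which is the whole point of taking the learning curve.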


Since it is late and I had a hard time sleeping last night, I am dealing with two points here. The first, and main point of the entry, is that you have to shift your thinking when new technologies come out. If you continue to write COM-style code in .NET, you may end up with code that works, but it will not be easy to maintain. If you keep writing the old way and do not make the shift, you are not doing it the best way. Or, to cut through the crap and be blunt, you are doing it wrong.

The second is that advances in languages are there to help you write cleaner code that is easier to understand. They are tools to make intent clearer. If your code is less clear, you are doing it wrong and need to reassess what you are doing (refactoring is your friend).

As an auxiliary point, drag and drop is not a bad thing. In the case of ASP.NET, it helps you quickly build up the visual part of the page (the tags). Even with data, it can help you work RAD into the equation and quickly build sites. You do have to understand the underlying assumptions, use what helps you work faster toward quality, and pitch out what creates bigger headaches.

As an example of drag and drop that helps, but is generally considered bad, consider the DataSet. So much code comes out of dragging and dropping that many developers detest it. But, if you look at it logically, it is much quicker, especially when you first develop the site, to use what you are given. When you reach later cycles, you can refactor out the garbage and do it “right”, but you do this in the optimization phase. Hand rolling may give you the “proper” implementation the first time, but a simple highlight and delete later yields the same result in a fraction of the time it takes to handwrite the data container.

Does this mean you should use DataSets? In high-volume Enterprise sites, probably not. And hand rolling some things is not necessarily bad, and is often good. But you should ask “is this a first concern, or something I can refactor out later?” before running with “anything that can be dragged is a bad thing”.

My personal feelings? In general, everything Microsoft has requires a translation layer to get to domain models, especially if you have a heterogeneous application stack. Most of the drag and drop, while it can perform quite nicely (although not LINQ to SQL at the Enterprise level), adds extra FUD you don’t need in your Java apps. Just my two cents.

Peace and Grace,

Twitter: @gbworld

