CodeIt Right!


This is a first look at a product I have tried out called CodeIt.Right from SubMain.
 
I will have to spend some more time with the product to give a fuller review, as my circumstances are a bit unique.
 
First, I will start with what I like about the product. It folds into Visual Studio. Okay, so a lot of products do that. But, it also feels a lot like Visual Studio. In fact, if you routinely use Visual Studio Team System, it will feel a lot like the test loaders.
 
It is also very easy to use. In this instance, I am using CodeIt.Right on a legacy project, if you can call anything .NET legacy, that is. The project was first designed in .NET 1.1. To start the tool, you simply use the CodeIt.Right menu and choose Start Analysis. You then end up with a list of violations in the code. In this project, it looks like this. Ouch!
 
[Screenshot: CodeIt.Right main screen with the violation list]
 
In many ways, the product reminds me of FxCop, except it also goes through and corrects the code.
 
Before going deeper, I have to state that this is a very nasty project and most of the code is contained in the ASPX pages. Many of the rules here run contrary to the naming norms of a website project, even though I agree with what I saw.
With that disclaimer, I did find a couple of issues with the product. I cannot fully repro them, but when I do I will send them in to SubMain to get corrected.
 
The first issue was the renaming of classes. In a website, Microsoft names the code-behind class by the convention folderName_aspxFileName (in a file with a .cs extension for C# or .vb for VB.NET). This goes against the naming rules. I found that CodeIt.Right did not fully succeed in renaming, as it missed the <%@ Page declaration. It is an easy fix to correct these, of course. It does seem to catch all of the code file changes.
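For illustration, a website page ties the class name to the markup through the Inherits attribute of the page directive (the names below are hypothetical, not from this project):

```aspx
<%@ Page Language="C#" AutoEventWireup="true"
    CodeFile="Default.aspx.cs" Inherits="Admin_Default" %>
```

If a tool renames Admin_Default in the code file but misses this directive, the page no longer compiles.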
 
One other glitch I found was in the product incorrectly commenting the wrong point in the code. I think this happens only when you apply changes individually, and may only happen when you apply changes it was not planning on making. In short, I believe if you apply the fixes all at once or re-analyze the site after applying selected changes in a file, you will not have an issue.
 

Other Features

Beyond analyzing and fixing, some other features are the following:
 
  • Go to source code where problem lies – this one is expected
  • Report view – This is a basic summary view of problems, but it does allow you to view the data in a variety of ways (and even export to Excel).
  • Configurable rules – I have not played with this feature yet. I did look at the SDK documentation, however, and it appears fairly straightforward.
  • Can exclude/include rules and even set them up in different profiles for different groups/projects, etc.
  • Some refactoring – primarily this is the ability to fix errors by adhering to proper design patterns for .NET. I had to purposefully code in one of these problems to play with it, so it is not a big problem area for me.

My Feelings

This one is a bit of a mixed bag for me. Overall, I like the concept. As I have stated, it reminds me a lot of FxCop with the ability to fix things.

I am not sure about the pricing structure, although I can certainly see this paying off for companies. In this one legacy project, it probably saved me a couple of hours of work with FxCop. At my pay grade, I came close to what it would cost to buy the basic version. And this was only one of a handful of projects created at the same time.

I would definitely like to see the rules understand a bit more about ASPX. This project, as stated, is unique. It is also done counter to my normal push of separating logic from the UI. If I were using this product daily (which I might now), I would opt for a few rules of my own:

  1. Suppress some of the naming rules when working with ASPX pages in a website (not sure about web applications, but I think they use namespaces instead)
  2. Check in prior to trying to clean a full project – this is good sense no matter what tool you are using
  3. When running selected fixes in a single file, open the file and make sure the changes do not hit the wrong section. If so, unapply the bad ones and then reanalyze prior to trying them again. When I have a more complete repro (and time to repro), I will make sure this one is logged with SubMain, as I see this as the biggest glitch.

If the value of a tool is based on what it saves a company, I believe this one will hold merit for many, if not most companies. I have not checked to see if there is any direct competition with the same features, however.

Peace and Grace,
Greg


Refactor: now = immediately


I was talking to a business colleague who stated that it is nice to refactor immediately, but the reality of business is you do not always have the time. I agree with the idea on the surface, but there are so few refactorings that should wait until some later mystery date.

My case in point is the project I am working on now. Built two years ago and partially refactored. We had one group coding in VB and another in C#. While this works, even in a .NET 2.0 website, it is not an optimal situation, for many reasons. I did complete most of the refactor (operation Kill VB) about a year and a half ago.

Today, we are moving sites from one colocation facility to another. I am now finding what a chore it is, largely because we left a lot of bad code in the site. Everything is in the config, but it is a royal mess. And there is precious little documentation on the system.

Now, I have heard Corey Haines talking about comments being evil, and I agree with his premise. If you need comments, your code is not clear. I also think you can reduce the amount of technical documentation with a quick refactor. You still need to write some documentation, but you can greatly simplify things.

As it stands in this project, there are three different connection strings, all pointed to the same database. One extra connection string was necessary, as certain elements were coded using .NET Tiers in CodeSmith. Sure, I could have whacked at the classes in .NET Tiers, or better yet switched all of my connection strings to the same name, but it was not done at the time.

Here are a few rules I think you should stick by firmly:

  • Always remove dead code as soon as it becomes dead. Dead code not only affects performance, especially with JITting, but it also leaves points of confusion in the code.
  • Configuration files are code files, so the above rule applies to them, as well. There is nothing like searching through a config with tons of garbage used for nothing.
  • Get rid of globals, except where absolutely needed. In most cases, wrapping them in a Singleton makes the most sense, if you need something global.
  • Learn to feed methods. Assume they are dumb. This is one surefire way to get rid of globals in a class (module level variables). When you pull from outside the method, you create a potential point of error.
  • Reread the last rule and make sure you NEVER do this across class libraries. One of the things that absolutely pisses me off is when a library I have referenced pulls directly from the web config file. I can allow a bit of leeway if it is a web control, but having some vendor library yanking a mystery variable out of thin air is very hard to debug.
  • Always eliminate duplicate code. This is the most common and also the stinkiest of the code smells. If you are using the same loop, branch, algorithm, etc. more than once, move it to its own method.
  • Get your code out of your ASPX pages. And, no, App_Code is not an acceptable dumping ground. The ASP.NET “application” is a user interface. The pages, and classes, within should not be calculating user discounts. If you have to do this because you are used to it, make sure you leave time to undo it when you go to creating your test build. This pays dividends in numerous ways, including a) it is unit testable and b) you can swap out the UI for the latest Silverlight build without a huge amount of work.
  • Warnings are errors. This one is overlooked all the time, but it is critical to get rid of all warnings, as they are signs of weaknesses in your code. If you find an exception, don’t fret it. Rules are meant to be broken sometimes, but if you have more than a small handful of warnings, you are making WAY TOO MANY exceptions.
  • Be consistent in how you set up files, including config files. You need to be able to find stuff. ‘Nuff said.
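To make the “feed methods” rule concrete, here is a minimal sketch. The Pricing class and its names are mine, invented for illustration, not from the project above:

```csharp
using System;

public static class Pricing
{
    // The method is "fed" everything it needs; it never reaches out to
    // module-level variables, session state, or configuration, so there
    // is no hidden point of error outside the parameter list.
    public static decimal ApplyDiscount(decimal price, decimal discountRate)
    {
        if (discountRate < 0m || discountRate > 1m)
            throw new ArgumentOutOfRangeException(nameof(discountRate));
        return price - (price * discountRate);
    }
}
```

Because the method is dumb, it is trivially unit testable: hand it values, check the result, no setup required.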

I am sure there are others, but these were the most glaring on this project. And, yes, some of them were my fault. I am a reformed non-refactor coder … and I have a ruler for your knuckles.

Hope you can use some of this today.

Peace and Grace,
Greg

Fireproof, The Movie Review


Tiffany and I went to see the movie Fireproof last night. As we sat down to wait for the movie to start, it felt like a Harpeth Heights Baptist Church reunion on a non-church day. At least half of the people in the audience were from our church. As our Executive Pastor had “pimped” the movie, it did not surprise me. What was most surprising to me, however, was how little business theaters are doing today.

About the movie

The acting will not win any Academy Awards; neither will the lighting or the editing. On the other hand, it is better than the acting in Facing The Giants, the previous success from the Kendrick brothers. The lighting and film work are also better.

The movie is also a definite Christian film. It is unlikely to draw the big audience that Passion did a few years ago. I also do not see it matching the box office Facing the Giants made. This is not saying it is a horrible film, by any means. Just that it has a definite audience in mind.

Like other Christian films, there is the obligatory evangelistic scene. In this film, it works for a couple of reasons. First, it is between a father and his son. You can see a father talking to his son in this manner. Second, it takes place over a large amount of time. Third, while there is a final overt message, the conversation primarily beats around the bush, as one would in real life. Finally, the conversion does not change everything instantly for the better. In fact, some of the deepest turmoil in the film takes place just prior to the resolution.

There are some scenes that are quite predictable. In fact, if you have watched many films, you will know the devices by heart. Some are telegraphed from so far out in left field that they could only hit you by surprise if you ignored them coming at you.

On the positive side, the film deals with marriage in a realistic manner. Subjects like dual marital accounts, infidelity and pornography are not taboo. Like with many married people, these subjects rear their ugly heads and require effort from the individuals involved to solve them.

Will you like the film? I guess it depends on what you expect. If you are looking for the big budget Hollywood look, I would say no. If you are looking for the Sundance Indie, I would say no. If you want an overtly Christian film, jump on for the ride. If you would like to examine your own marriage, it might be a good starting place.

For the atheist, if you can get past the overtly Christian themes and scenes, the film has some good moments. And, I believe anyone who has a struggling marriage can take something away from it, even if he disagrees with the Christian message.

I would like to see a Christian film one day that presents day-to-day life without the churchy talk and attitude. Fortunately, this film does cover some things in a non-churchy way; unfortunately, those moments are not the bulk of the film.

Strong Christians do not spend all of their time being churchy. And it would be nice if we could stop hiding the elephant in the room by pretending it is not there. I truly believe the church today could be more effective if it would handle the tough issues straight on rather than ignoring them.

Peace and Grace,
Greg

Solving the ProfileCommon ambiguous error


It is always fun when you stumble on some interesting error. In general, it is something that wastes too much of your precious time, but it is still rewarding to solve the problem. This is one of those.

The Problem

Today, I was updating a site to get it running on another server. I spent the end of the day yesterday working with new versions of a vendor’s libraries, so today we were merely deploying a site that was working on my local box to the server.

I had a couple of issues that were expected.

1. One of the older sites referenced the 1.0 libs for AJAX (for .NET 2.0)
2. Some tags from 2.0 were incompatible with 3.5
3. The vendor libs had some changed interfaces

There were others, but once I got through the usual suspects, I found this one:

Compiler Error Message: CS0433: The type 'ProfileCommon' exists in both
'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET
Files\{more stuff here}\App_Code.DLL'
and 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET
Files\{same stuff as above}\App_Code.z9v2fk16.dll'

I went through the normal Microsoft stuff. I deleted the ASP.NET Temporary files (more on this in the next post). I went through and restarted IIS. I recompiled the site and republished (using Publish Site in Visual Studio). I switched back to 2.0. Nothing.

I then did a search for ProfileCommon in my project, and the first reference I found was flagged by ReSharper as an ambiguous reference. But it was running on my local box without a problem. So I searched to see if perhaps the vendor had coded his own ProfileCommon object anywhere. I would never have done this had there not been a new version of the library, as I would have known there was nothing wrong.

Side Note: If you have not looked into ReSharper, or competitive tools like CodeRush, you really should. Not only do these tools give you shortcuts for code blocks and refactoring aids, they will help find errors like this one. The site was running absolutely wonderfully on my local box, despite the ambiguous reference issue. Without ReSharper, it would have taken much longer, and more pure trial and error, to fix the problem.

Check Google again. Nothing. Then I stumbled on a post where a person stated something like “when I delete the Profile section of the web.config, I am fine. But when I put it back, it has the same error.” Ah, a lightbulb lit in my sick little brain.

Solution

This is what you really want to see. ProfileCommon is a class autogenerated by the Framework. It is generated when you add a profile section to your web.config. In this case, here is my profile section:

<profile enabled="true" defaultProvider="MySqlProvider">
  <providers>
    <add name="MySqlProvider"
         type="System.Web.Profile.SqlProfileProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         connectionStringName="MyConnection" applicationName="MyApp"/>
  </providers>
  <properties>
    <add name="SecondQuestionName" type="string"/>
    <add name="SecondQuestionAnswer" type="string"/>
    <add name="FirstQuestionAnswer" type="string"/>
    <add name="FirstQuestionSalt" type="string"/>
    <add name="SecondQuestionSalt" type="string"/>
  </properties>
</profile>

I have taken some liberties with the naming (obfuscating clues to the actual app), but you can see that the point is to add elements to the user profile for an encrypted set of security questions. This is an extension on the standard provider. I must add a note here, not because it is relevant to the solution, but because it is important to people seeing this and wondering if I would do it this way today.

The simple answer is no. I find that a custom provider is a much better model than mashing random stuff in the ASPNET_Profile table. I have written about this before. Since I am not adding random bits to profile or different quantities of bits, having the automagically generated stuff is not really in my best interest. At the time this was built, 2.0 was brand new and I had a junior developer to work with, so using as much stuff out of the box was a good idea.

Now, back to our regularly scheduled error. The solution is to delete this section:

<properties>
  <add name="SecondQuestionName" type="string"/>
  <add name="SecondQuestionAnswer" type="string"/>
  <add name="FirstQuestionAnswer" type="string"/>
  <add name="FirstQuestionSalt" type="string"/>
  <add name="SecondQuestionSalt" type="string"/>
</properties>

You then attempt a compile, which fails due to this line in the code:

ProfileCommon profile = (ProfileCommon)ProfileCommon.Create(user.UserName);

And you then put it back and compile again. It works.

More Notes

I am not 100% sure why this works, but it appears that something in the system creates a class. You then dink around with code, jump framework versions, whatever, and the class does not go away. But the Framework does not recognize the original class (from changing versions?) and creates its own. I think this has something to do with the name of the class, which appears to be somewhat random (perhaps hashed names?).

The exact why is not important. What ends up happening is the original autogenerated class is destroyed with the failed compile and when you successfully recompile the only class autogenerated is the new class.

Now, I would think this would be solved by completely clearing the Temporary ASP.NET files on both the server and the publishing development machine. This is not the case. There is also no code in the source tree. So Microsoft is caching these bits somewhere else. Where? Not sure yet.

One idea is it might be in memory. If so, a reboot would work.

Another idea is .NET has some other temporary location for files or alters the solution file on a compile or something. I do not have the time to test either of these theories, so you can have at it. It could also be something in the Temp files for Windows.

The short story is that one way to defeat issues with autogenerated code is to remove the trigger and then replace it after a failed compile. I may, some day, have time to figure out exactly why this occurs, but tomorrow I have another site with the same error to fix.

Peace and Grace,
Greg

National Punctuation Day


A short time ago, I wrote a blog entry about National Childhood Cancer Awareness Day. At the time, there were only 9 hits on news.google.com for the day, with only one actually on National Childhood Cancer Awareness Day.
 
Now we have the irony coming out. Today is National Punctuation Day and there are 28 news hits, with 7 from today. While I am fond of punctuation, just like any other thinking human being, I am not sure I would rank punctuation as more newsworthy than Childhood Cancer. Perhaps it is just me.
 
Peace and Grace,
Greg
 
My daughter’s site (Princess Miranda, cancer survivor)

Hidden Coupling in .NET Applications


This post deals with ways we couple applications and components together without realizing it. I will focus on a few things to watch out for and provide some design hints to avoid coupling.

Coupling through objects

One way we couple is through the objects we use. You do not find this as much in the SOA world, as the objects passed are attached to the service via the definitions in the WSDL. You are then free to change the underlying objects, as long as the public interface does not change. This is not completely true, as you can adorn objects in WCF, which makes the interface less than immutable, but the basic rule holds: anything published should remain the same, at least on that service call (or until you can completely deprecate the method).

But there are instances where one writes a custom method of talking to an application. While this is not extremely common in .NET, I have seen it in some vendor solutions we have. On this public interface, you require an object to be passed. And, if not set up correctly, you not only expect an object, but you expect the object from the 2.04.56.2375 assembly version. Ouch!

The easiest way to avoid this is already covered in paragraph one of this section. Use a web service of some type (ASMX, WCF, etc.) to hide the actual object type. This forces the user to create his own implementation of the object, but you can supply that library. In many cases, the client object does not require all of the FUD in the server object, so this is sane.
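As a sketch of what that first paragraph suggests, here is a hypothetical published contract type. OrderSummary and its members are illustrative, not from any real service; the point is that callers depend on this published shape, not on the server's internal types or assembly version:

```csharp
using System;
using System.Runtime.Serialization;

// A contract-first DTO: the WSDL/metadata exposes only these members,
// so the server is free to change its internal objects at will.
[DataContract]
public class OrderSummary
{
    [DataMember] public string OrderId { get; set; }
    [DataMember] public decimal Total { get; set; }
}
```

The client generates or hand-writes its own copy of this shape, which is exactly why no particular assembly version gets baked into the wire contract.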

Coupling Through Config

This one is a double-edged sword, as there are certainly instances where pulling directly from configuration is acceptable. Microsoft itself has many objects that are coded to pull directly from configuration. I have found that this idea limits the usefulness of the objects, but the objects are designed for web applications, so it is okay. I would still argue it is better to pass information to the objects and make them stupid.

In the case of Microsoft, the majority of the objects pulling directly from configuration use common bits of information, like a connection string or a part of the configuration that is automatically created for you when you first create a site. These items are less likely to cause a problem.

But let’s consider a “what if” scenario. It is a real one, as I just got bit by it. Suppose you needed to send a message to a service and you got back this error.

Object reference not set to an instance of an object.
Source: COMPANY.BRAND.Protocol
Stack Trace: 
   at COMPANY.BRAND.Messaging.Message.AddToDispatchQueue(IPAddress ip_address, Int32 port, String device_address, Int32 timeout, Int32 retry_count, Boolean cancel_on_retry_limit, Boolean duplicate_check, Boolean halt, DispatchMode mode) in P:\BRAND\Matchbox\trunk\lib\Protocol\Messaging\Message.cs:line 148
   at _Default.Button1_Click(Object sender, EventArgs e) in d:\projects\websites\ColoTest\Default.aspx.cs:line 37

Buried deep in the code base is an object reference. This could well be a problem with any number of object calls. Proper troubleshooting says look at the source code. But, what if you don’t have source?

Next step would be to look at the help documentation, but what if there is none?

You can also go to pulling out Reflector and examining the source. As a last ditch, you could examine the DLL in an editor and find the IL.

In this case, I decided to copy the entire configuration file from a working website to this test application. Sure enough, things started working. Thinking it through, it is a call to a database table that is the issue. The code probably looks something like this:

SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["GatewayConnectionString"].ConnectionString);
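A sketch of the alternative: instead of the library pulling "GatewayConnectionString" itself, feed the value in at construction. MessageDispatcher here is a hypothetical stand-in for the vendor class, not its actual API:

```csharp
using System;

// The connection string is handed in by the caller, who reads config at
// the application boundary; the library never touches ConfigurationManager.
public class MessageDispatcher
{
    public string ConnectionString { get; private set; }

    public MessageDispatcher(string connectionString)
    {
        // Fail loudly at the boundary rather than with a NullReferenceException
        // buried deep in a vendor assembly you have no source for.
        if (connectionString == null)
            throw new ArgumentNullException("connectionString");
        ConnectionString = connectionString;
    }
}
```

With this shape, a missing config entry blows up at construction time with a named parameter, instead of 148 lines into someone else's Message.cs.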

Instrumentation and Logging

This is a broad topic, as there are a bunch of ways to skin this cat. In anything you develop, there should be a way of examining the system to see what it is doing. This includes both metrics (how many messages are traveling through each second, for example) and the ability to trace whether messages are going through correctly.

One of the easiest ways to instrument is to make sure all messages are stored somewhere so they can be looked at. This can be in individual files or in a log file. It can also be in some other persistent store. If you are using a storage mechanism, you had better make it configurable so you can shut it off. As an illustration of why this is important, examine the following log folder. These are logs for a single unit communicating to this backend:

[Screenshot: log folder listing]

Yes, that is on the order of 11 MB of information, per day, that nobody is likely to ever look at. In this case, it is not that bad, as the system is still in test. But if this were a production system, it would produce files more to the tune of 35 MB of raw data, 23 MB in messages, 3 MB of warnings (not sure why these were never ferreted out) and ? MB of subscription messages. This is fine for development and testing, but ridiculous for a production system.
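As a sketch of the off switch described above (the MessageLog class is illustrative, not from this system; in production the Enabled flag would be fed from configuration at startup):

```csharp
using System;
using System.Collections.Generic;

// A message sink with an off switch, so logging can be silenced in
// production without touching the code that calls Write.
public class MessageLog
{
    private readonly List<string> _entries = new List<string>();

    public bool Enabled { get; set; }

    public void Write(string message)
    {
        if (!Enabled) return; // cheap no-op when the switch is off
        _entries.Add(message);
    }

    public int Count { get { return _entries.Count; } }
}
```

The callers never check the flag themselves; the sink does, so there is exactly one place to flip when the 11 MB a day stops being useful.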

Summary

Be careful where you place your calls to config. As much as possible, try to feed the objects rather than have them pull from the system silently. In addition to making errors easier to spot, this one tip makes for a more testable system, as you can send in a mock for that particular object in your test harness. And you know how I love unit tests. :-)

You should also make sure separate systems talk to each other in messages. If there are objects, they should be defined in the contract and not tied to a physical implementation of a particular library, as you may end up with versioning issues. As mentioned, WCF and ASMX both solve this problem.

Finally, make sure you have a way to find information about your system. While the focus, for most, is on finding problems, there is a good reason for knowing what is happening when everything is going right.

Peace and Grace,
Greg

Working with NULL fields in a Strongly Typed DataSet


This entry is here primarily because I have seen this question come up quite a few times. Perhaps when people Google, they will find this entry and avoid asking the question again.
 
With a strongly typed DataSet, a method is created for each field that can possibly come out of the database as a null. It is Is[Field]Null(). For example, if the field is named Title, the method is IsTitleNull(). So, if you would normally use the following code:
 
string title = ds.Employee[i].Title;
 
or similar, try this:
 
//You can set this up however you need to
//This is simply an example of one that has to display something
string title = String.Empty;
 
 
if(!ds.Employee[i].IsTitleNull())
{
    title = ds.Employee[i].Title;
}
 
The other option is to edit the DataSet(s), but if you ever regen, you are hosed again. In other words, editing the DataSet is a kludge. Don’t do it. And, yes, I have seen it done … lots of times.
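For anyone without a typed DataSet handy, the same guard can be sketched against an untyped DataTable; DataRow.IsNull performs the same DBNull check that the generated Is[Field]Null() wrapper hides. The Employee/Title names below mirror the example above but the table is built by hand:

```csharp
using System;
using System.Data;

public static class DataSetNullDemo
{
    // Builds a one-row table whose Title column is DBNull: the exact
    // situation the typed IsTitleNull() method guards against.
    public static DataTable BuildEmployeeTable()
    {
        var table = new DataTable("Employee");
        table.Columns.Add("Title", typeof(string));
        table.Rows.Add(table.NewRow()); // Title defaults to DBNull.Value
        return table;
    }

    // The untyped equivalent of the IsTitleNull() guard shown above.
    public static string SafeTitle(DataRow row)
    {
        return row.IsNull("Title") ? String.Empty : (string)row["Title"];
    }
}
```

Casting a DBNull straight to string is what throws; the IsNull check is the whole trick, whether you write it yourself or let the typed DataSet generate it.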
 
Peace and Grace,
Greg