Performance is Overrated

I see a lot of questions on the groups about performance. While I admire the push to build a well-performing app, or to learn new algorithms, performance is not the only aspect of development.

Several years ago I read an article about ASP (traditional ASP, that is) performance. The author, who shall remain nameless (mostly because I do not want anyone to track him down), stated that you can get performance gains by deleting all comments from your ASP code. There is nothing incorrect in this statement: you CAN get better performance in ASP by removing all comments. The exact amount of gain is in the millisecond range, but it is a performance gain. The problem with his statement is not its correctness, but that 99.9% of the time sacrificing maintainability for performance is a bad idea.

Now to the ASP.NET world. In ASP.NET 1.1, removing comments gave you absolutely no gain, as you precompiled the pages. In ASP.NET 2.0, it might give you a small gain, as reading the source lines may take a tick or two. I would not, however, remove comments from ASP.NET 2.0 for this reason (although I might remove them from a production server if I were using "on the fly" compilation and storing all of my source on the server). If you publish your site (i.e., precompile and package), there is no reason to remove comments.

When to Tweak

Before getting firmly into this subject, let’s understand smart versus stupid performance tweaking. There are certain items that we know we should always change. For example, take a look at this code (yes, this IS a real world example of what I have recently seen done):

Label1.Text = "Welcome " + user.FirstName + " " + user.LastName;

Strings are immutable in .NET, and it is expensive to concatenate strings repeatedly in this manner. Anyone who has been around .NET for a while knows that we should move to a StringBuilder, like so:

StringBuilder builder = new StringBuilder();
builder.Append("Welcome ");
builder.Append(user.FirstName);
builder.Append(" ");
builder.Append(user.LastName);
Label1.Text = builder.ToString();

In this case, you get a performance gain without losing anything. The code is just as maintainable as the previous example (maybe more so) and the performance gain can be tremendous, especially if this code is in a loop (which the above, inane example would not be).
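The gain shows up most clearly in a loop. Here is a rough sketch of the comparison (the loop count and the strings are invented purely for illustration):

```csharp
using System.Diagnostics;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        // Naive approach: every += allocates a brand-new string and copies
        // everything accumulated so far, so cost grows quadratically.
        string concatenated = "";
        for (int i = 0; i < 1000; i++)
            concatenated += "item " + i + "; ";

        // StringBuilder appends into a growable buffer and produces
        // one final string at the end; cost grows roughly linearly.
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < 1000; i++)
            builder.Append("item ").Append(i).Append("; ");
        string built = builder.ToString();

        Debug.Assert(built == concatenated);
    }
}
```

Both loops produce the same string; the difference is purely in how many intermediate allocations it takes to get there.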

But there are grey areas. In Visual Basic (.NET), there are plenty of functions in the Microsoft.VisualBasic namespace that help developers code. Most of these compile to .NET equivalents in IL. Some, however, do not, and add some weight to your program, slowing it down by a few milliseconds. Should you, therefore, write everything using the Framework equivalent? The answer is "that depends". Do you really need those extra milliseconds (i.e., do you have a performance problem, or foresee one at scale)? If the answer is yes, you might want to write everything the .NET way and avoid the Visual Basic shortcuts. If the answer is no, and using the Visual Basic routines speeds development and makes the code easier to maintain, then you probably should accept the slight performance hit to decrease time to market and time to fix bugs.

Another example. A few years ago, I worked on a project with EBCDIC files (EBCDIC is a "legacy" code page used by mainframe systems). We had to translate EBCDIC to ASCII. This can be done with StreamReaders by simply changing code pages. For performance, however, we coded in the binary world. Was it necessary? In this particular application, the files were multiple gigabytes in size and had to go through multiple steps to normalize for our database. Changing from string streams to binary streams reduced the time to translate from about 20 minutes to less than 2. While there were still steps that required string manipulation, moving as many steps to binary as possible led to huge performance gains in an application that required them. The downside is that the code was far more complex, as the developer had to think in bytes to understand what was going on. If our files had been much smaller, and the performance need correspondingly lessened, I might have opted to stay with string manipulation. It would really depend on the level of understanding of my developers.
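For readers unfamiliar with the code-page approach, here is a minimal .NET Framework sketch of the string-stream style the paragraph describes. The code page (IBM037) and the sample bytes are my assumptions, not details from the original project; IBM037 is one common EBCDIC variant, and the right one depends on the mainframe. (On .NET Core/5+, this code page additionally requires the System.Text.Encoding.CodePages package.)

```csharp
using System.Diagnostics;
using System.IO;
using System.Text;

class EbcdicDemo
{
    static void Main()
    {
        // IBM037 is one common EBCDIC code page (an assumption for this sketch).
        Encoding ebcdic = Encoding.GetEncoding("IBM037");

        // For a self-contained demo, write the EBCDIC bytes for "HELLO"
        // (0xC8 = H, 0xC5 = E, 0xD3 = L, 0xD6 = O) to a temp file.
        string inputPath = Path.GetTempFileName();
        File.WriteAllBytes(inputPath, new byte[] { 0xC8, 0xC5, 0xD3, 0xD3, 0xD6 });

        // StreamReader does the code-page translation for us:
        // EBCDIC bytes in, ordinary .NET strings out.
        string translated;
        using (StreamReader reader = new StreamReader(inputPath, ebcdic))
        {
            translated = reader.ReadToEnd();
        }

        Debug.Assert(translated == "HELLO");
        File.Delete(inputPath);
    }
}
```

This is the simple, maintainable version; the binary rewrite the paragraph describes trades this readability for the multi-gigabyte throughput the project actually needed.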


The point I am getting at is that you have to look at the big picture. If you want to learn faster algorithms, I am right there with you. If, instead, you are trying to eke out every ounce of performance from code, strictly for performance's sake, I cannot concur with that decision. In the long run, maintainability costs companies far more than performance. It is far cheaper to buy a few blade servers or add CPUs or memory than it is to load your development team with rockstars.

When you look at your app, you should weigh performance against other aspects. If you find items like string concatenation, which are no-brainers, fix them. But if you want to know whether you should tweak code, you should baseline your current performance and test for expected load before switching all of your file access to WinAPI calls.
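Baselining does not have to be elaborate. A minimal sketch, using System.Diagnostics.Stopwatch with an invented workload, is enough to tell you whether a tweak actually moved the needle:

```csharp
using System;
using System.Diagnostics;
using System.Text;

class BaselineDemo
{
    static void Main()
    {
        // Baseline: time the suspect code under a realistic iteration count.
        Stopwatch watch = Stopwatch.StartNew();
        string s = "";
        for (int i = 0; i < 5000; i++)
            s += i.ToString();
        watch.Stop();
        Console.WriteLine("Concatenation:  " + watch.ElapsedMilliseconds + " ms");

        // Re-run the same workload after the tweak and compare the numbers.
        watch = Stopwatch.StartNew();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5000; i++)
            sb.Append(i);
        watch.Stop();
        Console.WriteLine("StringBuilder:  " + watch.ElapsedMilliseconds + " ms");

        // Sanity check: the tweak must not change the output, only the cost.
        Debug.Assert(s == sb.ToString());
    }
}
```

If the before-and-after numbers are indistinguishable under your expected load, the tweak was not worth the maintainability cost; if they differ by an order of magnitude, you have your answer.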


One Response to Performance is Overrated

  1. Patrick Altman says:

    For more details on EBCDIC and how you can soon start leveraging SSIS to import EBCDIC data directly into a SQL 2005 database with only a single trip through the data visit:
