SourceSafe web projects and referenced libraries

If you have not yet experimented with SourceSafe 2005, it has some neat features that make it much better than SourceSafe 6.0. There are also a few points to watch out for.

We are currently working with a company in Austin on a new product. They worked primarily on the mapping pieces of the application, so we had not been running into each other up to this point. Now that we are working on integration, it seemed easy enough to make SourceSafe 2005 available over the web. Here are a few reasons why this is not as easily done as said.

SourceSafe 2005 web projects are available to people who develop locally by simply adding a web version of the project: you switch Visual Studio to open from the web and create a new web version based on the internal one. Completely external users can connect the same way, as long as you tell them the internal UNC path for the project, since that is how the web service connects. But you have to have projects or solutions for EVERYTHING, as there seems to be no way to hook the Visual SourceSafe client up to a web version of the server (you can only connect through Visual Studio). If I had my druthers, I would be on Team Suite now.

The above may not seem like much of an issue, especially if you follow the normal path of keeping all your library code in App_Code. If you use external libraries, though, you have some issues. The solution is one of the following:

  1. Create a project file that contains the DLLs at the right place in the path for the refresh files
  2. WARNING – KLUDGE (works great but really kludgy). Create a /libraries folder in your web project and point the /bin refresh files to that directory. You will have to remember to exclude this directory when you publish, or you will upload two copies of every DLL.
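For reference, a `bin/*.dll.refresh` file is nothing magic: it is a one-line text file holding the path Visual Studio copies the DLL from on each build. The kludge in option 2 amounts to files like this (the library name and relative path here are hypothetical):

```
bin\MyCompany.Mapping.dll.refresh contains the single line:
..\libraries\MyCompany.Mapping.dll
```

Point every refresh file at /libraries and the IDE happily rebuilds, as long as you remember not to publish that folder.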

By the way, if I had my choice, I would be on Team System completely. I will likely have that option in the next month or so, but we are stuck with SourceSafe right now and have to make it work. Hopefully this helps some of you running into the same issues.

— GB

.NET Tiers Frustration

I hate to start with a disclaimer, but I want readers to know that I have nothing against .NET Tiers. I feel John and his team have done a great job with this open source product. The following was written in the midst of switching from the 2.0 beta to the final version and reflects a small bit of difficulty we have had over the past day. Fortunately, it is easy enough to solve. With that said, here is the original post:

I just recently experienced a whole bunch of frustration. This is normal for me, as I play around with a lot of beta software. One day I will learn … nah, won’t happen … being on the bleeding edge pays too well.

My latest frustration has been with .NET Tiers, an open source template library for CodeSmith. Part of the issue comes from changes from beta to production, which I fully understand. Rules change as developers get deeper into code. But, some of the rules are not consistent.

Now, don’t get me wrong. .NET Tiers makes a good set of classes. There are some things I would have done differently, but it has definitely saved our bacon on this project, so I am happy with the classes. The issue comes with inconsistency in naming. Before getting into the issue, let’s take a step back and look at history.


We have a partner company building a back-end app. They inherited tables from another company, and those tables are all Hungarian in nature (tUser). On our end, we have used a more standard .NET/SQL naming convention. To avoid merging right now (due to lack of bandwidth), we have junction tables in the form Customer_tUser (yeah, a bit of an impedance mismatch, but the tUser table should really be tCustomer).


Now we get to the inconsistency part. .NET Tiers has the following naming scheme:

  • Stored procedures keep their original name: Customer_tUser_Find
  • Objects lose underscores: CustomertUser
  • Providers (Data Access components) go Pascal case and then lose the underscore: CustomerTUserProvider
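To make the scheme concrete, here is a small sketch of the three rules applied to that junction table (in Python purely for illustration; the real generator is a CodeSmith template):

```python
def proc_name(table: str, action: str) -> str:
    # stored procedures keep the original table name, underscores and all
    return f"{table}_{action}"

def entity_name(table: str) -> str:
    # entity objects simply drop the underscores, keeping original casing
    return table.replace("_", "")

def provider_name(table: str) -> str:
    # providers Pascal-case each underscore-separated part, then join them
    parts = table.split("_")
    return "".join(p[0].upper() + p[1:] for p in parts) + "Provider"

print(proc_name("Customer_tUser", "Find"))  # Customer_tUser_Find
print(entity_name("Customer_tUser"))        # CustomertUser
print(provider_name("Customer_tUser"))      # CustomerTUserProvider
```

Note how the entity keeps the lowercase t while the provider capitalizes it, which is exactly where a blind find and replace goes wrong.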

Now, try a search and replace on that. My beta classes are named Customer_tUser, and a blind find and replace to the final CustomertUser breaks the provider references, which then fail to compile in C#. Not a big deal overall. Chalk another one up to beta testing.

Oh, and I guess there is a good reason to use VB.NET after all*.


If you played with .NET Tiers during the beta and have tables with underscores, attempt a compile of your project and fix the errors with a search and replace. If everything does not come clean (a real possibility in C#, which is case sensitive), you will have to fix the casing by hand. The other solution is to stop playing with beta software.
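If you do go the search-and-replace route, the order of the replacements matters, because the bare entity name is a prefix of the provider name. A sketch (Python for illustration; the beta identifiers here are assumed from the Customer_tUser example):

```python
def migrate(source: str) -> str:
    # replace the longer provider identifier first, so the bare entity
    # replacement below does not mangle it into CustomertUserProvider
    source = source.replace("Customer_tUserProvider", "CustomerTUserProvider")
    # whatever is left is an entity reference; watch out for string
    # literals holding stored-procedure names, which KEEP their underscores
    source = source.replace("Customer_tUser", "CustomertUser")
    return source

before = "Customer_tUser u = Customer_tUserProvider.Find();"
print(migrate(before))
# CustomertUser u = CustomerTUserProvider.Find();
```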

Until a bit later.

* VB.NET is not case sensitive, so you do not end up with this issue.

Quality Assurance: The Comfort Syndrome

I sent out a new build of a web application this morning and, just before branching the source tree, was told there was another bug in the site. The bug was minor, but that is not the point. We have been in QA for a couple of weeks, and it seems every time we think we have something completed and are ready to call it a gold build, something else comes along.
This is not a unique problem. In fact, it is quite common. In organizations like Microsoft, the reduction of bugs to zero is not the release point. They, instead, wait for the "zero bug bounce". Shortly after the number of bugs goes to zero, more are found. Why?
My theory is the car theory. Remember the last time you bought a car? Before you bought it, the model you wanted seemed rare. Then you bought one, and suddenly they were everywhere. Either everyone rushed out and bought the car you wanted, or, more likely, you were being myopic and not seeing what was plainly in front of you.
The same is true for QA testing. When a certain type of bug is found, every single page (web site) or form (Windows app) is checked for that type of bug, and rechecked, ad nauseam. At the same time, other types of bugs are missed. Myopia strikes. After all the bugs of one type are wiped out, the fog lifts and other types of bugs are found. The cycle repeats until you either wipe out enough bugs to have a quality product (not perfect, but perhaps excellent) or you run past the skill of your QA staff.
The more experienced your QA staff, the more likely you are to limit the number of cycles, as experience cures some of the myopia. But it has been a fact everywhere I have worked that more bugs will be found after you think you have every bug wiped out.
A few suggestions for fewer bugs and higher quality:
1. Have written standards, including User Interface checks, constraints and validations.
    a) If a field in the database is 100 characters long, make sure the user cannot enter more than 100 characters
    b) If the field is a date field, ensure the textbox accepts only valid dates
2. Work with testing frameworks.
     a) NUnit and Team Test are both good for unit testing
     b) Use mocks in true unit tests
     c) Fit makes a nice acceptance framework
3. Aim for 100% test coverage – there are cases where you will not achieve it (serialization of custom exception objects comes to mind), but make the attempt. It pays off.
4. Write a confirmatory test for EVERY bug you find
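As an illustration of items 1 and 4 together, a confirmatory test for the length and date constraints might look like the sketch below (Python rather than NUnit/Team Test, and the validate function is hypothetical, standing in for your own validation layer):

```python
from datetime import datetime

MAX_NAME_LENGTH = 100  # matches the column length in the database

def validate_customer(name: str, signup_date: str) -> list:
    """Hypothetical validation routine standing in for the UI-layer checks."""
    errors = []
    if len(name) > MAX_NAME_LENGTH:
        errors.append(f"name exceeds {MAX_NAME_LENGTH} characters")
    try:
        datetime.strptime(signup_date, "%Y-%m-%d")
    except ValueError:
        errors.append("signup_date is not a valid date")
    return errors

# confirmatory tests: one per bug found, so the bug cannot silently return
assert validate_customer("Bob", "2006-05-01") == []
assert validate_customer("x" * 101, "2006-05-01") == ["name exceeds 100 characters"]
assert validate_customer("Bob", "not-a-date") == ["signup_date is not a valid date"]
```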
Until next time.
— GB

Why ASP.NET 2.0 Sucks

I love ASP.NET 2.0. It gives me so many tools that help me do my job. But, I still think it sucks, or rather, some of the ideas placed into the framework for ASP.NET 2.0 suck.
1. App_Code
App_Code makes your job easier. You add your classes there and they compile with your web app. But it comes at a cost. App_Code still compiles as if it were a separate library, so when it fails to compile, its errors land at the bottom of the error list, while the top of the list fills with cascading errors from pages that cannot resolve App_Code types. Those errors cannot be solved until the App_Code errors are. The solution is to put your classes in a library and compile it separately.
2. Fragile web.config
As more and more items get shoved into web.config, you end up with too many things that blow up your application. And, in many cases, the errors do not bubble up. This leads to an important rule when developing with ASP.NET 2.0: If you change config, compile before you do anything else. 
3. The box is too small
What I mean here is that the provider model is cool, but if you use it for anything slightly different from the ASP.NET samples, you can pretty much hose yourself. The solution, of course, is to create a custom provider and add the items there. You can try to bend the built-in providers instead, but you will end up with too many bugs.
4. The IDE freezes up
When you add a lot of flashiness to an IDE, you end up creating bugginess. While the features in ASP.NET 2.0 are miles above 1.1, the IDE is, in many ways, a step back. It uses a huge amount of memory and tends to freeze, lock up or even crash with more regularity than Visual Studio .NET. I have yet to find a way around this. NOTE: This is far worse with web apps that reference other libraries than with straight ASP.NET apps.
5. Crap in the web.config
When you create a web app (at least a VB.NET web app, which I am forced to do right now ;->), some useless references are placed in the config file. Okay, they are not useless from an IDE standpoint, but they fry the production app. The way around this is to create a build script that strips out the extra crap that is not necessary in production. What is extra crap? Anything that causes your system to blow chunks when you publish and copy up.
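A sketch of such a build step follows, in Python rather than a real MSBuild/NAnt task. The element paths in DEV_ONLY are hypothetical examples of "extra crap"; swap in whatever actually breaks your production box:

```python
import xml.etree.ElementTree as ET

# hypothetical dev-only nodes to strip before publishing; adjust to taste
DEV_ONLY = ["./system.web/trace", "./system.web/compilation/assemblies"]

def clean_config(text: str) -> str:
    """Return a production-ready copy of a web.config string."""
    root = ET.fromstring(text)
    comp = root.find("./system.web/compilation")
    if comp is not None and comp.get("debug") == "true":
        comp.set("debug", "false")  # never ship a debug build
    for path in DEV_ONLY:
        # ElementTree can only remove a child via its parent
        parent = root.find(path.rsplit("/", 1)[0])
        node = root.find(path)
        if parent is not None and node is not None:
            parent.remove(node)
    return ET.tostring(root, encoding="unicode")
```

Run it over web.config as the last step before the copy-up, never against the copy the IDE uses.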
6. The model is flat
If you want the fewest number of problems, bury all of your code in your ASP.NET pages, publish the source to the server, and let the compile-on-the-fly model do its thing. Not an option for the timid, but it poses far fewer problems. It is almost like Microsoft chose to dumb down the product in this release.
Please understand that I love ASP.NET 2.0 far more than the 1.x implementations. I am mostly writing because there are issues I would like to see fixed in Orcas, so my experience is much nicer. Of course, some of these are my own pet peeves, and since I am advanced enough to get around them, while a newbie might not be, I will probably have to suck it up … again.