Update on VHD Booting to Windows Server 2008 R2

If you read my last post, you saw I was going to set up the pre-made Visual Studio 2010 beta VHD to boot on my Windows 7 machine. Well, there is no way to make it happen. Unfortunately, the Microsoft-made VHD contains Windows Server 2008, not Windows Server 2008 R2, and only Windows 7 and Windows Server 2008 R2 support booting from a VHD.

I guess this means I have to create my own VHD. I was hoping to avoid this, as it is very time-consuming. Yeah, I know. WAH!!!

Peace and Grace,

Twitter: @gbworld

VHD Booting Windows Server 2008 R2 in Windows 7

There is a new article with particulars on my main machine. You can find it here.

I just found another thing to love about Windows 7. With Windows 7, you can boot from a Virtual Disk. I wish I had known this a few months ago when I opted to completely toast my machine to run Windows Server 2008 R2 so I could play with SharePoint and Team System.

Here are the steps outlined (I will likely edit this, as I am going to redo things tonight now that I have learned some things about this scenario). I am going to boot from the downloadable VHD with Visual Studio 2010 and Team Foundation Server installed.

NOTE: Items that have brackets, {}, around them are placeholders. You will need to fill in your own information. For example, {VHD file location} indicates you have to point to your own VHD file. I would normally use a different notation, but one of the command line tools uses brackets for one of its command parameters.


There are many paths you can take, some of which have no requirements other than Windows 7 (or Windows Server 2008 R2, if you wish to use it as your “host” OS).

Creating the VHD

The tools you need really depend on how you are going to accomplish the install. The first step is creating a VHD, and there are two ways to go about it:

  1. Create a Virtual Disk using VirtualPC
  2. Create a Virtual Disk using diskpart

Of the two options, diskpart is the quickest. Here are the steps:

  1. Open a Command Prompt (Type Windows Key + R then type cmd and press Enter); note that diskpart requires administrator rights, so accept the elevation prompt if one appears
  2. At the command prompt type diskpart and then press Enter
  3. Type the following commands to create the disk and attach it
    1. create vdisk file={file location} type=fixed maximum={max size in MB}
    2. select vdisk file={file location}
    3. attach vdisk
    4. exit

I generally end up with a fairly large disk size, as I need Team Foundation Server, et al, added.

Setting up the VHD

The next step is to set up the disk, unless you are downloading a VHD to boot from or already have one created. Once you have the vdisk attached, you can use diskpart to set it up:

select vdisk file={VHD file location}
   selects the virtual disk you wish to work on
list disk
   lists the disks so you can find the VHD disk
sel disk {disk number}
   once you find the disk, this selects it
create part primary
   creates the primary partition
sel part 1
   selects the partition so you can work on it
active
   marks the partition as active
format fs=ntfs quick
   quick formats the partition as NTFS
assign
   assigns a drive letter

If you follow all of these steps, the partition is active, assigned and formatted.


  • You do not have to use a fixed type drive, but I recommend it for demoing, as a fixed disk gives you a bit more control over fragmentation
  • You have to format the disk before installing an image. If you are doing a full install from media, you can format during the install instead.

Loading Windows Server 2008 R2

This portion can be done a variety of ways as well.

Download Windows Server 2008

UPDATE: Does not work, as the VHD image is Windows Server 2008 and not Windows Server 2008 R2.

Install from Disk

The easy way to do this seems to be just installing to the disk from a Windows Server 2008 R2 disk, as it adds everything to the boot menu for you. It is also the most time consuming, as you are running a full install. Here are the steps:

  1. Insert install disk
  2. Choose VHD drive

Can’t get much simpler, but you will wait for the entire install to complete. If the VHD is not bootable, you will get a warning saying so. This does not seem to be a big deal, as the disk appears to be set active and the boot entries are added.

NOTE: If you have not attached the disk, you can install the OS with VirtualPC.

Install from .wim file

To do this quickly, you use ImageX, which is part of the Windows Automated Installation Kit (AIK). You will need the install.wim file from either Windows media or from an installation. If you need to create a custom .wim, you can use this web page. You will need the drive attached and assigned a letter to accomplish this.

In order to do this properly, you need to know the Index for the Image. If you already know this, you can use the number. If not, you can get it by running the following command:

imagex /info {.wim file}

The index is contained in the IMAGE tag:

    <TOTALBYTES>{size here}</TOTALBYTES>
    <IMAGE INDEX="{index}">

To install, you use the following:

imagex /apply {.wim file} /check {index} {assigned drive letter}

This should take a few minutes. On my machine it is less than 10 minutes, but I would imagine it can take longer on a slower machine.

Adding Entries to Boot to VHD

If you have attached the disk image, you need to detach it now from a command prompt.

DISKPART> sel vdisk file={path to VHD}
DISKPART> detach vdisk

If you downloaded an image or created one via another means, you are fine. The next steps work with any VHD containing a compatible OS (i.e., Windows 7, Windows Server 2008 R2, or a future version). If you installed directly to the VHD, you don’t have to follow these steps, as your boot is already configured.

Before doing anything, type in bcdedit and hit the Enter key. This gives you a look at the boot loader entries. You should have one for Windows Boot Manager (not important to us in this exercise) and one for Windows Boot Loader.

First, let’s copy the boot load section for the current install of Windows 7. Type in the following:

bcdedit /copy {current} /d "{alias for VHD}"

NOTE: In the above command {current} is part of the command, not a placeholder for your information. The alias is something you have to configure, however.

Now type bcdedit again and you will see two Windows Boot Loader entries. Find the one with the alias you created in the last step. You will need its GUID (identifier) to do the next steps.

Run the following to set the file to boot from VHD instead of being another normal C drive entry:

bcdedit /set {GUID} device vhd={path to VHD file}
bcdedit /set {GUID} osdevice vhd={path to VHD file}
bcdedit /set {GUID} detecthal on

When you restart your machine, you should see both Windows 7 and your alias as boot choices.

Final Notes

I have used virtual machines for years to beta test. Unfortunately, running in VirtualPC can be very SLOOOWWWW! This option allows you to use a VHD as a bootable disk, which is quite a bit faster than running in VirtualPC. It is still not as fast as running the OS natively.

There are a couple of things you should note with these options, as they may affect your demonstrations, if that is your reason for doing this:

  • Hibernate will not work in the VHD boot image. This means you have fewer power options.
  • BitLocker does not work on the VHD boot image. This should not be an issue unless you are using this method to set up numerous boots and storing sensitive information. I keep that on my main OS personally. 🙂

I will follow up in this post once I have my image completely set up and running with the VHD for Visual Studio 2010.

Peace and Grace,

Twitter: @gbworld

Easing Childhood Cancer Treatment Symptoms

  • I have not written on this subject in this blog, although we have detailed much of the information on Miranda’s site.


I wanted to give out this information as-is. It is not scientifically sound, as a study of one is not statistically significant. Perhaps over time we will amass enough cases that we can push for a study on some of these items.

What I describe are complementary treatments, or treatments to be used in conjunction with standard oncology treatments. They are not alternative treatments, meaning treatments used in lieu of standard treatments. Here are two things to keep in mind, one for each side of the decision on complementary treatment options:

  • Always talk over any complementary treatment with your doctor, as there are supplements that interfere with chemo (selenium and cisplatin come to mind)
  • Conversely, when your doctor states “there is no evidence that suggests it works,” understand that there is generally no evidence that it does not, either.

    NOTE: If the current system persists, it is unlikely there will ever be evidence that complementary treatments are effective, as complementary treatments are not generally tested in clinical trials. There are currently 63 trials for antioxidants with cancer, but most are aimed at abating symptoms, and none deal with antioxidants given at the same time as chemo. On the other hand, there are 354 trials for actively using trastuzumab (aka Herceptin) in cancer treatment – 63 for antioxidants versus 354 for one drug.

We found nothing in what I write about here that goes counter to standard treatment, nor did Miranda’s primary oncologist. We also found great success with each of the items mentioned. But, as I will state over and over again, a study of one is not a scientific study. At best it is anecdotal evidence. If I had to do this over again, I would have started these methods earlier to completely avoid the pain.


In September of 2007, Miranda was diagnosed with Ewing’s Sarcoma, a rare childhood cancer, at the age of 3. Over the next month, we experienced both the horror of cancer and the horror of the treatments used to fight cancer. Once we realized that the doctors’ main, if not sole, job was to kill the tumor, we felt it necessary to research options to alleviate some of the pain of the treatments.

Miranda’s treatment regimen was 7 cycles of two alternating rounds of chemo:

Round 1 (VinCAid)

  • Vincristine (the Vin in VinCAid)
  • Cyclophosphamide (the C)
  • Doxorubicin (Adriamycin, thus the Aid)

Round 2 (IE)

  • Ifosfamide
  • Etoposide

Many of the chemo agents above can cause severe gastro-intestinal problems. In addition to the above, she was given prophylactic doses of antibiotics to keep infection at bay, killing off much of the gut flora.


Mucositis

First, what is mucositis? Mucositis, in simple terms, is swelling and the formation of ulcers in the mucous membranes along the digestive tract. Essentially, it is chemo tearing up the lining of the entire digestive tract. One teen boy described it like “getting a potato chip stuck in your throat,” but with the pain being thousands of times worse and no way to wash it down.

During Miranda’s first round, she experienced horrible mucositis. So much so, we could not get her to eat. The two most startling symptoms were:

  • A sore that covered almost her entire tongue (imagine a canker sore this size and you get the idea)
  • Vomiting up what appeared to be giant spiders (from ulceration of the throat and stomach)

The doctors gave us mouthwash and pain meds, neither of which seemed to offer much relief.

After the first round, we determined something had to be done and found L-Glutamine. After putting her on L-Glutamine, most of the symptoms disappeared and we certainly never had a round anywhere near as serious as the above.

Our method of choice was mixing L-Glutamine in with her juice and having her drink it. At the time, there was no way to get her to swallow pills, so it was necessary to find something she could drink. You can buy L-Glutamine in powder form and give a scoop once or twice a day to abate symptoms.

The idea that L-Glutamine works on mucositis was noticed by the NCI, who started a clinical trial, which was later withdrawn due to lack of funding. I am sorry to see the trial cancelled as it would be interesting to see if others had the same success we had.

Clostridium Difficile (or C Diff)

C Diff is an opportunistic bacterium that creates a whole host of problems for the cancer patient. You first notice it by its unique odor, which is unmistakable. It is generally accompanied by very bad bouts of diarrhea. It can be fatal if not treated. The solution to C Diff, from a medical perspective, is to give very high doses of strong antibiotics, like Vancomycin and Gentamicin. In Miranda’s case, these led to some rather severe kidney problems. In my book, that makes C Diff a good thing to try to avoid.

We went through three rounds of C Diff early on in the treatment. We then started researching and found that probiotics showed promise against C Diff. C Diff appears to be a rather weak bacterium compared to the natural gut flora. But it is also opportunistic and grows like wildfire when the gut flora is killed off. This is common when you use antibiotics as readily as they are used with a child with cancer.

After starting probiotics, we never saw another C Diff infection. We found out later that the main oncology transplant doctor at the hospital prescribes probiotics for his patients.


Other Supplements

I don’t have the details on what we used when Miranda had her kidney problems, so I don’t want to write about it here, as incomplete information is useless. We also used a variety of supplements to combat potential future heart problems, including fish oil (anti-inflammatory) and CoQ10 (antioxidant), although we did not use any antioxidants during the actual chemo drip, as doctors state it will render chemo ineffective (there are animal studies that refute this, but none in humans).

Peace and Grace,

Twitter: @gbworld

Saving an XML file from an XML Query in SQL Server

This is a short one that was spurred by the following question:

Any idea how to export an xml document from ms sql 2008 using vb.net into a
folder some where? I am running an SP that generates the xml already.

The answer is divided into a bunch of sections.

Streams and EndPoints

We have to go back to my previous entry on streams and endpoints (see Understanding Data in .NET). If you read it, you remember that endpoints are where data is persisted and streams are devices to get data from one endpoint to another. Examples of endpoints in this question:

  1. SQL Server
  2. XML file on the file system

I am not going to reiterate the entire article, so read it if you need more info.

XML in SQL Server

If you want to play along, download Adventure Works. I am simply using the Address table in the Person schema. I am running SQL Server 2005 on this particular machine, although I have SQL 2008 R2 on my home machine. It works on any of these (SQL 2005, SQL 2008, SQL 2008 R2).

To select, I have a couple of choices with the FOR XML clause. If I want elements, I might do the following:

Code Snippet

select TOP 1 *
from Person.Address
FOR XML PATH('Person'), ROOT('Doc')

This produces XML in elements:

    <AddressLine1>1970 Napa Ct.</AddressLine1>

Or I could choose to use XML RAW:

Code Snippet

select TOP 1 *
from Person.Address
FOR XML RAW, ROOT('Doc')

This creates attributes, but uses the generic name “row”:

  <row AddressID="1" AddressLine1="1970 Napa Ct." City="Bothell" StateProvinceID="79"
     PostalCode="98011" rowguid="9AADCB0D-36CF-483F-84D8-585C2D4EC6E9"
     ModifiedDate="1998-01-04T00:00:00" />


Or I could use XML AUTO:

Code Snippet

select TOP 1 *
from Person.Address
FOR XML AUTO, ROOT('Doc')

This produces the element name Person.Address, which SQL Server derives from the schema and table name:

  <Person.Address AddressID="1" AddressLine1="1970 Napa Ct." City="Bothell" StateProvinceID="79"
      PostalCode="98011" rowguid="9AADCB0D-36CF-483F-84D8-585C2D4EC6E9"
      ModifiedDate="1998-01-04T00:00:00" />

There is also FOR XML EXPLICIT for more complex examples, but I end up having to create the “map” of the elements/attributes to use this format. The method is not important to this post. I just show these examples to give an idea of what SQL Server does.

NOTE: Each of these examples has a full XML document. If I merely want snippets I drop the ROOT portion of the statement.

Getting the Data

There is no reason to format this data into objects, so we can pull it with a SqlDataReader. Here is a very simple pattern:

Code Snippet

string connectionString =
    "server=(local);database=AdventureWorks;Integrated Security=SSPI;";
string sql =
    "select TOP 1 * from Person.Address FOR XML PATH('Person'), ROOT('Doc')";
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand cmd = new SqlCommand(sql, conn);
try
{
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
}
finally
{
    conn.Dispose();
}

The reader now contains the data. Of course, the above snippet does nothing other than set things up.

Outputting XML

If all we want is a file, then we could do this with a StreamWriter. Something like this:

Code Snippet

string connectionString =
    "server=(local);database=AdventureWorks;Integrated Security=SSPI;";
string sql =
    "select TOP 1 * from Person.Address FOR XML PATH('Person'), ROOT('Doc')";
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand cmd = new SqlCommand(sql, conn);
try
{
    conn.Open();
    SqlDataReader reader = cmd.ExecuteReader();
    // The @ prefix keeps the backslashes from being treated as escape sequences
    StreamWriter writer = new StreamWriter(@"C:\projects\test.xml");
    while (reader.Read())
    {
        writer.Write(reader[0]);
    }
    writer.Dispose();
}
finally
{
    conn.Dispose();
}

Fairly straightforward. This will not check the validity of the XML, but we are fairly confident it is valid as is. There are enhancements we could make if we needed to validate the XML, etc. We could also move into other readers and writers.
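For instance, here is a sketch of my own (not part of the original answer) using SqlCommand.ExecuteXmlReader paired with an XmlWriter; the writer checks well-formedness as it streams. The connection string, query, and output path are the same assumptions as in the snippets above, and this still requires a live SQL Server with Adventure Works loaded.

```csharp
using System.Data.SqlClient;
using System.Xml;

class XmlExport
{
    static void Main()
    {
        string connectionString =
            "server=(local);database=AdventureWorks;Integrated Security=SSPI;";
        string sql =
            "select TOP 1 * from Person.Address FOR XML PATH('Person'), ROOT('Doc')";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            // ExecuteXmlReader exposes the FOR XML result as an XmlReader
            using (XmlReader reader = cmd.ExecuteXmlReader())
            using (XmlWriter writer = XmlWriter.Create(@"C:\projects\test.xml"))
            {
                // WriteNode streams everything from the reader to the writer
                writer.WriteNode(reader, true);
            }
        }
    }
}
```

Nothing mandatory here; the StreamWriter version above works fine. This variant simply fails fast if the XML is ever malformed.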

The main takeaway here is that the reader is effectively streaming data from the database and the writer is taking that stream of data and writing it out. I wish I had a bit more time and could play with this some more, but this solves the problem at hand and alleviates having to explain XML.

Peace and Grace,

Twitter: @gbworld

Win a Copy of Expression Web 3.0

If you are a web designer and would like a chance to win a copy of Expression Web 3.0, my friends at frontpage-to-expression.com are giving away a copy. You can enter here from now until January 15, 2010. They promise another contest around Easter 2010. Good luck!

Peace and Grace,

Twitter: @gbworld

December NNTP Client Bridge Beta Released to Public

If you like using the Microsoft forums but prefer an NNTP client to interact with them, the NNTP Client Bridge is now available for everyone. The product is still in beta, but it is essentially a proxy that connects an NNTP client, like Windows Live Mail, to the Microsoft forums. You can download it from Microsoft Connect.

If you have not used the NNTP Client Bridge yet, read this post. You will have to have the client bridge started prior to attempting to post, as it is a proxy. If it is not running, you will end up with a failure.

Peace and Grace,

Twitter: @gbworld

Update on Miranda

For those who do not follow Miranda on her site, but have been asking about her, here is an update.

First, a cute picture taken at the Macy’s breakfast for Make-A-Wish. Here is Miranda with Santa:

Full Page: http://www.flickr.com/photos/20449779@N06/4203735100/in/set-72157622918529097/

As you can see, other than the short hair, she looks like any other five-year-old girl. And since some mothers do cut their girls’ hair short, there is not much to show here.

She had her last scans the first week of December and she is still NED (No Evidence of Disease). The next scans are in April, if I remember correctly. We have moved out the schedule a bit, which is nice. One and a half years down and three and a half to go. She is now at the point when most children relapse, so the tension will be going down after her April scans.

Here is one of all of the girls:

The Beamer Girls
Full Page: http://www.flickr.com/photos/20449779@N06/4203742546/in/set-72157622918529097/

Peace and Grace,

Twitter: @gbworld
Miranda’s site: http://www.caringbridge.org/visit/mirandabeamer

Application Architecture: The Aspects of Development

This is the first in what is likely to be a long series of entries, as there are many introductory topics that people are missing when it comes to developing quality software. This particular post comes from an idea I have for a book on getting people focused on application architecture (or designing applications, if you like to think in those terms) and development.

At the Nashville .NET User’s Group Christmas party, I got into a conversation with John Kellar about application architecture. He stated that the important things in application architecture can go by the acronym ARP, which stands for Availability, Reliability and Performance. I don’t disagree with John on these, as these are 3 noticeable points of “failure” in applications. By noticeable, I mean these points are areas of concern for end users as they manifest themselves very clearly in the user interface.

I understand what John was saying, and we agree on application architecture to a very large extent. But many may see the P in ARP as a call to ensure every routine runs in the fewest number of milliseconds, no matter how complex the code gets. Or the A as making sure every web app has a web farm of servers supporting it. And this takes us to a very myopic view of the architecture, or design, world.

I have yet to come up with an acronym for the entire picture. When I look at development, I see a wheel, like this:


I like using a wheel as pictured above, as it makes for a nice analogy. Like balancing the tires for a smooth ride, you will have to weight the different aspects for smooth application development, deployment and maintenance. Improper weighting leads to a bumpy ride. In worst case scenarios, it leads to a flat tire.

Here is a breakdown of what the aspects (of development) on the wheel mean (in alphabetical order, not order of importance – more about order of importance later):

  • Availability: The ability to get to the application to use it.
  • Extensibility: The ability to add new features.
  • Maintainability: The ability to change the code base easily.
  • Performance: The ability to run quickly.
  • Reliability: The ability to reliably serve data and persist alterations to data back to the data store.
  • Scalability: The ability to add additional users without breaking the application
  • Security: Ensuring only properly authenticated users can use the application and only see data they are authorized to see.

There is no one size fits all answer to what you put the greatest focus on. Just like a wheel on a car, you add weight to different parts of the wheel to balance it. Why not focus on everything? The problem is you can’t. Examples:

  • Scalability often requires moving application bits across multiple servers, but server boundaries introduce network lag and decrease performance.
  • Performance often requires very tight algorithms that only advanced developers understand, but complex algorithms make a solution less maintainable as you have to keep advanced developers on staff for maintenance (where they generally DON’T wish to be).

Performance is Overrated

I am possibly going to get in trouble for this statement, but stick with me until you finish this section before firing off comments. First, a qualifier. When I talk about performance being overrated, I am talking about two things:

  • In most apps (Google excluded?) you have more than enough horsepower in your system to run the application with a simple algorithm
  • In choosing between algorithms, if both are equal in complexity, then performance IS NOT overrated

The statement is a generalization from my years answering questions in forums. Many, if not most, of the questions in forums relate to one of two types of questions:

  • Can you solve my problem? This generally means “fix the problem the way I am trying to solve it” rather than “fix the problem”
  • Which of these runs faster?

The focus on performance is so heavy that people are often creating very complex algorithms to compare against simple constructs. When someone asks “should I run a binary compare or convert to strings first” I will ask “do the people maintaining the application understand binary comparison?” If yes, then go for the faster algorithm. If no, then you have to weigh if using a binary compare is necessary to achieve your performance goals (in one application I worked on many years ago, the answer was yes, despite the learning curve associated).
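To make the trade-off concrete, here is a small sketch (my own illustration, not code from any forum thread) of the two approaches to comparing byte arrays in C#. Both give the same answer; the question is which one the maintenance team can read:

```csharp
using System;
using System.Linq;

class CompareDemo
{
    static void Main()
    {
        byte[] left = { 0x01, 0x02, 0x03 };
        byte[] right = { 0x01, 0x02, 0x03 };

        // Simple construct: convert to hex strings and compare them
        bool viaStrings = BitConverter.ToString(left) == BitConverter.ToString(right);

        // Direct construct: compare the raw bytes without any conversion
        bool viaBytes = left.SequenceEqual(right);

        Console.WriteLine(viaStrings); // True
        Console.WriteLine(viaBytes);   // True
    }
}
```

The direct comparison avoids allocating two throwaway strings, but if the team does not read byte-level code comfortably, the string version may be the better weighting.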

I should also state that on a personal level, you should learn complex algorithms. They can set you apart from the competition when you try to get jobs, are going for a raise, etc. I just don’t buy the “we have to use the fastest algorithm” argument in all applications. In fact, I would say I am generally against complexity except when needed.

Here are a couple of things I know to be true:

  • Processors will get faster in the future
  • Memory will get faster in the future
  • The price of new technology goes down over time, after the R&D expenses are recouped
  • Memory, processors and even servers cost less than rock star developers

There are times when you will have to use complex algorithms, or even go to a C++ (or C) DLL to get maximum performance for your application. When you do, I suggest encapsulating the routine in its own library, where it can be maintained separately from the solution. This is not always possible, but it is a good rule of thumb. And, if you program for Google, you might be writing complex algorithms most of the time. For the rest of you, weigh out your performance needs.

A quick story. In 1998, I worked for a company as a web developer (title: webmaster). The company was trying to determine the direction to go with development and I was asked to put together a brief presentation on the route I would suggest. I chose Visual Basic as the direction. My reasoning:

  • There were tons of VB developers in Nashville (resource intensive)
  • The cost of VB developers was lower than other languages (except perhaps mainframe programmers)
  • The VB skillset translated well to ASP, which was our preferred web framework

When I presented the findings, one developer fought for C++. His reasoning? Performance. The stalemate was solved by paying a consulting company about $100,000 to determine the best course of action. They recommended VB. While I did not think of it at the time, here was our situation:



  • The red line is the slowest the app could run.
  • The green line was the max we felt the app would need to scale over the next year (very aggressive number)
  • The C++ figures were extrapolated from an app we migrated from C++ to ASP, so they are not necessarily accurate

As you can see, C++ performed much better. From a sheer performance aspect, it is the logical choice. But when we look at the requirements of the application, C++ was not worth the cost, as the VB application performed well under the bar for the next year. Realize also that we could have chosen a beefier application server or a web farm before choosing C++ and incurring the extra expense for developers.

Now, there are times when an algorithm choice is sane, even if you are not focused primarily on performance. Consider the following:

Code Snippet

// First version: multiply via repeated addition
// (note: as written, this only handles non-negative values of b)
public int Multiply(int a, int b)
{
    int returnValue = 0;
    for (int i = 0; i < b; i++)
    {
        returnValue += a;
    }
    return returnValue;
}

// Second version: use the multiplication operator
public int Multiply(int a, int b)
{
    return a * b;
}
The second algorithm is superior to the first, from both a performance standpoint and a maintainability standpoint. I would not choose the first, even though it solves the problem. But if I had to choose between lambda expressions and LINQ query syntax, I might default to LINQ, despite a slight performance penalty, as it is easier to teach LINQ than lambda expressions. Perhaps not the best example, as I love lambdas, but I think you get the point.
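As a quick sketch of the kind of choice I mean (my own example), here is the same filter written in LINQ query syntax, which is easy to teach, and in method syntax built directly from lambda expressions:

```csharp
using System;
using System.Linq;

class LinqVsLambda
{
    static void Main()
    {
        int[] ids = { 1, 5, 12, 7, 30 };

        // LINQ query syntax: reads almost like SQL
        var queryForm = from id in ids
                        where id > 6
                        orderby id
                        select id;

        // Method syntax: the same query built from lambda expressions
        var lambdaForm = ids.Where(id => id > 6).OrderBy(id => id);

        Console.WriteLine(string.Join(",", queryForm));  // 7,12,30
        Console.WriteLine(string.Join(",", lambdaForm)); // 7,12,30
    }
}
```

Either form works; the weighting question is which one the people maintaining the code will read most easily.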

DEVELOPER NOTE: The difference between the two algorithms is largely based on the size of the second number. If you are dealing with small numbers, the difference between the two is negligible, so the choice is largely a maintainability choice (and perhaps reliability, as complex algorithms are more error prone).

The Most Important Aspect

As I mention above, the application determines where you put the weight(s) to balance the wheel for a smooth ride. But, if I had to choose one aspect to focus on, it would generally be maintainability. Maintainability costs business more than any of the other aspects. This goes back to “programmers cost more than servers”, but also deals with the fact that an application spends more of its lifecycle in maintenance than development.

To understand the mindset of the majority, let’s look at a statement I have heard many times in my career (this is a paraphrase, the words change):

A simple problem has a simple solution
A complex problem has a complex solution

I see this a bit differently, as I tend to start with maintainability and then adjust based on the application needs:

A simple problem generally has a simple solution
A complex problem generally has one or more simple solution(s)

Does this mean I never write complex code that requires a rock star to maintain it? Certainly not. There are times that require very complex algorithms to fulfill the requirements of the application. In general, however, highly complex algorithms are not necessary.

Back to The Aspects

When I mentioned John’s concept of ARP, I said these were external concerns, or those related to the user interface. Here is a brief breakdown of all of the aspects when viewed from different people. I am using the typical concerns in the order I see them and not necessarily the order I feel they should be placed in.


The main takeaway here is ARP is very important in holding and keeping customers. A site that is unavailable, unreliable or slow must be extremely compelling to keep users using it. If it is an internal application, you have a bit of a fudge factor, as users HAVE to use the application, but these concerns should be weighed in. On the other hand, you need to worry about other aspects as well. As the architect, you have to think of all of these four groups and weigh which concerns are the most important. You never neglect any of the aspects, but like truing a tire, you weight them properly for a smooth ride.


Here are a few things you should get from this blog entry:

  • There are many aspects to development
  • Each application is different, so you have to weight the aspects of development for each application.
  • Keep the skill set of the maintenance crew in mind as part of your weighing effort
  • Your personal focus will often depend on which role you are playing at the time, but be mindful of other roles (esp. as an architect)
  • Rule of thumb: Only go as complex as necessary to solve the problem projected out for the lifetime of the servers involved
  • Rule of thumb: When in doubt err on the side of maintainability – this will buy you more from management in the long run, although you often have to summarize the ROI to get the kudos
  • Key point: Improper weighting of development aspects leads to a bumpy ride.

Hope this helps.

Peace and Grace,

Twitter: @gbworld

Future Topics Planned (will add links to topics when done):

  • What is an Application?
  • What are the Best Requirements Documents?
  • The Software Development Lifecycle
  • What Development Methodology Should I Use?

Cloak and Dagger Health Care (HR 3590)

The Senate agreed at 1 AM to close the debate on HR 3590, with a schedule of passing the legislation on Christmas Eve. The bill passed on party lines, and was negotiated with the Republicans locked out of the room. Many news outlets herald this bill as a compromise bill, but is there really any compromise when only one side is sitting at the table?

Unfortunately, there is no unified document for the full 2,400-page bill yet, and I doubt the American public will see the legislation before it passes the Senate. A few things are known about the bill, or at least about the provisions that have not been amended, according to my reading of the amendments.

  • There is no public option (at least not a true government run option)
  • Congress will not have to accept the same health care being forced on us
  • The CBO estimates are worthless, as the Medicare reductions for payments have been stripped from the bill. There is no update on the true costs of the bill, nor will there be until after it passes.
  • Significant concessions were made to win the agreement of key Democratic and Independent senators to cloture
    • Louisiana gets an additional $100 to $300 million for Medicaid (prior to the November 20th vote)
    • Vermont and Massachusetts will get over a billion dollars in additional Medicaid funds
    • Montana, South Dakota, North Dakota, Utah and Wyoming get new, higher reimbursement rates for doctors taking Medicare. In addition, Montana gets Medicaid for mine workers in Libby, MT.
    • Florida, New York and Pennsylvania got guarantees on no cuts for their seniors on Medicare
    • Nebraska gets permanent federal aid for the expanded Medicaid provisions created in the state
  • Health insurance premiums will rise sharply next year, as “fines” (fees) on the insurance industry are enacted for all policies sold after December 31, 2008. The “fines” amount to about 10% of the health insurance industry’s profit, meaning 20% in a single year (as 2009 has already been paid for). It is interesting that many people will feel this spike about the same time their COBRA subsidies end. The CBO estimates the cost of health care will be lower by 2016, but how many will feel the pinch before then?
  • Medical costs will increase with new “fines” (fees) on medical equipment manufacturers and drug companies. These are also retroactive to all devices/drugs sold after December 31, 2008.
  • The average American will receive no benefits from this legislation until January 1, 2014.

I can’t help but agree with McConnell, who stated “Make no mistake: If the people who wrote this bill were proud of it, they wouldn’t be forcing this vote in the dead of night”. It seems almost like a sick spy novel when Washington is doing its major business at times when the public cannot contact their elected officials and tell them what they think about it.

Reid plans on having the Senate pass the bill by Christmas Eve, even if it requires keeping Congress in session all night. When I see this bill as my Christmas gift from the government, I wonder what I did so bad that I have been put on the naughty list.

Peace and Grace,

Twitter: @gbworld

Caroline Pryce Walker Conquer Childhood Cancer Act Funding Update

If you have not seen my earlier post on the Caroline Pryce Walker Conquer Childhood Cancer Act (entitled The Caroline Pryce Walker Conquer Childhood Shill Game), this post is a follow-up to that one.

The latest news for 2010 is as follows, according to Cure Search:

  • $3 million for the CDC to set up a childhood cancer registry
  • $1 million for HHS to provide outreach, resource, and program services for children with cancer and their families
  • $1.6 million for pediatric cancer research in the defense appropriations bill (not sure why it is there)
  • $4 million extra in the NCI budget for childhood cancer

Funding expected for 2010

$30,000,000 promised
$  9,600,000 delivered (32%)
$20,400,000 short (68%)

Funding expected to date

$60,000,000 promised
$  9,600,000 delivered (16%)
$50,400,000 short (84%)
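The shortfall percentages in the two tables above can be checked with a quick sketch (reading the $1 and $4 line items as millions, which the $9.6 million total implies):

```python
# Verify the shortfall percentages cited in the funding tables above.
delivered = 3_000_000 + 1_000_000 + 1_600_000 + 4_000_000  # CDC + HHS + defense bill + NCI

for label, promised in (("2010", 30_000_000), ("to date", 60_000_000)):
    pct_delivered = delivered / promised * 100
    print(f"{label}: ${delivered:,} of ${promised:,} "
          f"({pct_delivered:.0f}% delivered, {100 - pct_delivered:.0f}% short)")
# 2010: $9,600,000 of $30,000,000 (32% delivered, 68% short)
# to date: $9,600,000 of $60,000,000 (16% delivered, 84% short)
```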


I wonder if someone in Washington will have to fight each year to get a small portion of what was promised. If you go back to my earlier post, you will see this quote from the appropriations committee:

The National Cancer Institute reports that it is meeting the funding
level identified for pediatric cancer research in the Caroline Pryce
Walker Conquer Childhood Cancer Act of 2008 within its base budget. The
conferees commend NCI for its attention to this issue.

It takes government math to make $4 million equal $26 million (considering the defense appropriation bill was not included at the time the statement was made). And you want to hand over your lives to these people?

Peace and Grace,

Twitter: @gbworld