Microservices in .NET part 2: Silver Bullets and Free Lunches?


I have spent the better part of this week digging into microservices, and I love the idea. Here are some benefits I see that can be realized by using a microservices approach:

  1. The granularity level allows developers to stay in a single context while solving a problem. This singularity of focus makes it much easier to dig into the details of a specific object or set of objects. In many cases, it will quickly expose poor planning in an organization and provide a rationale for fixing the process. As an example, a service that has a lot of churn is probably one that was not planned out well (not talking about finding additional uses, but rather having to rebuild contracts and re-architect the service on a regular basis).
  2. The services are simple, making it easy to maintain the individual components.
  3. The methodology forces a good separation of concerns.
  4. You can use the best tool for the job rather than stick to a single platform, programming language, paradigm, etc. This is a double-edged sword, as I will uncover a bit later.
  5. Isolated solution problems can easily be fixed without much impact. If you find your employee microservice has an issue, you can fix it without deploying the entire solution.
  6. Working with multiple services enables the use of concepts like Continuous Integration (CI) and Continuous Delivery (CD). This is also double-edged, as you almost have to go to a full-blown CD implementation to use microservices architectures. I will hit this later, as well.
  7. You can get multiple teams working independently of each other. This was always possible, of course, as I have pointed out in my Core As Application blog entries (one here), if you will take the time to plan out your contracts and domain models first. (NOTE: In 2010, I was told “you cannot design contracts first, as you don’t know all of the requirements up front”. By 2011, I proved this wrong by delivering using a contract-first approach, both ahead of time and under budget – a bit of planning goes a long way.)
  8. Systems are loosely coupled and highly cohesive.

This is just a short list of the benefits. The problem I see is that everyone is focusing on the benefits as if we have finally found the silver bullet (do you have werewolves in your organization?) and gained a free lunch. This article focuses on some of the downsides to microservices.

DevOps

As you move to smaller and smaller services, there are many more parts that have to be deployed. In order to keep the solutions using the microservices up and running, you have to be able to push the services out to the correct location (URI?) so they can be contacted properly by the solutions using them. If you go to the nth degree, you could conceivably have tens, if not hundreds, of small services running in an Enterprise.

As each service is meant to be autonomous, this means you have to come up with a strategy for deployment for each. You also have to plan for high availability and failover. And there has to be a solid monitoring and instrumentation strategy in place. In short, you need all of the pieces of a good API Management strategy in place, and you need to do this BEFORE you start implementing microservices. And I have not even started focusing on how everything is wired together and load balancing of your solutions. On the plus side, once you solve this problem, you can tune each service independently.

There is a burden on the Dev side that needs to be tackled up front, as well. You need to start thinking about the requirements for monitoring, tracing and instrumenting the code base and ensure it is part of the template for every service. And you have to plan for failure, which is another topic.

As a final point on this topic, your dev and ops team(s) must be proficient in the combined concept of DevOps to have this be a success. Developers can no longer pitch items over to Ops with a deployment document. They have to be involved in joint discussions and help come up with plans for the individual services, as well as the bigger solutions.

Planning for Failure and Avoiding Failure

Services will fail. Looking at the plethora of articles on microservices, it is suggested you use patterns like circuit breakers (avoid hitting a failed service after a few attempts) and bulkheads (when enough “compartments” are “under water”, seal the rest of the solution off from the failure point). These are fine avoidance strategies, but what if the failing service is a critical component of the solution?
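To make the circuit breaker idea concrete, here is a minimal sketch in C#. All of the names are mine for illustration, and a production implementation (half-open states, thread safety, per-endpoint breakers) is more involved.

Code Snippet
using System;

// Minimal circuit breaker sketch: after a threshold of consecutive failures,
// calls are short-circuited to a fallback for a cool-down period instead of
// hammering a service that is already down.
public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _failureCount;
    private DateTime _openedAtUtc;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public T Execute<T>(Func<T> call, Func<T> fallback)
    {
        // Breaker is open: fail fast until the cool-down period elapses.
        if (_failureCount >= _failureThreshold &&
            DateTime.UtcNow - _openedAtUtc < _openDuration)
        {
            return fallback();
        }

        try
        {
            var result = call();
            _failureCount = 0; // a success closes the breaker
            return result;
        }
        catch (Exception)
        {
            _failureCount++;
            if (_failureCount >= _failureThreshold)
            {
                _openedAtUtc = DateTime.UtcNow;
            }
            return fallback();
        }
    }
}

A caller might wrap a hypothetical employee lookup with breaker.Execute(() => client.GetEmployee(id), () => Employee.Unknown), so a few failed attempts in a row stop traffic to the dead service for the cool-down period.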

Not mentioned in the articles I have read is a means of managing services after failure. Re-deploying is an option, and you can make redeployment easier using quickly set up virtual environments and/or containers, but what if reaching that portion of the network is the point of failure? I would love to hear comments on this next idea: why not look at some form of registry for the services (part of API Management, similar to UDDI, etc.) or a master finder service that exists in various locations and that all applications are aware of? Another idea would be to include backup service locations as part of the hypermedia specification. But either of these solutions further exacerbates the reliance on DevOps, creating even more need for planning solutions and monitoring released solutions.

I don’t see microservices working well without a good CI/CD strategy in place and some form of API management. The more I look into microservices, the more I see the need for a system that can discover its various points on the fly (which leads me back to the ideas of using a finder service or utilizing hypermedia to inform the solutions utilizing microservices where other nodes exist).

Contracts and Versioning

When you develop applications as a single Visual Studio solution (thinking in terms of projects and not products?), you have the ability to change contracts as needed. After all, you have all of the code sitting in front of you, right? When you switch to an internal focus on services as released products, you can’t switch out contracts as easily. You have to come up with a versioning strategy.

I was in a conversation a few weeks ago where we discussed versioning. It was easy to see how URI changes for REST services required versioning, but one person disagreed when I stated that changes to the objects you expose should, in many instances, also be a reason for versioning. The answer was “we are using JSON, so it will not break the clients if you change the objects”. I think this topic deserves a sidebar.

While it is true JSON allows a lot of leeway in reorganizing objects without physically breaking the client(s) using the service, there is also a concept of logical breakage. Adding a new property is generally less of a problem, unless that new element is critical for the microservice. Changing a property may also not cause breakage up front. As an example, you change a property from an int to a long to plan for the future. As long as the values do not exceed the maximum value of an int, there is no breakage on a client using an int in its version of the object. The issue here is that it may be months or even years before a client breaks. And finding and fixing this particular breakage could be extremely difficult and lead to long downtimes.
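A quick illustration of the int-to-long case, using Json.NET and hypothetical order contracts. Nothing breaks at deploy time; the client only fails when a value finally outgrows an int:

Code Snippet
using Newtonsoft.Json;

// Service-side contract after the change: Quantity widened to long.
public class OrderV2 { public long Quantity { get; set; } }

// Client-side contract, never updated: Quantity is still an int.
public class OrderV1 { public int Quantity { get; set; } }

public static class LogicalBreakageDemo
{
    public static void Run()
    {
        // Fine for months or years, as long as the values fit in an int...
        var small = JsonConvert.DeserializeObject<OrderV1>("{\"Quantity\": 100}");

        // ...then one day the service sends a value an int cannot hold,
        // and deserialization throws on the client.
        var big = JsonConvert.DeserializeObject<OrderV1>("{\"Quantity\": 3000000000}");
    }
}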

There are going to be times when contract changes are necessary. In these cases, you will have to plan out the final end game, which will include both the client(s) and service(s) utilizing the new contract, as well as transitional architectures to get to the end game without introducing a “big bang” approach (which microservices are said to help us avoid). In short, you have to treat microservice changes the same way you approach changes on an external API (as a product). Here is a simple path for a minor change (a sketch of the side-by-side versioning in step 1 follows the list):

  1. Add a second version of the contract to the microservice and deploy (do not remove the earlier version at this time).
  2. Inform all service users the old contract is set to deprecate and create a reasonable schedule in conjunction with the consumers of the microservice.
  3. Update the clients to use the new contract.
  4. When all clients have updated, retire the old version of the contract.
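As a sketch of step 1, here is what side-by-side contracts might look like with ASP.NET Web API attribute routing. The Employee types, routes and mapping are hypothetical; the point is that v1 keeps working while consumers migrate:

Code Snippet
using System.Web.Http;

public class EmployeeV1 { public int Id { get; set; } public string Name { get; set; } }

public class EmployeeV2 { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } }

public class EmployeesController : ApiController
{
    // The old contract stays deployed (and scheduled for deprecation)
    // until every consumer has moved to v2.
    [Route("api/v1/employees/{id}")]
    public EmployeeV1 GetV1(int id)
    {
        var e = GetV2(id);
        return new EmployeeV1 { Id = e.Id, Name = e.FirstName + " " + e.LastName };
    }

    // The new contract consumers are being migrated to.
    [Route("api/v2/employees/{id}")]
    public EmployeeV2 GetV2(int id)
    {
        // Stand-in for a real lookup.
        return new EmployeeV2 { Id = id, FirstName = "Jane", LastName = "Doe" };
    }
}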

This is standard operating procedure for external APIs, but not something most people think much about when the APIs are internal.

I am going to go back to a point I have made before. Planning is critical when working with small products like microservices. To avoid regular contract breakages, your architects and Subject Matter Experts (SMEs) need to make sure the big picture is outlined before heading down this path. And the plan has to be conveyed to the development teams, especially the leads. Development should be focused on their service when building, but there has to be someone minding the shop to ensure the contracts developed are not too restrictive based on the business needs for the solutions created from the services.

Duplication of Efforts

In theory, duplication will not happen with microservices, as we have the individual services focusing on single concerns. And, if we can imagine a world where every single class had a service (microservices to the extreme?), we can envision this, at least in theory. But should we break down to that granular a level? I want to answer that question first.

In Martin Fowler’s article, he talks about the Domain Driven Design (DDD) concept of a bounded context. A bounded context is a grouping of required state and behavior for a particular domain. Martin Fowler uses a diagram of two bounded contexts (sales and support) to illustrate.

In that diagram, you see some duplication between the bounded contexts in the form of customer and product. In a microservices architecture, you could conceivably move customer and product into their own services and avoid the duplication, but moving a concept out simply to avoid duplication is not the best motivation in all cases. If you can also make customer or product a business capability, I would wholeheartedly support this approach, but it is not always the case (another sidebar).

When would you not want to separate out customer and product? In short, when the domain concept of these objects is different. In the sales context, a customer contains sales-specific information, including terms of sale (net 60 days?) and other items that may not exist in a support context. If we are talking about a company that ships products (as opposed to a service-only company), we can add other contexts, like shipping and warehousing, that have radically different customer views. In the warehouse, a customer is completely unimportant, as the focus is on pulling orders. From a shipping standpoint, a customer is a name, a shipping address and a phone number; no need for any additional information. A customer microservice either spits out a complete object, allowing the consuming services to filter (not a great idea from a security standpoint), or it provides multiple interfaces for each of the clients (duplication of effort, but in a single service rather than multiple consumers and/or services, so it does not avoid duplication). A product can also be radically different in each of these contexts.
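To make the point concrete, here is a hypothetical sketch of what “customer” might mean in two of those contexts. Forcing both into one shared class couples contexts that share little more than a name:

Code Snippet
using System;

namespace Sales
{
    // In the sales context, a customer carries sales-specific data.
    public class Customer
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public int PaymentTermsDays { get; set; } // e.g., net 60
        public decimal CreditLimit { get; set; }
    }
}

namespace Shipping
{
    // In the shipping context, a customer is little more than a label on a box.
    public class Customer
    {
        public string Name { get; set; }
        public string ShippingAddress { get; set; }
        public string Phone { get; set; }
    }
}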

My advice for starting out is to start with bigger contexts and then decompose as needed. The original “microservice” can act as an aggregator as you move to more granular approaches. Here is an example of transitional states from the contexts above:

  1. Discovery of duplication in the sales and support microservices leads to a decision that customer and product should be separate services.
  2. New customer and product services are created.
  3. Sales and support services are altered to use the new product and customer services.
  4. New versions of the sales and support services are created that no longer serve product and customer information.
  5. Clients are altered to use the new services as well as the sales and support services.

This is one idea of migration, as we will discover in the next section.

Where do we Aggregate?

If we go back to the bounded context discussion in the last section, we see the need to aggregate. The question is: where do we aggregate? You need to come up with a strategy for handling aggregation of information. I am still grokking this, so I am not offering a solution at this time. Here are some options I can see.

Option 1 – Client: In a full microservices architecture, the client may be responsible for all aggregation. But what if the user’s client is a mobile application? The chattiness of a microservices architecture is hard enough to control across your internal multi-GB network infrastructure. Moving this out onto the Internet and cell networks compounds the latency. I am not saying this is a bad option in all cases, but if you opt for this approach, more focus on the latency issue is required from your mobile development team. On a positive note, if the client application can handle single service failures gracefully, you reduce the likelihood of a single point of failure.

Option 2 – Final service boundary: In this approach, the outermost service contacts the many microservices it requires to get work done and aggregates for the client. I find this more appealing, in general, for mobile clients. And it reduces the number of “proxies” required for web, simplifying the user interface client. As a negative, it creates a single point of failure that has to be handled.

Option 3 – Aggregation of dependencies: In this approach, the higher-level service (closer to the client) aggregates what it requires to work for the client. At first, I liked this option the best, as it fits a SOA approach, but the more I read about the microservices idea, the more I see this as a potential combination of the bad points of the first two options, as you introduce numerous points of failure at the aggregate level while still potentially creating multiple points of latency in your client applications. I still think this might be something we can think through, so I am providing it.
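As a rough sketch of Option 2, an outermost service might fan out to the fine-grained services in parallel and hand the client one payload. The service URLs and DTOs here are made up for illustration:

Code Snippet
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class Dashboard
{
    public string ProfileJson { get; set; }
    public string OrdersJson { get; set; }
}

public class DashboardAggregator
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<Dashboard> GetDashboardAsync(Guid userId)
    {
        // Fan out to the microservices in parallel to limit added latency...
        var profileTask = Client.GetStringAsync("http://profile-svc/api/users/" + userId);
        var ordersTask = Client.GetStringAsync("http://order-svc/api/orders?user=" + userId);
        await Task.WhenAll(profileTask, ordersTask);

        // ...then aggregate into a single response for the client.
        return new Dashboard
        {
            ProfileJson = profileTask.Result,
            OrdersJson = ordersTask.Result
        };
    }
}

The aggregator is now the single point of failure mentioned above, so it is exactly the place to apply the circuit breaker and bulkhead patterns.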

If you can think of other options, feel free to add them in the comments.

Testing One, Two, Three

I won’t spend a lot of time on testability, but the more moving parts you have to test, the harder it is. To understand why this is so, create an application fully covered in unit tests at every level, but developed by different teams, and then integrate. The need for integration testing becomes very clear at that moment. And what if you are not only integrating multiple libraries, but multiple discrete, and very small, services? It takes a lot of discipline.

I find the only reasonable answer is to have a full suite of unit tests and integration tests, as well as other forms of testing. To keep with the idea of Continuous Integration, only the smaller tests (unit tests) will be fired off with each CI build, but there will be a step in the CD cycle that exercises the full suite.

There is also a discipline change that has to occur (perhaps you do this already, but I find most people DON’T): you must now treat every defect as something that requires a test. You write the test before the fix to verify the bug. If you can’t verify the bug, you need to keep writing tests before you solve it. Solving something that is not verified is really “not solving” the problem. You may luck out … but then again, you may not.
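As a sketch of that discipline (NUnit syntax, hypothetical names): the test reproduces the reported defect, fails against the current code, and only then is the fix written.

Code Snippet
using NUnit.Framework;

// The code under test, with the defect still in place.
public class PriceCalculator
{
    public decimal CalculatePrice(decimal basePrice, decimal discountPercent)
    {
        return basePrice * (1 - discountPercent / 100); // goes negative past 100%
    }
}

[TestFixture]
public class PriceCalculatorDefectTests
{
    // Reported defect: a discount over 100% produces a negative price.
    // This test verifies the bug: it must fail before the fix goes in.
    [Test]
    public void CalculatePrice_DiscountOver100Percent_ClampsToZero()
    {
        var calculator = new PriceCalculator();

        var price = calculator.CalculatePrice(basePrice: 50m, discountPercent: 120m);

        Assert.AreEqual(0m, price);
    }
}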

Summary

There are no werewolves, so there are no silver bullets. There is no such thing as a free lunch. Don’t run around with hammers looking for nails. The point here is microservices is one approach, but don’t assume it comes without any costs.

As a person who has focused on external APIs for various companies (startups all the way to Fortune 50 companies), I love the idea of taking the same concepts inside the Enterprise. I am also intrigued by the idea of introducing more granularity into solutions, as it “forces” the separation of concerns (something I find so many development shops are bad at). But I also see some potential gotchas when you go to microservices.

Here are a few suggestions I would have at this point in time:

  1. Plan out your microservices strategy and architecture as if you were exposing every service to the public. Thinking this way pushes you to figure out deployment and versioning as a product rather than a component in a system.
  2. Think about solving issues up front. Figure out how you are going to monitor your plethora of services to find problems before they become huge issues (downtime outside of SLAs, etc). Put together a disaster recovery plan, as well as a plan to failover when you can’t bring a service back up on a particular node.
  3. In a similar vein, plan out your deployment strategy and API management up front. If you are not into CI and CD, plan to get there, as manually pushing out microservices is a recipe for disaster.
  4. Create a template for your microservices that includes any pieces needed for logging, monitoring, tracing, etc. Get every developer in the organization to use the template when creating new microservices. These plumbing issues should not require solving again and again.

Peace and Grace,
Greg

Twitter: @gbworld

Troubleshooting: The Lost Skill?


This blog entry comes from an email I received this morning asking me to “check in files”, as my failure to do so was causing a work stoppage. After a very short examination, I found the following:

  1. I had no files checked out (absolutely none).
  2. The problem was a permissions problem, not a check-out problem, i.e., the person who could not check in their files was not being stopped by my failure to act, but by someone else’s incorrect granting of permissions.
  3. I had no permissions to solve the problem (i.e., grant the correct permissions).

Further investigation of the problem would have revealed it was a permissions issue. In this case, the only consequence is another day of lost productivity, and a wonderful opportunity to learn. In some cases, the consequences are more dire. Consider, for example, Jack Welch, the CEO of GE.

Jack made an assumption and ended up destroying a manufacturing plant. In one telling of Jack Welch’s story, the dialog goes something like this:

Jack: Aren’t you going to fire me now?
Manager: Fire you? I just spent 2 million dollars training you.

Considering Jack Welch is now one of the most successful executives of all time, it is good his manager was able to troubleshoot the aftermath of a problem Jack had worked through on assumption. The point is plain: when we don’t troubleshoot a problem, we go on assumptions. In the email I received this morning, there was an assumption I had files checked out. Rather than test the assumption, work stopped.

Tony Robbins tells a story in his Personal Power program about a suit of armor. As he is walking on stage, every time he moves close to a suit of armor there is feedback. The audience eventually starts screaming at him “it’s the armor”. But he continues to stand near the armor and the feedback eventually goes away. He then moves away and the feedback comes back. It turns out there was a fire being handled on a very close radio frequency, and the emergency messages were interfering with the microphone.

Personally, I think the above story is a myth, as I know the FCC is very careful in doling out bands and it is unlikely a microphone shares a band with emergency services. But this is also an assumption, and proper troubleshooting would have me examining the issue.

The path of least resistance

On Facebook … nay, on the Internet as a whole, a large majority of items are written out of assumptions or biases, and not an examination of the facts. For most people, whether you agree with Obamacare or not is not the result of examining the facts completely and then drawing conclusions. Instead, a quick sniff test is done to determine if you feel something smells, and then action is taken.

Let’s take an example. In 2006, the news media reported that the Duke lacrosse team had raped an African American stripper. The case seemed open and shut, as the evidence piled up. Duke University suspended the team, and when the lacrosse coach refused to resign, Duke’s president cancelled the rest of the season. The case seemed so open and shut that Nancy Grace (CNN) was suggesting the team should be castrated.

When the assumptions were removed, a completely different story was told. Not only was the evidence thin, much of it was manufactured. The District Attorney, Mike Nifong, was disbarred and thrown in jail for contempt of court.

We can also look at the George Zimmerman case, where the initial wave of “evidence” painted another “open and shut” case. But the “open and shut” case, based on assumptions, began to crumble when it was discovered the media had edited the 911 tape to paint Zimmerman as a racist and carefully chose the video and picture evidence to paint a picture of a man who had no wounds and was the obvious aggressor.

The point here is not to rehash these cases, but to point out that assumptions can lead to incorrect conclusions. Some of these assumptions may lead to dire consequences, while most just result in a less than optimal solution.

Now to the title of the section: The path of least resistance.

When we look at the natural world, things take the path of least resistance. Water tends to travel downhill, eroding the softest soil. Plants find the most optimal path to the sunlight, even if it makes them crooked. Buffaloes would rather roar at each other to establish dominance than fight, as fighting takes precious energy. And humans look for the least amount of effort to produce a result.

Let’s pop back to Obamacare, or the PPACA (Patient Protection and Affordable Care Act), as it illustrates this point. Pretty much everyone I encounter has an opinion on the subject. In fact, you probably have an opinion. But is the opinion based on assumption? You might be inclined to say no, but have you actually read the bill? If not, then you are working on distillations of the bill, most likely filtered through the sites you like to visit on a regular basis. And, more than likely, you have chosen these sites because they tend to fit your own biases.

I am not deriding you for this choice. I only want you to realize this choice is based more on assumptions than troubleshooting. Troubleshooting takes some effort. In most cases, not as much as reading a 900+ page bill (boring) or many more thousands of pages of HHS rules (even more boring). But, by not doing this, your opinion is likely based on incomplete, and perhaps improper, facts.

Answering Questions

I see questions all the time. Inside our organization, I see questions for the Microsoft Center of Excellence (or MSCOE). I have also spent years answering online questions in forums. The general path is:

  1. Person encounters problem
  2. Person assumes solution
  3. Person asks, on the MSCOE list, for help with the assumed solution – in general, the question is a “How do I wash a walrus” type of question rather than one with proper background on the actual business problem and any steps (including code) taken to attempt to solve it
  4. Respondent answers how to solve the problem, based on their own assumptions, rather than using troubleshooting skills and asking questions to ensure they understand the problem
  5. Assumed: Person implements solution – while the solution may be inferior, this is also the “path of least resistance” and safe. If the solution fails, they have the “expert” to blame for the problem (job security?). If it succeeds, they appear to have the proper troubleshooting skills. And very little effort is expended.

What is interesting is how many times I have found the answer to be wrong when the actual business problem is examined. Here are some observations.

  • The original poster, not taking time to troubleshoot, makes an assumption about the solution (path of least resistance)
  • The respondent, taking the path of least resistance, answers the question with links to posts solving the problem as stated
  • If the original poster had used troubleshooting skills, rather than assumptions, he would have thrown out other possibilities, included all relevant information to help others help him troubleshoot, and would have expressed the actual business problem
  • If the respondent had used troubleshooting skills, rather than assumptions (primarily the assumption that the poster had used troubleshooting skills), he would have asked questions before giving answers.

To illustrate this, I once saw a post similar to the following on a Microsoft forum (meaning this is paraphrased from memory).

Can anybody help me. We have a site that has been working for years in IIS 4. We recently upgraded to Windows Server 2008 and the site is no longer serving up asp.net files located at C:\MyFiles. I just hate Microsoft right now, as I am sure it is something they changed in windows. I need to get this site up today, and f-ing Microsoft wants to charge for technical support.

The first answers dealt with how to solve the problem by turning off the feature in IIS that stops the web server from serving files outside of the web directory structure. While this was a solution, troubleshooting the problem would have shown it was a bad solution.

Imagine the user had written this instead.

We recently upgraded to Windows Server 2008 and the site is no longer serving up asp.net files located at C:\Windows\System32.

Turning off the feature in IIS would still have solved the problem, but there would now be an open path directly to the system for hackers. And, if this is the way the person implements the solution, there are likely other problems in the code base that will allow the exploit.

The proper troubleshooting would have been to first determine why ASP.NET files were being served from C:\MyFiles instead of IIS directories. Long story, but the reason had to do with an assumption that developing on a developer box generally, perhaps always, led to sites that did not work in production. So every developer was working on a production server directly. The C:\MyFiles folder was created from an improper assumption about security: that it was more secure to have developers working from a share than an IIS directory. This led to kludges to make the files work, which failed once the site was moved to a server with a version of IIS that stopped file and folder traversal. That restriction was added as a security provision, as hackers had learned to put in a URL like:

http://mysite.com/../../../Windows/System32/cmd.exe%20;

Or similar. I don’t have the actual syntax above, but it was similar and it worked. So IIS stopped you from using files outside of IIS folders. Problem solved.

Now, there are multiple “solutions” to the poster’s problem:

  • Turn off the IIS feature and allow traversal of directories. This makes the site work again, but also leaves a security hole.
  • Go into IIS and add the C:\MyFiles folder as a virtual directory. This is a better short-term solution than the one above. I say short term, as there is some administrative overhead to this solution that is not needed in this particular case.
  • Educate the organization on the proper way to set up development. This is not the path of least resistance, but a necessary step to get the organization on the right path. This will more than likely involve solving the original problem that created the string of kludges that ended with a post blaming Microsoft for bringing a site down.

Troubleshooting The Original Problem

I am going to use the original “Check in your files” problem to illustrate troubleshooting. The formula is general enough you can tailor it to your use, but I am using specifics.

First, create a hypothesis.

I cannot check in the files, so I hypothesize Greg has them checked out.

Next, try to disprove the hypothesis. This is done by attempting to find checked-out files. In this case, the hypothesis would have easily been destroyed by examining the files and finding out none were checked out.

[screenshot]

So the next step would be to set up another hypothesis. But let’s assume we found a file “checked out”. The next step is to look at the person who has the file checked out to ensure the problem is “Greg has the file checked out” and not “someone has the file checked out”.

[screenshot]

Since the name Greg Beamer is not here, even if the file were checked out, he cannot solve the problem.

Next, even if you have a possible solution, make sure you eliminate other potential issues. In this case, let’s assume only some of the files were checked out when examined, but the user was still having problems uploading. What else can cause this issue?

Here is what I did.

  1. Assume I do have things checked out, as it is a possible reason for the problem. When that failed, look at the user’s permissions on the files in question. I found this:
    [screenshot]
  2. Hypothesis: the user does not have proper permissions. Attempted solution: grant permissions.
  3. The permissions turned out to be inherited, so it was not a good idea to grant at the page level. Moving up to the site level required opening the site in SharePoint Online, where I found the same permissions.
    [screenshot]
  4. Now, my inclination was to grant permissions myself, but I noticed something else.
    [screenshot]

    which led to this
    [screenshot]

    which further led to this (looking at Site Collection Images):
    [screenshot]

The permissions here are completely different. The user is not restricted, so he can access these.

I did try to grant permissions to solve the issue:
[screenshot]

But ended up with incomplete results:

[screenshot]

I further got rid of the special permissions on some folders, as they were not needed. They were more than likely added to give the developer rights to those folders. I still have the issue above, however, which means someone more skilled needs to solve the problem.

The point here is numerous issues were found, none of which were the original hypothesis, which was reached via an assumption. The assumption was

I cannot check in, therefore I assume someone has the files checked out. Since Greg is the only other person I know working on the files, I assume he has them checked out.

Both assumptions were incorrect. But that is not the main point. The main point is that even if they had been correct, there could still have been other issues. As illustrated, there were numerous issues that needed to be solved.

Summary

Troubleshooting is a scientific endeavor. Like any experiment, you have to state the problem first. If you don’t understand a problem, you can’t solve it.

You then have to form a hypothesis. If it fails, you have to do over, perhaps even redefining the problem. You do this until you find a hypothesis that works.

After you solve the problem, you should look at other causes. Why? Because a) you may not have the best solution, and b) you may still have other issues. This is a step that is missed more often than not, especially by junior IT staff.

Let me end with a story on the importance of troubleshooting:

Almost everyone I know who has the CCIE certification took two tries to get it. If you don’t know what the CCIE is, it is the Cisco Certified Internetwork Expert certification. It is considered one of the most coveted certifications and one of the hardest to attain. The reason is you have to troubleshoot rather than simply solve the problem.

The certification is in two parts. A written exam, which most people pass the first time, and a practical exercise, which most fail. The practical exercise takes place over a day and has two parts:

  1. You have to set up a network in a lab according to specifications given at the beginning of the day.
  2. After lunch, you come back to find something not working and have to troubleshoot the problem

Many of the people I know who failed the first time solved the problem and got the network working. So why did they fail? They went on assumptions based on problems they had solved in the past rather than working through a checklist of troubleshooting steps. Talking to one of my CCIE friends, he explained it this way (paraphrased, of course):

When you simply solve the problem, you may get things working, but you may also end up taking shortcuts that cause other problems down the line. Sometimes these further problems are more expensive than the original problem, especially if they deal with security.

Sound similar to a problem described in this blog entry? Just turn off IIS directory traversal and the site works. Both the poster and the hacker say thanks.

Please note there are times when the short solution is required, even if it is not the best. There are time constraints that dictate a less-than-optimal approach. Realize, however, this is technical debt, which will eventually have to be paid. When you do not take the time to troubleshoot and run on assumption, build in time to go back through the problem later, when the crisis is over. That, of course, is a topic for another day.

Peace and Grace,
Greg

Twitter: @gbworld

Installing the Kanban Process Template


I am working on material on using Kanban in TFS, largely to examine its feasibility in a project I am working on at this time. TFS 2012 already has some Kanban visualizations and a board if you install Update 1 (Visual Studio 2012 Update 1 can be downloaded here). Please note that the TFS 2012 update is a full install package. Both can also be pulled down from MSDN Subscriptions.

I will follow this blog entry with other entries on Kanban in Visual Studio and TFS (primarily 2012).

Connect to Team Foundation Server

If you are already running off TFS, this step will already be completed. Click on the TEAM menu and then Connect to Team Foundation Server.

[screenshot]

For your local box, the option should already show up. If not, the default is http://{machinename}:8080/tfs. This brings up a dialog where you can select a server. If you have to add one, click the Servers button and add it using the format above.

[screenshot]

Click connect.

Add the Process Template

In this case, I am adding the template from the Visual Studio ALM Rangers. It is found at http://vsarkanbanguide.codeplex.com/. You download the Rangers_vsarkanbanguide_vs2012_AllSamples.zip package (or the everything package) and unzip the process template(s). There is one for VS 2010 and one for VS 2012.

To install, you need to open the Process Template Manager. Here are the instructions in VS 2012, but the screens are the same in VS 2010.

First, open the Process Template Manager by clicking on TEAM, then Team Project Collection Settings and then Process Template Manager.

[screenshot]

This brings up the manager:

[screenshot]

You will then click the Upload button and choose the folder where the ProcessTemplate.xml file is located.

[screenshot]

Click Select folder and it should start installing.

[screenshot]

You will get this message when it is complete.

[screenshot]

And Microsoft Kanban 1.0 will be installed.

[screenshot]

And that is about all it takes. You can now create a new team project.

Create a Team Project

To create a team project, there are some differences between VS 2012 and 2010. In 2012, the Team Explorer automatically pops to the front when you attach to TFS, with the option of creating a new project up front. You can then click on Create New Team Project and start the wizard.

[screenshot]

In VS 2010, you have to right-click on the top Team Explorer node and choose to create a project:

[screenshot]

The wizard asks you to name your project.

[screenshot]

Click next and choose the Microsoft Kanban 1.0 template:
[screenshot]

If you are using Visual Studio 2010 and TFS 2012, you will have to patch 2010 to work with TFS 2012. The Update is located here. If you are using TFS 2010, make sure you install the Dev 10 version of the template and you will be fine.

But Doesn’t Microsoft Have Kanban Now?

Glad you asked. Yes, there is a Kanban board in TFS 2012 if you have installed Update 1. It is part of the standard MSF for Agile Software Development template. I am not sure whether it can be used in the Scrum template. Either way, you will have to use the team project portal to use all of the features of the template. I will cover both these features and the ALM Rangers template in future posts.

Summary

This is a brief intro to Kanban in TFS focused on installing the template from the VS ALM Rangers. Keep in touch and I will cover more of the features and how to implement them in a real-world scenario. The best way to stay notified is to follow me on Twitter, as I post my blog entries there.

Peace and Grace,
Greg

Twitter: @gbworld

Session and Cookies in ASP.NET MVC? Oh my!


This post is brought to you by this post on LinkedIn (you may need to belong to the ASP.NET MVC group). The question of the day:

In MVC it bad practice to use session & cookies then what are the options to maintain session

If you understand how HTTP works, you understand this is a bad question, or at least one born out of ignorance.

Before getting into the meat, let’s examine two important concepts: synchronicity and state.

Synchronicity is whether or not we immediately get feedback. For example, if I fill out a form to join and immediately get logged on, it is a synchronous system. If, instead, I fill out the form, get an email with a link and then log in, it is asynchronous. A better example is a chat application (synchronous) versus email (asynchronous), or the difference between the chat window in Facebook (synchronous) and sending a Facebook message, playing Words with Friends or using Twitter (asynchronous).

Now to state. State is the current “data” stored about you, or something else, in the system. An oversimplification, but consider I have $1000 in the bank. The state of my account is $1000 on the positive side. If I then withdraw $100 in cash, the state is $900 in the bank. The same is true if I change my Facebook status to single (not my real status, of course, but it would be my state in the system).

How applications work (and some history)

The first applications were normally either dumb terminals (maximum statefulness and synchronicity) or client/server applications (high level of statefulness and synchronicity). When you logged in, you were connected full time to the server, so any changes were instantly reflected. Not 100% true, but it covers the general nature.

Now, let’s move up to the web. The client is not connected to the server except when making a request, making it an asynchronous and stateless system. This may take a moment to sink in, but I feel the need to digress (ADHD popping up?) and cover how we got here.

I first got on the Internet in the late 80s through shell programs on a BBS (bulletin board system). At this point in time, you had email and newsgroups as your primary means of communication with other people. To organize information, you had Gopher. Gopher was a decent system for putting “books” on the Internet, but a bad system for user interaction.

One more step into ADHD land: the government (and education) started what would become the “Internet” in the late 1960s as ARPANET. They brought more of education in, and ultimately other countries (our allies), in the 1970s.

Now, let’s cover the WWW. Gopher was a bit of a pain to program with. But that changed when Tim Berners-Lee took an existing standard (SGML, or Standard Generalized Markup Language) and created HTML (HyperText Markup Language). The main push was to make it easier to disseminate information, but it was easy enough to create applications that were interactive. I could go deeper (will consider in another post) and get into what HyperText truly means (perhaps a post on REST?).

Now that you have a bit of background, let’s get to the core. Hypertext applications are both stateless and asynchronous. The operations in these types of applications may be synchronous (added to avoid arguments from purists), but the basic nature of the communication is asynchronous, as you are only connected to the server for the duration of each request.

Hypertext applications work on a request and response system. You create a request and send it to the server. Prior to your request coming in, the server has no clue who you are, even if you have contacted it 20 times in the last 2 minutes. Every request is treated as a new request from an unknown entity, making it stateless. The server disconnects after the response is sent, so it is non-connected (except to handle requests) and you can take time to determine the next course of action, making it asynchronous.

Now some may argue with the last line of the last paragraph, thinking “if I am logged in, I can’t walk away for a long time; I have to fill in the form within X minutes or I have to start over”. This is true, but as long as you have some means of saving state to the server, you can pick up with the remainder of your work at any time. And, if you are not logged in, you can keep the page open, shut down your computer, and come back and click links on the page hours, or even days, later. So the basic nature of hypertext is stateless. That is our important takeaway to move to session and cookies.

Persistence

Persistence is the mechanism by which state is stored until a user is ready to use it again. Long term persistence is usually handled either by a database or files. Once the user has changed something, the information is normally persisted to these long term persistent stores.

In addition to long term persistence, there is short term persistence. This is where cache comes in. Cache is persisting of state in memory. What about session? Session is a cache that includes bits to tie information to a particular user. Session also allows for expiration of cached state based on a user logging out (immediate) or timing out (end of session time). When either condition is met (log out or time out), the user specific state is cleared.

It should be noted that the main difference between a database, files, memory cache and session is where the state is stored. This is important for reliability and speed. Storing in memory fails if the machine is rebooted (or worse, the power pulled), but it is much faster to retrieve from memory than involve I/O (either pulling from disk, over the network or both).

Then what are cookies? Cookies are another form of persistence mechanism, only the information is stored on the client. The cookie may simply persist a user ID (to retrieve from session) or it may persist all of the user’s data (impractical over a certain amount of information). If it is just an ID, you need another method to pull the rest of the information, which means persistence on the server.

In summary, for this section of this post (there are others, just not relevant):

Server Side Persistence

  • Database
  • Files (XML or otherwise)
  • Cache (temporary)
  • Session (temporary)

Client Side Persistence

  • Cookies

Kludges, aka sessions and cookies

This may seem unfair, calling these items kludges, but let’s examine. You have a stateless mechanism (web pages), as they do not hold state. The client is only connected for the length of a request; the server either errors or fulfills the request and then shuts the connection down. The request is forgotten after the response is sent.

But you need state. So you first create a means of storing information on the client. This is the cookie. Think of a cookie as a property bag storing name/value pairs. It is sent to the client as part of the header of the response (i.e., the part you don’t see in your browser). The client then stores it and sends it back with every request to this server.

But what happens if you need more information than can be stored in a cookie? Then you create an in-memory cache tied to the user via some type of session ID. If you have been paying attention, you realize the session ID is sent to the user with every response and sent back in with every request.
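In ASP.NET MVC terms, the two mechanisms look something like this minimal sketch (the “theme” key and controller are made up for illustration):

Code Snippet
using System.Web;
using System.Web.Mvc;

public class ProfileController : Controller
{
    public ActionResult Remember()
    {
        // Server-side: an in-memory cache entry tied to this user's session ID.
        Session["Theme"] = "dark";

        // Client-side: a name/value pair sent in the response header; the
        // browser stores it and sends it back with every request to this site.
        Response.Cookies.Add(new HttpCookie("theme", "dark"));

        return RedirectToAction("Index");
    }

    public ActionResult Index()
    {
        var fromSession = Session["Theme"] as string; // gone after timeout or app recycle
        var cookie = Request.Cookies["theme"];        // gone if the user clears cookies
        var fromCookie = cookie == null ? null : cookie.Value;

        return Content(fromCookie ?? fromSession ?? "default");
    }
}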

But what if people find out cookies can be used for things they don’t like and turn them off? This was an early problem with the web (mid 90s). Browser developers added the ability to refuse cookies (the cookie is still sent, but it is not stored, so it will not be sent with the next request). To get around this, another cookie store was created on the client side, and these were called “server cookies”. When the user decided to turn off cookies, these cookies stayed alive, as they were in a different store.

The store for server cookies was designed to purge when the user left the site, but since the developer on the server side has some control, this did not always happen. The end result is that while it is harder to turn off server cookies in all browsers, it is not impossible.

Are Session and Cookies bad in ASP.NET MVC?

If you have read this far, it should be obvious that session and cookies are simply a persistence mechanism between stateless requests. Are they bad? Yes and no. In many cases, cookies and session are a crutch to cover up bad architecture and design. In those cases, I would say they are bad. Overall, I would question the use of session and cookies in ASP.NET MVC, especially if you don’t truly understand the nature of Hypertext communication (and no, this session is not the 101 class).

On the original post, Ignat Andrei posted “And if it is a problem – it’s not only a MVC problem – it’s an ASP.NET problem.”. Correct, it is a problem in ASP.NET, as well. Or at least a variation of the same kludgy solution. In fact, it is a “problem” in all HTTP centered apps that allow for session (web server controlled in-memory cache) and cookies.

One of the beauties of ASP.NET MVC is you can get rid of a false reliance on the Session object and/or cookies for most of your work. Not that MVC is really that different from any other web application (it is just a pattern), but that the pattern should get you thinking in paradigms where you don’t rely on kludgy state mechanisms.

ASP.NET MVC was created for a couple of reasons. The most obvious is Ruby on Rails, since Rails uses the MVC pattern. But another was pushing people towards a paradigm that encourages the use of best practices, primarily separation of concerns. While the paradigm works fairly well, in many cases, you don’t solve process and people problems with technology, at least not completely.

Now back to the question of whether session is bad. Why would it be bad? One reason is developers generally use session as a crutch to store page items. This is overcoming bad architecture with bad practices.

Another reason is session takes up space on the server. You don’t get session for free. When you use session, you take up memory and reduce scale. This is not always bad, but you do have to be aware of it. The question comes down to whether scalability is an issue. The same is true of any form of in-memory cache, but if you are using MemoryCache behind a factory or provider pattern, you can easily switch to a distributed cache; you don’t have that easy, configuration-change type of switch going from Session to distributed (at least not without adapters), and even if you could, the purpose and pattern are different.
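Here is what I mean by the provider pattern around MemoryCache, sketched with illustrative names. Callers depend on the interface, so swapping the in-memory implementation for a distributed cache touches one class, not every caller:

Code Snippet
using System;
using System.Runtime.Caching;

public interface ICacheProvider
{
    T Get<T>(string key) where T : class;
    void Set<T>(string key, T value, TimeSpan slidingExpiration) where T : class;
}

// In-memory implementation; a distributed-cache-backed ICacheProvider could
// replace it without changing the consuming code.
public class MemoryCacheProvider : ICacheProvider
{
    private readonly MemoryCache _cache = MemoryCache.Default;

    public T Get<T>(string key) where T : class
    {
        return _cache.Get(key) as T;
    }

    public void Set<T>(string key, T value, TimeSpan slidingExpiration) where T : class
    {
        _cache.Set(key, value, new CacheItemPolicy { SlidingExpiration = slidingExpiration });
    }
}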

Another reason is IIS recycles. It does not happen often, but when it does, it screws over your users. Since HTTP is stateless, the user can still request pages, but he is now hosed as far as anything stuck in session goes. The same can be true of having to log back in, depending on your mechanisms, however.

Another reason is web farms. Not a big deal for the single-server “share my club” type of site, but if you have a site on multiple servers, you end up with session in memory on all of the servers (or at least the potential for that). Ouch!

Now the comeback is “Greg, I use session state in SQL Server”. Great! Then why use session at all? If you are taking the trip to SQL Server for every hit, why incur the consumption of extra space for items that may never be requested? And why incur the extra overhead of clearing a user’s session, including the piece where you clear the data out of SQL Server? It is pure overhead at this point.

Does this mean never use session? Let’s look at that in a section entitled …

What if I feel the need to use Session in ASP.NET MVC

Did you not read the rest of this post? Okay, so there is an alternative: use TempData instead. It is an abstraction in ASP.NET MVC on top of the old ASP.NET session, and it allows you to more easily change types of persistence. Realize this is still an abstraction on top of a kludge, but it is a useful abstraction, as it is easily switched to a better method.
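A typical use, sketched with hypothetical types: TempData carries a value across exactly one redirect (it is removed once read), which covers most of the “I just need it on the next page” cases people reach to Session for.

Code Snippet
using System.Web.Mvc;

public class OrderController : Controller
{
    [HttpPost]
    public ActionResult Create(OrderViewModel model)
    {
        // Survives the redirect that follows, then goes away once read.
        TempData["Message"] = "Order created.";
        return RedirectToAction("Confirmation");
    }

    public ActionResult Confirmation()
    {
        ViewBag.Message = TempData["Message"]; // read (and removed) here
        return View();
    }
}

public class OrderViewModel
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}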

But you did not answer if it was bad …

As with everything in life, there is no one-size-fits-all answer. As an example, I present Adriamycin. Originally developed as an antibiotic, it was found to be too toxic. But it is rather effective on certain types of cancer, so while it is a poison, and should generally not be consumed, there are times it is the most effective path.

Read that again. What I am saying is I would generally avoid session like the plague, but there are always exceptions.

If you want a rule of thumb: Avoid the use of session to store temporary values, unless you can find no other mechanism, or the factors of developing the application make session a better option. The same can be said of storing in cookies.

The Aspects of Development and Session

All of this stems back to the aspects of development. There are five major aspects to consider:

  • Availability
  • Scalability
  • Maintainability
  • Performance
  • Security

In addition to these, time and budget have to be considered, as they are valid concerns. If using session is needed for time, for example, you may have to code it that way. But make sure there is time to remove it later if you find you are using it in a kludgy manner. The same is true of budget, but then time is money.

To figure out what is best, you look at each application uniquely and determine the best mix of the aspects. If scalability is the biggest issue, you may lose some performance to better separate out concerns (monoliths run fast, but scale horribly). In financial applications, security is often the biggest concern. In general, maintainability (or the lack thereof) costs more money than any other aspect.

Summary

Session (and cookies) are often used as persistence mechanisms of convenience. They were both built as kludges to make a stateless system stateful. As such, unless you have a proper reason for using them, you should not use them. And, if you have a proper reason, you should examine it and see whether the reason is based more on familiarity than true need.

But the above advice should be applied to every architecture/design you have. You should examine all of your assumptions and ensure you have made proper decisions. Only then can you be sure things are solid (nice play on words, eh?).

Realize that the above has a caveat. You have to be able to examine within the time constraints of the project. If you can’t deliver, it does not matter how good the product is.

For your club website, session, while not the best solution, is probably fine. The same is true for the use of cookies. Once you move into the Enterprise space, however, these kludges can become costly.

Peace and Grace,
Greg

Twitter: @gbworld

The Importance of Context


We just had an issue at work with a new internal client using a service call we have set up differently than previous users. Let me see if I can relate the problem at a high level.

Suppose you have a website that is global, like a blog. You have ads on the page that pay for your blog site, so you want them to show. In your code, you serve English ads for the English-speaking audience, Spanish ads for the Spanish-speaking audience, etc. But if an individual ad is ONLY available in English, you want it shown nonetheless. So, you roll up at the individual ad level.

Under the hood, you have a service that pulls the ad and automatically rolls up on ads only available in the “default language” (in this case, US English).

Now, you expand the service to other items, one of them being polls, and you decide to use the service not only to display the poll, but to create the link to navigate to the poll (yes, I know a method that serves multiple purposes ends up being a really bad jack of all trades, but roll with me, as I inherited this). You only want to show the items in that particular language.

That is the crux of the problem. The norm is to roll up, but this one client does not want that. (Actually, this goes much deeper, and if I were to explain the architectural solution employed, you would not believe me, as it sounds so preposterous.)

Thinking RESTful?

I have the question mark above, as this really does not have anything to do with REST, but it is something you more often see in REST, as you are dealing with representations of state. In order to pull a customer from a system in REST, whether by ID or last name, the object (state) has to contain the information you are using to pull it. Not 100% true, but the norm.

There is a good reason to employ this in nearly any (or perhaps all) situations where you are getting information outside of your process space (for example, calling another server). Let me illustrate:

Example 1: No Context

Give me all users with red hair that live in Nashville and speak fluent Mandarin Chinese and Japanese

Answer: John Doe and Jane Doe

Example 2: Context

Answer:
{ "name": "John Doe", "location": "Nashville, TN", "languages": ["english", "mandarin", "japanese", "german"], "eyes": "blue" }
{ "name": "Jane Doe", "location": "Nashville, TN", "languages": ["english", "mandarin", "japanese"], "eyes": "brown" }

Someone can now state, “but I only want users with blue eyes,” and filter at the end. This is impossible if the context is not sent back.

Now, in the software I am working with, we actually get a set that would include people who only spoke English, due to rollup, so I would have to filter out those users after the fact. But I can do this.

There is another benefit to context. Suppose I thought I asked for films that had Mark Hamill (think Luke Skywalker) in them and got back the following:

  • Friday the 13th
  • Footloose
  • The River Wild

Something is obviously wrong. But if I had the context returned in the answer, I would see that the context was Kevin Bacon and not Mark Hamill. Oops. Now I can check and see if I accidentally sent in Kevin Bacon and it was my problem. Without the context, however, I have to know enough about movies to make an educated guess.

What’s the Point?

In general, when you are dealing with software, you should return the criteria for a request with the response. I say “in general”, as there are possible exceptions to the rule, especially when network hops introduce latency and you want to reduce the data set to only what is required to render the answer, without the extra weight of the context. I say possible exceptions, as I believe you should make sure the latency issue is big enough to justify creating a system that is harder to debug and maintain.
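In code, the rule can be as simple as making the criteria part of the response envelope. A hypothetical sketch:

Code Snippet
using System.Collections.Generic;

public class UserSearchCriteria
{
    public string Location { get; set; }
    public string HairColor { get; set; }
    public List<string> Languages { get; set; }
}

public class User
{
    public string Name { get; set; }
    public string Eyes { get; set; }
}

// The response parrots the criteria back, so a caller (or someone reading a
// log) can tell "I asked the wrong question" apart from "the service gave
// the wrong answer".
public class UserSearchResponse
{
    public UserSearchCriteria Criteria { get; set; }
    public List<User> Results { get; set; }
}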

Is this realistic or necessary? Since many developers don’t understand how TCP/IP and HTTP work, and rarely ever have to dig that deep, the answer might seem to be no. But, if you understand how the underlying protocols work, you know there is a lot of chattiness and redundancy in the system.

The chattiness and redundancy don’t add anything most of the time, but they are very valuable when, as an example, a browser team needs to filter out items that can cause a cross-site scripting issue. By examining the headers sent, we can see that some of the information is coming from a different site than the one requested.

This may not be the best example, but considering that HTTP is stateless and context is passed with every request, there is a lot of data there to examine if there is an issue. Ultimately, being able to quickly solve issues is more important, at least in most cases, than reducing the size of a payload by eliminating context.

Summary

One of the issues that ends up costing a lot of time is trying to determine what is causing an issue in a system. Without parroting back the context, it is often difficult to determine whether the issue was sending in incorrect data or a server/service bug. By comparing context sent in to context returned, you can more easily determine the nature of the problem and the solution. This can be worth its weight in gold.

Peace and Grace,
Greg

Twitter: @gbworld

Adventures in Refactoring: Eliminating Switches in Code


Before I get started, I need to explain that there are many constraints in this project that I cannot get rid of. For this reason, I am focusing solely on refactoring out branching logic that will continue to grow if left as is.

Let’s describe the situation. A system has been put in place for other teams to put “notifications” into “queues”. The terms are in quotes, as the “notification” is an item placed in a list of work to do and the “queue” is a SharePoint list that both “queues” items and is used to update status. This is shown below:

[Figure: other teams place “notifications” into SharePoint-list “queues”, which a Windows service reads and updates]

Architecturally, I know of many better patterns, but this is what I have to work with. Technically, I have to facilitate the idea of serializing objects to strings so a Windows service can read the string, deserialize the correct object and do some work on it. The side of the equation being refactored in this blog entry is the serialization of multiple derived types.
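
For completeness, the consuming side presumably does the inverse. A minimal sketch of that half (not covered further in this entry, and assuming the caller knows the derived type):

Code Snippet
// Requires System.Xml.Serialization and System.IO
public static T DeserializeFromXml<T>(string xml)
{
    var serializer = new XmlSerializer(typeof(T));

    using (var reader = new StringReader(xml))
    {
        return (T)serializer.Deserialize(reader);
    }
}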

Objects enter the system via the “Notification” service, which accepts an object of the type NotificationBase:

Code Snippet
public interface INotificationService
{
    Guid NotifyChange(NotificationBase message, string queue);
    ProcessStatus GetProcessStatus(Guid notificationId, string queue);
}

public class NotificationBase : IExtensibleDataObject
{
    public Guid NotificationId { get; set; }
    public string Description { get; set; }
    public ExtensionDataObject ExtensionData { get; set; }
}

The actual client creates a type derived from this base, as in the following:

Code Snippet
public class AdvisorNotification : NotificationBase
{
    public Guid AdvisorID { get; set; }
    public OperationType Action { get; set; }
    public string Version { get; set; }
    public List<GuidMap> CloningChanges { get; set; }
}

There is a potentially unbounded number of derived types, as more and more teams use this service to enter work into “queues”.

The current pattern to facilitate the serialization and storage of the items in the “queues” was to switch on type.

Code Snippet
string messageString;

if (notification is AdvisorNotification)
{
    var advisorNotification = (AdvisorNotification)notification;
    messageString = SerializationHelper.SerializeToXml(advisorNotification);
}
else if (notification is WorkflowNotification)
{
    var workflowNotification = (WorkflowNotification)notification;
    messageString = SerializationHelper.SerializeToXml(workflowNotification);
}

The primary reason it was done this way is familiarity with switch-style logic. The problem, from a maintenance standpoint, is that I have to update this code every time a new derived notification type is created, which means software that is constantly being changed. It also runs afoul of the Open/Closed Principle (a SOLID practice: code should be open for extension but closed for modification) and exhibits the code smell of having to change logic in multiple places to facilitate a simple change.

There are a couple of ways to refactor out this pattern. A common approach, for example, is polymorphism, but due to various constraints I cannot head that direction. I also see a simpler way to solve the problem at hand, and it shows how little change is needed to simplify the software.

It would be nice to have this down to one line, like so:

Code Snippet
string messageString = SerializationHelper.SerializeToXml(notification, false);

But the serializer is built for NotificationBase, and handing it a derived instance causes an exception, as XmlSerializer does not know about the derived type. So, a look at the serializer is in order:

Code Snippet
public static string SerializeToXml<T>(T obj)
{
    var serializer = new XmlSerializer(typeof(T));

    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, obj);
        return writer.ToString();
    }
}

The issue here is typeof(T). Why? It uses the generic type argument, and since the argument is implicit, the compiler infers it from the declared type of the variable. The call is compiled as if this line had been written:

Code Snippet
string messageString = SerializationHelper.SerializeToXml<NotificationBase>(notification, false);

What I need the compiler to do is set it up like this:

Code Snippet
string messageString = SerializationHelper.SerializeToXml<AdvisorNotification>(notification, false);

Now, your thought process at this time might be to supply the generic type dynamically, but the following syntax does not compile:

Code Snippet
string messageString = SerializationHelper.SerializeToXml<notification.GetType()>(notification, false);
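
Strictly speaking, reflection can close the generic method over a runtime type, but it is verbose enough that it hardly counts as an easy facility. A sketch, assuming SerializeToXml is the only method with that name on the helper:

Code Snippet
// Build and invoke SerializeToXml<AdvisorNotification> at runtime
var method = typeof(SerializationHelper)
    .GetMethod("SerializeToXml")
    .MakeGenericMethod(notification.GetType());

// Note: once the optional parameter is added later in this entry, Invoke
// would need an explicit value (or Type.Missing) for it as well
var messageString = (string)method.Invoke(null, new object[] { notification });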

It would be nice if Microsoft provided an easy facility for creating generic calls dynamically, like the invalid syntax above, but it is not possible today (maybe in C# vNext? Not .NET 4.5, as it is already in RC). Sans the syntactic sugar, let’s focus on the issue for this problem, which is needing to serialize the derived type rather than the base type. To understand this, look at the following code and comments:

Code Snippet
Type type1 = typeof(T);     // type1 == NotificationBase
Type type2 = obj.GetType(); // type2 == AdvisorNotification

Notice that typeof(T) returns the base type, as that is how the derived object is passed through the system, per the following method signature:

Code Snippet
private static Guid NotifyDataChange(NotificationBase notification, string queueName)

But obj.GetType() returns the derived type, as that is the object’s “true identity” (is Clark Kent in trouble now?). So, back to our serialize method:

Code Snippet
public static string SerializeToXml<T>(T obj)
{
    var serializer = new XmlSerializer(typeof(T));

    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, obj);
        return writer.ToString();
    }
}

I could simply do the following:

Code Snippet
public static string SerializeToXml<T>(T obj)
{
    var serializer = new XmlSerializer(obj.GetType());

    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, obj);
        return writer.ToString();
    }
}

The problem here is that the code is already implemented and in use. Perhaps some clients can only work with the typeof(T) method of serialization; since I don’t know, I am going to use a safer refactoring. First, I add an optional parameter called useTypeOfT.

Code Snippet
public static string SerializeToXml<T>(T obj, bool useTypeOfT = true)

Then I separate the type from the initialization of the XmlSerializer:

Code Snippet
Type type = typeof(T);
var serializer = new XmlSerializer(type);

Finally, I add a simple branch based on useTypeOfT to either use typeof(T) or obj.GetType(), with the default being the currently used typeof(T).

Code Snippet
Type type = useTypeOfT ? typeof(T) : obj.GetType();
var serializer = new XmlSerializer(type);

Here is the entire routine with these changes.

Code Snippet
public static string SerializeToXml<T>(T obj, bool useTypeOfT = true)
{
    Type type = useTypeOfT ? typeof(T) : obj.GetType();
    var serializer = new XmlSerializer(type);

    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, obj);
        return writer.ToString();
    }
}

So, now let’s go back to our branching logic and refactor it down. First, let’s look at the original:

Code Snippet
string messageString;

if (notification is AdvisorNotification)
{
    var advisorNotification = (AdvisorNotification)notification;
    messageString = SerializationHelper.SerializeToXml(advisorNotification);
}
else if (notification is WorkflowNotification)
{
    var workflowNotification = (WorkflowNotification)notification;
    messageString = SerializationHelper.SerializeToXml(workflowNotification);
}

The first change is to supply the optional argument (useTypeOfT) and set it to false, as in the following. It is a simple, small change, so it is easy to test.

Code Snippet
string messageString;

if (notification is AdvisorNotification)
{
    var advisorNotification = (AdvisorNotification)notification;
    messageString = SerializationHelper.SerializeToXml(advisorNotification, false);
}
else if (notification is WorkflowNotification)
{
    var workflowNotification = (WorkflowNotification)notification;
    messageString = SerializationHelper.SerializeToXml(workflowNotification, false);
}

Now, since the serializer can determine the correct derived type on its own, we can get rid of recasting the base class into the child types.

Code Snippet
string messageString;

if (notification is AdvisorNotification)
{
    messageString = SerializationHelper.SerializeToXml(notification, false);
}
else if (notification is WorkflowNotification)
{
    messageString = SerializationHelper.SerializeToXml(notification, false);
}

If we look at the code above, we now notice the lines in the branches are identical, so we can refactor out the entire branching logic. This leaves us with the desired end condition, as follows:

Code Snippet
string messageString = SerializationHelper.SerializeToXml(notification, false);

The switch statement is now gone, and I can add as many derived notification types as needed to implement the full system without any code changes to my service or serializer. And since I have left the default behavior alone, I should not have any issues with clients that have implemented the library relying on typeof(T) to determine the type.
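
As a quick sanity check, a test along these lines (NUnit-style; this is my sketch, not part of the original code base) confirms the new path picks up the derived type. One caveat: the ExtensionData property inherited from IExtensibleDataObject may need to be marked [XmlIgnore] for XmlSerializer to accept these types.

Code Snippet
[Test]
public void SerializeToXml_WithUseTypeOfTFalse_UsesDerivedType()
{
    NotificationBase notification = new AdvisorNotification { Version = "1.0" };

    string xml = SerializationHelper.SerializeToXml(notification, false);

    // The root element should be the derived type, not the base
    StringAssert.Contains("AdvisorNotification", xml);
}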

Summary

Simplicity is the mark of good software. As software features grow, however, solutions often get more complex rather than simpler. This means the developer has to “take the bull by the horns” and force refactoring to simplicity.

In this blog entry, I focused on one type of refactoring to get rid of unnecessary branching logic. The logic was forced by a “flaw” in a serializer, but cascaded up to create a solution that required branching logic changes every time a new type was added to the system, rather than the simple addition of new derived domain objects. By the end, we had code that required no changes to facilitate new “queued” types.

There are likely a variety of other ways to “skin this cat”. For example, we could use polymorphism to get rid of the branch, as in the sketch below. If there were major logic branches (changes in behavior based on type, for example, rather than just new derived types), I would definitely explore this option. Another possible option is getting deeper into the covariance and contravariance features added to C# in .NET 4.0. If someone would like to link to this type of example, I would love to read it.
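
For illustration only (the constraints mentioned earlier rule it out here), the polymorphic version might look something like this:

Code Snippet
public abstract class NotificationBase
{
    // Each notification knows how to turn itself into a message string;
    // derived types override only when they need different behavior
    public virtual string ToMessageString()
    {
        return SerializationHelper.SerializeToXml(this, false);
    }
}

The call site then shrinks to notification.ToMessageString(), and new derived types require no changes to the service at all.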

Peace and Grace,
Greg

Twitter: @gbworld

Anti-Pattern: Trusting User Input taken to the Nth


Most developers are well aware that we should not trust user input. In the 90s, many learned this when they hid pricing information in hidden fields on a form and clever users changed the price of an item to a penny and successfully bought it. Others have learned this through SQL injection hacks.

But it goes beyond information that users input via a form. Anything you have to store that affects behavior in your system should be validated prior to attempting to initiate the behavior.

Just yesterday, I found an interesting form of trust that is particularly insidious. The system has a service that accepts an object from an outside system. It also takes an ID that it uses like a primary key in our system. Rather than go on typing, let me illustrate.

The system uses a variety of queues, and there are objects that need to be queued. I have faked the actual objects (to protect the innocent?), but the core concept is as follows. There is a base class that represents a queued object:

Code Snippet
public class QueuedObjectBase
{
    public Guid Id { get; set; }
    public string Description { get; set; }
}

In this object is an Id, which is used to store the object in a queue, and a Description, so the consuming team can determine what an object was designed for. I am not worried about the Description, although I do consider it a bit superfluous in this application, so I will get back to the Id in a moment.

The classes representing the objects that are queued derive from the QueuedObjectBase class, like so:

Code Snippet
public class CatalogQueuedObject : QueuedObjectBase
{
    public Guid CatalogId { get; set; }
    public string Name { get; set; }
    public int CustomerSet { get; set; }
    public string Country { get; set; }
    public string Language { get; set; }
}

public class BrowseQueuedObject : QueuedObjectBase
{
    public Guid BrowseId { get; set; }
    public string Name { get; set; }
    public string PageUrl { get; set; }
}

The actual object (as derived type) is stored in a serialized format (not a paradigm I like, but “it is what it is”). But, before it is stored, the Id and Description are pulled. This is actually accomplished in a CMS system (used like a database, which is another paradigm I dislike), but it is easier to illustrate when we look at a database.

Suppose we have a table designed like this:

Code Snippet
CREATE TABLE StorageQueue
(
    Id         uniqueidentifier PRIMARY KEY,
    Descript   varchar(1024)    NULL,
    Serialized varchar(max)     NOT NULL
)

If you want to see this visually, you end up with this:

[Figure: the StorageQueue table with Id, Descript, and Serialized columns]

When a Queue object enters the system, the following happens:

  1. The Id is pulled out as a variable to go in the Id field
  2. The Description is pulled out as a variable to go in the Descript field
  3. The entire object is serialized to go in the serialized field

The code would look something like this:

Code Snippet
private void StoreObjectInQueue(QueuedObjectBase objectToStore)
{
    string queue = GetQueueName(objectToStore);
    Guid id = objectToStore.Id;
    string description = objectToStore.Description;
    string serializedObject = SerializedObject(objectToStore);
    QueueRepository.Store(queue, id, description, serializedObject);
}

Assume the repository performs something like the following:

Code Snippet
INSERT INTO StorageQueue (Id, Descript, Serialized)
VALUES (@id, @descript, @serialized)

Seeing the problem yet? Let’s illustrate by setting up some code that is guaranteed to create a problem.

Code Snippet
  1. private void ClientSide_StoreObjects()
  2. {
  3.     Guid id = GetIdForQueue(1);
  4.     Guid browseId = Guid.NewGuid();
  5.     Guid catalogId = Guid.NewGuid();
  6.     string browseName = "";
  7.     string browsePageUrl = "";
  8.  
  9.     //First a browse queue
  10.     var browseQueuedObject = new BrowseQueuedObject
  11.     {
  12.         BrowseId = browseId,
  13.         Description = null,
  14.         Id = id,
  15.         Name = browseName,
  16.         PageUrl = browsePageUrl
  17.     };
  18.  
  19.     StoreObjectInQueue(browseQueuedObject);
  20.  
  21.     //Then a catalog queue
  22.     var catalogQueuedObject = new CatalogQueuedObject
  23.     {
  24.         CatalogId = catalogId,
  25.         Country = "us",
  26.         CustomerSet = 666,
  27.         Description = null,
  28.         Id = id,
  29.         Langauge = "en",
  30.         Name = "Id Clash"
  31.     };
  32.  
  33.     StoreObjectInQueue(catalogQueuedObject);
  34. }

Got it now? Yes, since we have given the client the opportunity to control our Id (the primary key in the database scenario), they can send us duplicate information.

In the way we currently have this set up, the second call throws an exception, so we can inform the client they cannot use that Id, as it has already been used in the queue. But what if we were using a different persistence mechanism, one that defaults to overwriting the browseQueuedObject with the catalogQueuedObject since they have the same Id?
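
To make that overwrite scenario concrete, imagine the repository were backed by something with dictionary-like upsert semantics (a pure sketch, not the actual storage code):

Code Snippet
private static readonly Dictionary<Guid, string> queueStore =
    new Dictionary<Guid, string>();

public void Store(Guid id, string serializedObject)
{
    // A second write with the same id silently wins; no exception, no warning
    queueStore[id] = serializedObject;
}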

Here is the client code that pulls what it believes is a browse queued object from the “queue”:

Code Snippet
private void ClientSide_PullBrowseObject()
{
    Guid id = GetIdForQueue(1);
    BrowseQueuedObject browseQueuedObject = (BrowseQueuedObject)GetItemFromQueue(id);
    //Work on browse queued object here
}

What happens now is a casting exception (InvalidCastException), as the actual object returned is a CatalogQueuedObject instead of a BrowseQueuedObject.
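
A defensive cast would at least surface the mismatch with a clearer error. A sketch, reusing the hypothetical GetItemFromQueue from above:

Code Snippet
private void ClientSide_PullBrowseObject()
{
    Guid id = GetIdForQueue(1);
    var item = GetItemFromQueue(id);

    // Fail with a descriptive message rather than a bare InvalidCastException
    var browseQueuedObject = item as BrowseQueuedObject;
    if (browseQueuedObject == null)
    {
        throw new InvalidOperationException(string.Format(
            "Expected a BrowseQueuedObject for id {0} but found {1}.",
            id, item.GetType().Name));
    }
    //Work on browse queued object here
}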

Now, you probably are saying something like “we can protect the user by adding a type and then validating”, but the core problem is that we have tightly coupled the client to our storage mechanism by allowing them to create the key. As this is our storage engine, we should be in control of the keys. We can pass the key back for the client to store so the two systems can communicate (one option, sketched below), or we can store their id for the object (or a similar object on their side) along with a type. We can also create multiple queues. But we should not have clients deciding how we identify objects.
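
Here is a sketch of that first option, with the storage side owning the key (method names reused from the hypothetical code above):

Code Snippet
private Guid StoreObjectInQueue(QueuedObjectBase objectToStore)
{
    string queue = GetQueueName(objectToStore);

    // The key is generated here, server-side, not taken from client input
    Guid id = Guid.NewGuid();
    string description = objectToStore.Description;
    string serializedObject = SerializedObject(objectToStore);

    QueueRepository.Store(queue, id, description, serializedObject);

    // The caller stores this key for later retrieval
    return id;
}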

When I mentioned this, the first argument I heard was “well, this is an internal application and our developers would never do that kind of thing”. This is an argument I have heard numerous times to excuse a variety of bad practices. The problem is we might eventually open this system to people outside of our organization. At first, it might be partners; this increases the likelihood of clashes, though it is still unlikely someone will purposefully try to crash the system. But one day we might want to open this service to the world, like Facebook or Twitter. At that point, you might have hackers trying to overwrite objects to see if they can crash the site. This might be just for fun, but they may also be looking for exception messages that reveal how to hack deeper into our system.

The second argument I heard was “we are using globally unique identifiers”. This sounds well and good, but while GUIDs are designed to make duplicate IDs statistically “impossible”, over time the “impossible” becomes more possible, especially with multiple parties creating IDs. And that is not the end of the story. There are people who have created their own GUID algorithms. What if someone created a bad algorithm and shared it? In addition, I have the power to create GUIDs manually, like so:

Code Snippet
var id = Guid.Parse("f25aa76a-b9a5-4fb9-ae08-0869312cf573");

Perhaps I see a GUID coming back and decide to reuse it. What would happen then?

Now, there are a lot of things wrong with storing serialized objects without some kind of type information, especially when the “data” is only useful once deserialized back into an object. It is sloppy and prone to errors. But the problem is further exacerbated when you allow the client to dictate the key for the object.

Summary

In this blog entry I focused on not trusting user input. Rather than hit the common scenarios, however, I focused on allowing a user to choose an identity for the object in our system. While this might not seem like a common enough scenario to blog about, I have seen numerous systems cobbled together in this manner lately.

There are a couple of reasons why this is a very bad practice. First, the user might duplicate key information and either overwrite data or cause an error. Second, and more important in an internal application, the client is now tightly coupled to our storage mechanism, making it impossible to change our storage implementation without consulting everyone who uses the system.

Update

After airing my concerns about allowing a user to specify the primary key for the item in our data store, the decision was made to check for a duplicate and send back an exception if the key already existed. This may sound like a good compromise, but it puts a lot of extra work on each side: checking and throwing the exception on our end, and handling exceptions on their end (possibly even transaction logic?). We can avoid all of this extra work if we simply do it right the first time.

Peace and Grace,
Greg

Twitter: @gbworld