Anti-Pattern: Trusting User Input, Taken to the Nth Degree

Most developers are well aware that we should not trust user input. In the 90s, many learned this when they hid pricing information in hidden form fields and clever users changed the price of an item to a penny and successfully bought it. Others have learned it through SQL injection attacks.

But it goes beyond information that users enter into a form. Anything you store that affects behavior in your system should be validated before you initiate that behavior.

Just yesterday, I found an interesting form of trust that is particularly insidious. The system has a service that accepts an object from an outside system, along with an ID that it then uses like a primary key in our system. Rather than go on typing, let me illustrate.

The system uses a variety of queues and there are objects that need to be queued. I have faked the actual objects (to protect the innocent?), but the core concept is like this. There is a base object that indicates a queued object.

Code Snippet

public class QueuedObjectBase
{
    public Guid Id { get; set; }
    public string Description { get; set; }
}


This object has an Id, which is used to store the object in a queue, and a Description so the consuming team can determine what an object was designed for. I am not worried about the Description, although I do consider it a bit superfluous in this application, so I will get back to the Id in a moment.

The classes representing the objects that are queued derive from the QueuedObjectBase class, like so:

Code Snippet

public class CatalogQueuedObject : QueuedObjectBase
{
    public Guid CatalogId { get; set; }
    public string Name { get; set; }
    public int CustomerSet { get; set; }
    public string Country { get; set; }
    public string Language { get; set; }
}

public class BrowseQueuedObject : QueuedObjectBase
{
    public Guid BrowseId { get; set; }
    public string Name { get; set; }
    public string PageUrl { get; set; }
}

The actual object (as the derived type) is stored in serialized form (not a paradigm I like, but “it is what it is”). But before it is stored, the Id and Description are pulled out. This is actually accomplished in a CMS (used like a database, which is another paradigm I dislike), but it is easier to illustrate with a database.

Suppose we have a table designed like this:

Code Snippet

CREATE TABLE StorageQueue
(
    Id uniqueidentifier PRIMARY KEY,
    Descript varchar(1024) NULL,
    Serialized varchar(max) NOT NULL
)

When a Queue object enters the system, the following happens:

  1. The Id is pulled out as a variable to go in the Id field
  2. The Description is pulled out as a variable to go in the Descript field
  3. The entire object is serialized to go in the Serialized field

The code would look something like this:

Code Snippet

private void StoreObjectInQueue(QueuedObjectBase objectToStore)
{
    string queue = GetQueueName(objectToStore);
    Guid id = objectToStore.Id;
    string description = objectToStore.Description;
    string serializedObject = SerializeObject(objectToStore);
    QueueRepository.Store(queue, id, description, serializedObject);
}

Assume the repository performs something like the following:

Code Snippet

INSERT INTO StorageQueue (Id, Descript, Serialized)
VALUES (@id, @descript, @serialized)

Seeing the problem yet? Let’s illustrate by setting up some code that is guaranteed to create a problem.

Code Snippet

private void ClientSide_StoreObjects()
{
    Guid id = GetIdForQueue(1);
    Guid browseId = Guid.NewGuid();
    Guid catalogId = Guid.NewGuid();
    string browseName = "";
    string browsePageUrl = "";

    //First a browse queue
    var browseQueuedObject = new BrowseQueuedObject
    {
        BrowseId = browseId,
        Description = null,
        Id = id,
        Name = browseName,
        PageUrl = browsePageUrl
    };

    StoreObjectInQueue(browseQueuedObject);

    //Then a catalog queue
    var catalogQueuedObject = new CatalogQueuedObject
    {
        CatalogId = catalogId,
        Country = "us",
        CustomerSet = 666,
        Description = null,
        Id = id,
        Language = "en",
        Name = "Id Clash"
    };

    StoreObjectInQueue(catalogQueuedObject);
}

Got it now? Yes, since we have given the client the opportunity to control our Id (the primary key in the database scenario), they can send us duplicate keys.

In the way we currently have this set up, the second call throws an exception, so we can inform the client that they cannot use that Id, as it is already in the queue. But what if we were using a different persistence mechanism, one that defaults to overwriting the BrowseQueuedObject with the CatalogQueuedObject since they share the same Id?
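To make that overwrite scenario concrete, here is a minimal sketch that uses an in-memory Dictionary as a stand-in for such a persistence mechanism. The class and method names here are mine, not from the actual system:

```csharp
using System;
using System.Collections.Generic;

public static class OverwriteDemo
{
    // Stand-in for a persistence mechanism that upserts on duplicate keys.
    private static readonly Dictionary<Guid, string> Store = new Dictionary<Guid, string>();

    public static void Put(Guid id, string serialized)
    {
        // No duplicate check: a second write with the same id silently replaces the first.
        Store[id] = serialized;
    }

    public static string Get(Guid id)
    {
        return Store[id];
    }

    public static void Main()
    {
        var id = Guid.NewGuid();
        Put(id, "serialized BrowseQueuedObject");
        Put(id, "serialized CatalogQueuedObject");
        Console.WriteLine(Get(id)); // the browse object is silently gone
    }
}
```

No exception, no warning; the first object simply disappears.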

Here is the client code to pull a browse queued object from the queue.

Code Snippet

private void ClientSide_PullBrowseObject()
{
    Guid id = GetIdForQueue(1);
    BrowseQueuedObject browseQueuedObject = (BrowseQueuedObject)GetItemFromQueue(id);
    //Work on browse queued object here
}

What happens now is a casting exception (an InvalidCastException), as the actual object returned is a CatalogQueuedObject instead of a BrowseQueuedObject.
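One way to fail more gracefully on the read side (though it does nothing for the underlying problem) is to pattern-match instead of hard-casting. A minimal sketch, with illustrative names of my own and the classes reduced to what the example needs:

```csharp
using System;

public class QueuedObjectBase
{
    public Guid Id { get; set; }
    public string Description { get; set; }
}

public class BrowseQueuedObject : QueuedObjectBase
{
    public Guid BrowseId { get; set; }
}

public class CatalogQueuedObject : QueuedObjectBase
{
    public Guid CatalogId { get; set; }
}

public static class SafeCastDemo
{
    // Pattern matching avoids the InvalidCastException a hard cast throws.
    public static string DescribeBrowse(QueuedObjectBase fromQueue)
    {
        if (fromQueue is BrowseQueuedObject browse)
        {
            return "Got browse object " + browse.BrowseId;
        }
        return "Unexpected type: " + fromQueue.GetType().Name;
    }

    public static void Main()
    {
        // Simulate the queue handing back the wrong type after an overwrite.
        QueuedObjectBase item = new CatalogQueuedObject { Id = Guid.NewGuid() };
        Console.WriteLine(DescribeBrowse(item)); // prints "Unexpected type: CatalogQueuedObject"
    }
}
```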

Now, you are probably saying something like “we can protect the user by adding a type and then validating”, but the core problem is that we have tightly coupled the client to our storage mechanism by allowing them to create the key. Since this is our storage engine, we should be in control of the keys. We can pass the key back for the client to store so the two systems can communicate (one option), or we can store their id for the object (or a similar object on their side) along with a type. We could also create multiple queues. But we should not let the client decide how we identify the object.
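Here is a rough sketch of the first option, where the service mints the key and hands it back to the client as a handle. The service name and members are hypothetical, and the Descript column is omitted for brevity:

```csharp
using System;
using System.Collections.Generic;

public static class KeyedQueueService
{
    // In-memory stand-in for the real queue storage.
    private static readonly Dictionary<Guid, string> Store = new Dictionary<Guid, string>();

    // The service mints the key; the caller just receives it as a handle.
    public static Guid StoreObject(string serialized)
    {
        Guid id = Guid.NewGuid();  // our key, under our control
        Store.Add(id, serialized); // Add would throw on a duplicate, but we never mint one
        return id;                 // client keeps this for later retrieval
    }

    public static void Main()
    {
        Guid first = KeyedQueueService.StoreObject("serialized BrowseQueuedObject");
        Guid second = KeyedQueueService.StoreObject("serialized CatalogQueuedObject");
        Console.WriteLine(first != second); // clients can no longer collide: True
    }
}
```

The client can still keep its own id-to-handle mapping on its side; the point is that our key stays ours.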

When I mentioned this, the first argument I heard was “well, this is an internal application and our developers would never do that kind of thing”. This is an argument I have heard numerous times, for a variety of bad practices. The problem is that we might eventually open this system to people outside our organization. At first, it might be partners; that increases the likelihood of clashes, although a partner is not likely to purposely try to crash the system. But one day we might want to open this service to the world, like Facebook or Twitter. At that point, you might have hackers trying to overwrite objects to see if they can crash the site. That might just be for fun, but they may also be mining exception messages to determine how to hack deeper into our system.

The second argument I heard was “we are using globally unique identifiers”. This sounds well and good, but while GUIDs are designed to make ending up with the same ID statistically “impossible”, over time the “impossible” becomes more possible, especially with multiple parties creating IDs. And that is not the end of the story. People have created their own GUID algorithms; what if someone creates a bad algorithm and shares it? In addition, I have the power to create GUIDs manually, like so:

Code Snippet

var id = Guid.Parse("f25aa76a-b9a5-4fb9-ae08-0869312cf573");

Perhaps I see a GUID coming back and decide to reuse it? What would happen then?
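For the curious, here is a tiny demonstration that a GUID reconstructed from its string form is equal to the original, which is all a client needs to collide with an existing key:

```csharp
using System;

public static class GuidReuseDemo
{
    public static void Main()
    {
        // A GUID the client happened to see earlier...
        var original = Guid.Parse("f25aa76a-b9a5-4fb9-ae08-0869312cf573");

        // ...can be reproduced exactly from its string form.
        var reused = Guid.Parse("f25aa76a-b9a5-4fb9-ae08-0869312cf573");

        // The two compare equal, so using the second as a key collides with the first.
        Console.WriteLine(original == reused); // True
    }
}
```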

Now, there is a lot wrong with storing serialized objects without some kind of type information, especially when the “data” is only useful once deserialized back into an object. It is sloppy and error prone. But it is further exacerbated when you allow the client to dictate the key for the object.
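A minimal sketch of storing type information alongside the payload and validating it on the way out; the helper names are mine, not from the system under discussion:

```csharp
using System;

public class QueuedObjectBase
{
    public Guid Id { get; set; }
}

public class BrowseQueuedObject : QueuedObjectBase
{
    public Guid BrowseId { get; set; }
}

public static class TypedRecordDemo
{
    // At write time, record the concrete type next to the serialized payload...
    public static string CaptureTypeName(QueuedObjectBase objectToStore)
    {
        return objectToStore.GetType().AssemblyQualifiedName;
    }

    // ...and at read time, verify it before deserializing and casting.
    public static bool IsStoredAs<T>(string storedTypeName)
    {
        return Type.GetType(storedTypeName) == typeof(T);
    }

    public static void Main()
    {
        string stored = CaptureTypeName(new BrowseQueuedObject { Id = Guid.NewGuid() });
        Console.WriteLine(TypedRecordDemo.IsStoredAs<BrowseQueuedObject>(stored)); // True
        Console.WriteLine(TypedRecordDemo.IsStoredAs<QueuedObjectBase>(stored));   // False
    }
}
```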


In this blog entry I focused on not trusting user input. Rather than hit the common scenarios, however, I focused on allowing a user to choose an identity for the object in our system. While this might not seem like a common enough scenario to blog about, I have seen numerous systems cobbled together in this manner lately.

There are a couple of reasons why this is a very bad practice. First, the user might duplicate the key and either overwrite data or cause an error. Second, and more important in an internal application, the client is now tightly coupled to our storage mechanism, making it impossible to change our storage implementation without consulting everyone who uses the system.


After airing my concerns about allowing a user to specify the primary key for the item in our data store, the decision was made to check for a duplicate and send back an exception if the key already existed. This may sound like a good compromise, but it puts a lot of extra work on each side: checking and throwing the exception on our end, and handling exceptions on their end (possibly even transaction logic?). We could avoid all of this extra work if we simply did it right the first time.
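For completeness, here is roughly what that compromise looks like, again with an in-memory Dictionary standing in for the real store and illustrative names of my own. Note the extra code both sides now carry:

```csharp
using System;
using System.Collections.Generic;

public static class DuplicateCheckDemo
{
    private static readonly Dictionary<Guid, string> Store = new Dictionary<Guid, string>();

    // The "compromise": reject a client-supplied key that is already in use.
    public static void Put(Guid id, string serialized)
    {
        if (Store.ContainsKey(id))
            throw new InvalidOperationException("Id " + id + " has already been used in this queue.");
        Store.Add(id, serialized);
    }

    public static void Main()
    {
        var id = Guid.NewGuid();
        Put(id, "first object");
        try
        {
            Put(id, "second object"); // the duplicate key is rejected...
        }
        catch (InvalidOperationException)
        {
            // ...and now every caller needs handling code like this.
            Console.WriteLine("duplicate rejected");
        }
    }
}
```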

Peace and Grace,

Twitter: @gbworld

