Microservices in .NET part 2: Silver Bullets and Free Lunches?

I have spent the better part of this week digging into microservices, and I love the idea. Here are some benefits I see that can be realized by using a microservices approach:

  1. The granularity level allows developers to stay in a single context while solving a problem. This singularity of focus makes it much easier to dig into the details of a specific object or set of objects. In many cases, it will quickly expose poor planning in an organization and provide a rationale for fixing the process. As an example, a service with a lot of churn is probably one that was not planned out well (I am not talking about finding additional uses, but rather having to rebuild contracts and re-architect the service on a regular basis).
  2. The services are simple, making it easy to maintain the individual components.
  3. The methodology forces a good separation of concerns.
  4. You can use the best tool for the job rather than stick to a single platform, programming language, paradigm, etc. This is a double-edged sword, as I will uncover a bit later.
  5. Isolated solution problems can easily be fixed without much impact. If you find your employee microservice has an issue, you can fix it without deploying the entire solution.
  6. Working with multiple services enables the use of concepts like Continuous Integration (CI) and Continuous Delivery (CD). This is also double-edged, as you almost have to go to a full-blown CD implementation to use microservices architectures. I will hit this later, as well.
  7. You can get multiple teams working independently of each other. This was always possible, of course, as I have pointed out in my Core As Application blog entries (one here), if you will take the time to plan out your contracts and domain models first. (NOTE: In 2010, I was told “you cannot design contracts first, as you don’t know all of the requirements up front”. By 2011, I proved this wrong by delivering using a contract-first approach, both ahead of schedule and under budget – a bit of planning goes a long way).
  8. Systems are loosely coupled and highly cohesive.

This is just a short list of the benefits. The problem I see is that everyone is focusing on the benefits as if we have finally found the silver bullet (do you have werewolves in your organization?) and gained a free lunch. This article focuses on some of the downsides to microservices.


As you move to smaller and smaller services, there are many more parts that have to be deployed. To keep the solutions that use the microservices up and running, you have to be able to push the services out to the correct location (URI?) so they can be contacted properly. If you go to the nth degree, you could conceivably have tens, if not hundreds, of small services running in an Enterprise.

As each service is meant to be autonomous, you have to come up with a deployment strategy for each. You also have to plan for high availability and failover, and there has to be a solid monitoring and instrumentation strategy in place. In short, you need all of the pieces of a good API Management strategy, and you need them in place BEFORE you start implementing microservices. And I have not even started on how everything is wired together, or on load balancing your solutions. On the plus side, once you solve this problem, you can tune each service independently.

There is a burden on the Dev side that needs to be tackled up front, as well. You need to start thinking about the requirements for monitoring, tracing and instrumenting the code base and ensure it is part of the template for every service. And you have to plan for failure, which is another topic.

As a final point on this topic, your dev and ops team(s) must be proficient in the combined concept of DevOps to have this be a success. Developers can no longer pitch items over to Ops with a deployment document. They have to be involved in joint discussions and help come up with plans for the individual services, as well as the bigger solutions.

Planning for Failure and Avoiding Failure

Services will fail. Looking at the plethora of articles on microservices, it is suggested you use patterns like circuit breakers (stop hitting a failed service after a few attempts) and bulkheads (when enough “compartments” are “under water”, seal the rest of the solution off from the failure point). These are fine avoidance strategies, but what if the failing service is a critical component of the solution?
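To make the circuit-breaker idea concrete, here is a minimal sketch in Python (the concept translates directly to .NET; the class, thresholds, and names are my own illustration, not any particular library’s API):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    then fail fast until a cool-down period elapses (half-open)."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            self.failures = 0  # a success closes the circuit again
            return result
```

The key point is that after the threshold is reached, callers stop paying the timeout cost of a dead dependency.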

Not mentioned in the articles I have read is a means of managing services after failure. Re-deploying is an option, and you can make redeployment easier using quickly set-up virtual environments and/or containers, but what if reaching that portion of the network is the point of failure? I would love to hear comments on this next idea: why not look at some form of registry for the services (part of API Management, similar to UDDI, etc.), or a master finder service that exists in various locations and that all applications are aware of? Another idea would be to include backup service locations as part of the hypermedia specification. But either of these solutions further exacerbates the reliance on DevOps, creating even more need for planning solutions and monitoring released solutions.

I don’t see microservices working well without a good CI/CD strategy in place and some form of API management. The more I look into microservices, the more I see the need for a system that can discover its various points on the fly (which leads me back to the ideas of using a finder service or utilizing hypermedia to inform the solutions using microservices where other nodes exist).

Contracts and Versioning

When you develop applications as a single Visual Studio solution (thinking in terms of projects and not products?), you have the ability to change contracts as needed. After all, you have all of the code sitting in front of you, right? When you switch to an internal focus on services as released products, you can’t switch out contracts as easily. You have to come up with a versioning strategy.

I was in a conversation a few weeks ago where we discussed versioning. It was easy to see how URI changes for REST services required versioning, but one person disagreed when I stated that changes to the objects you expose should, in many instances, also be a reason for versioning. The answer was “we are using JSON, so it will not break the clients if you change the objects”. I think this topic deserves a sidebar.

While it is true that JSON allows a lot of leeway in reorganizing objects without physically breaking the client(s) using the service, there is also a concept of logical breakage. Adding a new property is generally less of a problem, unless that new element is critical to the microservice. Changing a property may also not cause breakage up front. As an example, suppose you change a field from an int to a long to plan for the future. As long as the values do not exceed the greatest value for an int, there is no breakage on a client using an int in its version of the object. The issue is that it may be months or even years before a client breaks, and finding and fixing this particular breakage could be extremely difficult and lead to long down times.
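A quick sketch of that logical breakage: the service widens a field to 64 bits while a client still assumes the 32-bit contract. The field name `order_total` and the parsing helper are hypothetical, purely for illustration (Python here, but the same applies to a .NET int vs. long):

```python
import json

INT32_MAX = 2**31 - 1

def parse_order_total(payload: str) -> int:
    """Client-side parsing that still assumes the old 32-bit contract."""
    value = json.loads(payload)["order_total"]
    if not (-INT32_MAX - 1 <= value <= INT32_MAX):
        # Nothing "broke" at deployment time; the failure waits for the
        # first value that no longer fits the client's int.
        raise OverflowError("order_total exceeds the client's 32-bit range")
    return value
```

Deserialization succeeds for months, until the first oversized value arrives.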

There are going to be times when contract changes are necessary. In these cases, you will have to plan out the final end game, which will include both the client(s) and service(s) utilizing the new contract, as well as transitional architectures to get to the end game without introducing a “big bang” approach (which microservices are said to help us avoid). In short, you have to treat microservice changes the same way you approach changes on an external API (as a product). Here is a simple path for a minor change.

  1. Add a second version of the contract to the microservice and deploy (do not remove the earlier version at this time)
  2. Inform all service users the old contract is set to deprecate and create a reasonable schedule in conjunction with the consumers of the microservice
  3. Update the clients to use the new contract
  4. When all clients have updated, retire the old version of the contract
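The steps above assume both contract versions can run side by side for a while. A toy sketch of that coexistence (the routes and field names are invented for illustration):

```python
# Both contract versions stay registered until every consumer has moved;
# the route prefix carries the version.
def get_employee_v1(emp_id):
    return {"id": emp_id, "name": "Jane Doe"}

def get_employee_v2(emp_id):
    # v2 splits the name field; v1 keeps serving old clients unchanged.
    return {"id": emp_id, "firstName": "Jane", "lastName": "Doe"}

ROUTES = {
    "/v1/employees": get_employee_v1,   # deprecated, on a retirement schedule
    "/v2/employees": get_employee_v2,
}

def dispatch(path, emp_id):
    return ROUTES[path](emp_id)
```

Retiring v1 then means deleting one route entry, once the deprecation schedule has run its course.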

This is standard operating procedure for external APIs, but not something most people think about much when the APIs are internal.

I am going to go back to a point I have made before. Planning is critical when working with small products like microservices. To avoid regular contract breakages, your architects and Subject Matter Experts (SMEs) need to make sure the big picture is outlined before heading down this path. And the plan has to be conveyed to the development teams, especially the leads. Development should be focused on their service when building, but there has to be someone minding the shop to ensure the contracts developed are not too restrictive based on the business needs for the solutions created from the services.

Duplication of Efforts

In theory, this will not happen with microservices, as we have individual services focusing on single concerns. And if we can imagine a world where every single class had a service (microservices to the extreme?), we can envision this, at least in theory. But should we break down to that granular a level? I want to answer that question first.

In Martin Fowler’s article, he talks about the Domain Driven Design (DDD) concept of a bounded context. A bounded context is a grouping of the required state and behavior for a particular domain. Fowler uses the following diagram to show two bounded contexts.

In the diagram above, you see some duplication between the bounded contexts in the form of customer and product. In a microservices architecture, you could conceivably move customer and product into their own services and avoid the duplication, but moving a concept out simply to avoid duplication is not the best motivation in all cases. If you can also make customer or product a business capability, I would wholeheartedly support this approach, but that is not always the case (another sidebar).

When would you not want to separate out customer and product? In short, when the domain concept of these objects differs. In the sales context, a customer contains sales-specific information, including terms of sale (net 60 days?) and other items that may not exist in a support context. If we are talking about a company that ships products (as opposed to a service-only company), we can add other contexts, like shipping and warehousing, that have radically different customer views. In the warehouse, a customer is completely unimportant, as the focus is on pulling orders. From a shipping standpoint, a customer is a name, a shipping address and a phone number; no need for any additional information. A customer microservice either spits out a complete object, allowing the services to filter (not a great idea from a security standpoint), or it provides a separate interface for each of the clients (duplication of effort, but within a single service rather than across multiple consumers and/or services, so it does not avoid duplication). A product can also be radically different in each of these contexts.
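The two context-specific views of a customer described above might look like this (a sketch in Python; the field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class SalesCustomer:
    customer_id: int
    name: str
    payment_terms: str        # e.g. "net 60" -- meaningless to the warehouse

@dataclass
class ShippingCustomer:
    name: str
    shipping_address: str
    phone: str                # so the carrier can call if there are issues

def to_shipping_view(customer: SalesCustomer, address: str, phone: str) -> ShippingCustomer:
    # A customer service exposing this narrow projection avoids handing
    # the full sales record (terms of sale, etc.) to every consumer.
    return ShippingCustomer(name=customer.name, shipping_address=address, phone=phone)
```

The shipping context never sees payment terms at all, which addresses the security concern about letting consumers filter a complete object themselves.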

My advice for starting out is to begin with bigger contexts and then decompose as needed. The original “microservice” can act as an aggregator as you move to more granular approaches. Here is an example of transitional states from the contexts above:

  1. Discovery of duplication in the sales and support microservices leads to a decision that customer and product should be separate services
  2. New customer and product services created
  3. Sales and support services altered to use the new product and customer services
  4. New version of sales and support services created to avoid serving product and customer information
  5. Clients altered to use the new services as well as the sales and support services

This is one idea for migration, as we will see in the next section.

Where do we Aggregate?

If we go back to the bounded context discussion in the last section, we see the need to aggregate. The question is: where do we aggregate? You need to come up with a strategy for handling aggregation of information. I am still grokking this, so I am not offering a definitive solution at this time. Here are some options I can see.

Option 1 – Client: In a full microservices architecture, the client may be responsible for all aggregation. But what if the user’s client is a mobile application? The chattiness of a microservice architecture is hard enough to control across your internal multi-gigabit network infrastructure. Moving it out onto the Internet and cell networks compounds the latency. I am not saying this is a bad option in all cases, but if you opt for this approach, more focus on latency is required from your mobile development team. On a positive note, if the client application can handle single-service failures gracefully, you reduce the likelihood of a single point of failure.

Option 2 – Final service boundary: In this approach, the outermost service contacts the many microservices it requires to get work done and aggregates for the client. I find this more appealing, in general, for mobile clients. And it reduces the number of “proxies” required for web, simplifying the user interface client. As a negative, it creates a single point of failure that has to be handled.

Option 3 – Aggregation of dependencies: In this approach, the higher-level service (closer to the client) aggregates what it requires to work for the client. At first, I liked this option best, as it fits a SOA approach, but the more I read about microservices, the more I see it as a potential combination of the bad points of the first two options: you introduce numerous points of failure at the aggregate level while still potentially creating multiple points of latency in your client applications. I still think this might be something we can think through, so I am providing it.
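Option 2 might be sketched as follows (the service functions and the degrade-to-empty policy are my own illustration; in practice each `fetch_*` would be a network call to a separate microservice):

```python
def fetch_customer(customer_id):
    return {"id": customer_id, "name": "Jane Doe"}

def fetch_orders(customer_id):
    return [{"order_id": 1, "total": 42}]

def customer_dashboard(customer_id):
    """Edge service fans out to its dependencies and returns one
    aggregate, so the client pays one round trip instead of several."""
    result = {"customer": fetch_customer(customer_id)}
    try:
        result["orders"] = fetch_orders(customer_id)
    except Exception:
        # Degrade gracefully rather than failing the whole aggregate.
        result["orders"] = []
    return result
```

Note the try/except: since this edge service is a single point of failure, deciding per dependency whether to degrade or fail outright is part of the design.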

If you can think of other options, feel free to add them in the comments.

Testing One, Two, Three

I won’t spend a lot of time on testability, but the more moving parts you have to test, the harder it is. To understand why, create an application fully covered in unit tests at every level, but developed by different teams, and then integrate. The need for integration testing becomes very clear at that moment. And what if you are integrating not just multiple libraries, but multiple discrete, and very small, services? A lot of discipline is required.

I find the only reasonable answer is to have a full suite of unit tests and integration tests, as well as other forms of testing. To keep with the idea of Continuous Integration, only the smaller tests (unit tests) will be fired off with each CI build, but there will be a step in the CD cycle that exercises the full suite.

There is also a discipline change that has to occur (perhaps you do this already, but I find most people DON’T): you must now treat every defect as something that requires a test. You write the test before the fix to verify the bug. If you can’t verify the bug, keep writing tests until you can. Solving something that is not verified is really “not solving” the problem. You may luck out… but then again, you may not.
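A tiny illustration of verifying a bug before fixing it (the rounding defect here is hypothetical):

```python
def round_to_cents_buggy(value: float) -> float:
    # The defect: integer truncation instead of rounding.
    return int(value * 100) / 100

def round_to_cents_fixed(value: float) -> float:
    # The fix: round to 2 places, nudging past binary representation error.
    return round(value + 1e-9, 2)

def test_verifies_the_bug():
    # Written BEFORE the fix: it demonstrates the defect against the old code...
    assert round_to_cents_buggy(2.675) != 2.68
    # ...and then proves the fix against the new code.
    assert round_to_cents_fixed(2.675) == 2.68
```

The test pins down the defective behavior first; only then does the fix have something objective to satisfy.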


There are no werewolves, so there are no silver bullets. There is no such thing as a free lunch. Don’t run around with hammers looking for nails. The point here is that microservices are one approach, but don’t assume the approach comes without costs.

As a person who has focused on external APIs for various companies (start ups all the way to Fortune 50 companies), I love the idea of taking the same concepts inside the Enterprise. I am also intrigued by the idea of introducing more granularity into solutions, as it “forces” the separation of concerns (something I find so many development shops are bad at). But I also see some potential gotchas when you go to Microservices.

Here are a few suggestions I would have at this point in time:

  1. Plan out your microservices strategy and architecture as if you were exposing every service to the public. Thinking this way pushes you to figure out deployment and versioning as a product rather than a component in a system.
  2. Think about solving issues up front. Figure out how you are going to monitor your plethora of services to find problems before they become huge issues (downtime outside of SLAs, etc). Put together a disaster recovery plan, as well as a plan to failover when you can’t bring a service back up on a particular node.
  3. In a similar vein, plan out your deployment strategy and API management up front. If you are not into CI and CD, plan to get there, as manually pushing out microservices is a recipe for disaster.
  4. Create a template for your microservices that includes any pieces needed for logging, monitoring, tracing, etc. Get every developer in the organization to use the template when creating new microservices. These plumbing issues should not require solving again and again.
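Suggestion 4 can start as small as a shared decorator every service is built from. A Python sketch (the logger setup and the correlation-id scheme are an assumption about what your template might standardize, not a prescription):

```python
import functools, logging, time, uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("service-template")

def instrumented(operation):
    """Template plumbing every microservice gets for free:
    a correlation id, timing, and structured success/failure logs."""
    @functools.wraps(operation)
    def wrapper(*args, **kwargs):
        correlation_id = str(uuid.uuid4())
        started = time.monotonic()
        try:
            result = operation(*args, **kwargs)
            log.info("%s ok id=%s ms=%.1f", operation.__name__,
                     correlation_id, (time.monotonic() - started) * 1000)
            return result
        except Exception:
            log.exception("%s failed id=%s", operation.__name__, correlation_id)
            raise
    return wrapper

@instrumented
def get_employee(emp_id):
    return {"id": emp_id}
```

Once this lives in the template, no service ships without monitoring hooks, and the logs share one shape across the fleet.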

Peace and Grace,

Twitter: @gbworld


SOA Lessons: Don’t Put All Your Eggs in one Basket

When we think of the term “don’t put all of your eggs in one basket” in terms of IT, we more often think of not relying on a single vendor solution for everything. This is certainly a valid way to look at the term, but in SOA it also means we have to look at services and not just layers.

To put this in perspective, I am currently working on an external API solution for a large E-Comm company. In the current External API, which was released before I moved into the group, all of the services are found in a single “wrapper” project. On the plus side, it makes it easier to configure the services, as all of the configuration is found in a single web.config file. There are many negatives, however, which I will detail in this post.

Single Point of Failure

This is the largest negative in production. Recently, the solution was released and we discovered a vendor was overloading one of the services. The issue was discovered during deployment and manifested itself as failures in a wide variety of services. The reason for the systemic failure is that the worker process for the failing service was the same worker process for all of the services. When the process was brought down, all suffered.

Fortunately, we knew which service was live and most likely to be the culprit, so we were able to alleviate the problem. If that were not the case, we would have had to pore through logs to determine the offender, which could have taken hours given the current state of affairs. That is unacceptable.

If you take this scenario a bit further and assume a coding error that kills the worker process rather than a load issue, imagine the troubleshooting. Especially if the exceptional case causing the failure was intermittent and due to the lack of a patch on the server in question.

One good reason to separate out the service endpoints is so you can move the services into their own process space. This is not mandatory in normal working conditions, but if there is an issue, and instrumentation and monitoring are not catching the culprit, separating the service into its own process accomplishes two things:

  1. Helps identify the point of failure
  2. Protects other services from failure

Both of the above are worthwhile reasons to separate each service into its own process space as a rule, and then making exceptions based on needs.

Single Point of Deployment

This negative is similar to the last negative, but focuses more on what it takes to fix a service that is in error. If all of the services exist in a single project, then all must be deployed at the same time, even if only one service has changes.

I think anyone reading this can see why this is a negative, but it is more insidious than just moving pieces you should not have to move. Every time a software project is deployed, there is potential for error. There are a variety of reasons why this is so, but we have all experienced a deployment that created a buggy condition. Often the bug shows up in code we did not even change, due to circumstances that have nothing to do with the code.

If you deploy a service that has no updates, and cause bugs due to mistakes in deployment, you have done a great disservice. Worse, the disservice was completely avoidable if you had simply segregated out the service so it did not require deployment with the other services.

Cross Contamination

When numerous service endpoints are added to a single WCF project, they will have different contracts, but often end up with the same binding rules and behaviors. This is normal and to be expected. Eventually, however, one service will be found to require more time to complete its work, or a larger payload (request, response, or both), and edits will have to be made to the configuration file.

Changes that are specific to the contract and service endpoint are unlikely to have consequences outside of the service in question. But since at least parts of the configuration are shared, changes can impact the “sharing” services in a negative way. And, since we deploy at the same time, but often only test the service being updated, these negative consequences are often discovered by our consumers rather than our test team.

The Point

The point of all this is that we can easily avoid the negatives explained in this post by making sure every UI project (in this case, every WCF service) is its own project. Sharing a single solution is not an issue, although you will also want to make solutions for the individual services as well. NOTE: Solutions are points of organization, and a single project can exist in any number of solutions.

Separating out the services has the small negative of forcing you to make changes to multiple projects for “universal” binding and behavioral standards, but it is outweighed by the negatives of placing all of the services in a single project.

Peace and Grace,

Twitter: @gbworld

The Importance of Domain Models

Disclaimer: This post focuses on theory, so there are some bits that are oversimplified.

My current assignment is consulting for a large eCommerce company on their external APIs. For those not well versed in this concept, the short story is I am focused on the architecture of web services exposed to the outside world. One of my primary tasks is determining the project layout of the service solutions.

One of the newer services was set up as a pass through from an internal service. The objects exposed by the underlying service were merely bubbled up through the external service and exposed “as is” to the outside world. This post deals with why this is a mistake and how a service should be modeled. While my focus is external services, the methodology presented should work as a reference model for many types of development.

Domain Models

First, we need a working definition of a domain. Wikipedia defines a domain as follows:

Domain: A sphere of knowledge, influence, or activity. The subject area to which the user applies a program is the domain of the software.

In an eCommerce company, we have many domains. Some deal with knowledge of the product, others with finance and yet others with shipping and receiving. Each of these domains has different needs. As an example, let’s look at a customer as he relates to various applications in an eCommerce scenario:

Ordering – Needs full customer information, including address(es), phone number(s), and payment information. Not all of this information is mandatory at every step of the process, however, so we might represent the customer in a variety of domain models. As a real-world example, it is unlikely we will need the customer’s payment information anywhere but the payment process.

Warehouse – There is no need for customer information to pick items. We do need the order represented so the shippers can identify it, but the customer is optional.

Shipping – A customer, as far as shipping is concerned, is a name, a shipping address and a phone number (so the carrier can call if there are issues).

Reading the examples above, it should be clear that a domain does not equal an application. In the space of browsing and ordering, for example, the application likely crosses multiple domains. This is not important for this article, and we are not going to create a domain model. Now that you understand the domain and the domain model, let’s focus on the core issue for this exercise.

Core Issues

The core of this article is what happens when an internal service is exposed to the outside world without creating a separate domain model. The model looks like the following diagram:


On the surface, this does not seem like such a bad idea. After all, why should I not expose the same objects to the outside world? The main reasons are:

  1. The outside clients are not insulated from changes to the internal model. In more OO terms, the inner model is not encapsulated and there is no abstraction. Encapsulation is more important here than abstraction, of course.
  2. Internal services often contain wording that does not apply to the outside world, or “companyisms”.
  3. Internal needs are different from external needs, and there are often internal “secrets” that should be kept internal only.

Insulation (Encapsulation)

When you design a service that passes through objects from an internal service, there is no insulation from change. Every time you change the objects in the internal service, the external service must be changed to match the internal service. Done properly, the service is versioned and the original service is properly deprecated so it is kept alive until external clients can switch their programming to fit the new service.

This does not sound like such a big deal over all, but consider most internal services are not thought through with the same rigor as external services. This means there are generally more changes to an internal service than the outside world will tolerate.

By adding a domain model to the external service, you can protect the external client from excessive versioning. As an example, if the only changes on the internal service are name changes, you can simply adjust the mapping to your domain model and release a new version on top of the old service. The internal service team need not keep an old version of the service alive, as the external client consumes the external domain model (not quite true, but that is a topic for another post).


Companyisms

Every company I have consulted for has some type of verbiage unique to the organization. In eCommerce, for example, I often see {CompanyName}Price for the standard retail price for the company, which is generally lower than the manufacturer’s standard retail price. In more extreme cases, company acronyms become part of the internal service model.

Regardless of the nature of the companyism, the outside consumer should not have to understand the unique vernacular of your company to consume the service. By creating a domain model focused on external clients, you eliminate companyisms from the objects served.
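As a toy example, a simple mapper can strip companyisms at the external boundary. Every name here (`AcmePrice`, `SKU_WHS`, and friends) is invented for illustration:

```python
# Hypothetical internal object full of companyisms, mapped to a
# neutral external domain model.
INTERNAL_PRODUCT = {"AcmePrice": 19.99, "MfrListPrice": 24.99, "SKU_WHS": "A-1001"}

EXTERNAL_FIELD_MAP = {
    "AcmePrice": "retailPrice",     # our price, in plain language
    "MfrListPrice": "listPrice",
    "SKU_WHS": "sku",
}

def to_external(internal: dict) -> dict:
    # Fields absent from the map simply never reach the outside world.
    return {EXTERNAL_FIELD_MAP[k]: v
            for k, v in internal.items() if k in EXTERNAL_FIELD_MAP}
```

The same filter-by-map trick also handles the “secrets” problem in the next section: anything not explicitly mapped is dropped.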

Company “Secrets”

Not all of the items served by the internal service are truly company secrets, but there are many things an external consumer need not know. These items should “disappear” from the domain model used by the external service.

In an internal cart service, you may pass back private consumer data so applications can expose this information to employees who need to know. If you expose this same service to the outside world, these “secrets” are also available. If, instead, you create a domain model, you can consciously remove these items from your model and avoid exposing them to the outside world.

Pattern for an External Service

Now let’s look at a pattern for an external service. The pattern is modeled using the Core as Application methodology, as shown in the diagram below:


The idea here is that the business logic is the application, and Persistence (where you save data) and Experience (where the user reads or manipulates data) are not part of the application, per se, but extensions so work can get done. As with all analogies, there are some flaws in this view, but it is a very useful paradigm for developing high-quality (testable?) applications. I will cover Core as Application, and its benefits, in another post.

As many are not familiar with the Core As Application model above, I will take you through an “n-tier” set up of the diagram. A couple of notes before going this direction.

  • The UI tier has been renamed Experience, as in the Core as Application model. This is done to illustrate that our users are not necessarily human, as with a service. What we are focused on in this layer is the experience of the user, human or computer. When we focus on interface, we often see this as strictly I/O, which is a mistake.
  • The data tier has been renamed Persistence, as in the Core as Application model. The reasoning is that the application should not change based on where you persist your data. The only thing that should change when you switch from a file-based persistence method to a database is your access method.
  • The business tier has been renamed Core, as in the Core as Application model, because this tier contains the application. Switching out the experience layer does not change the application, nor does switching to another persistence method. Of the two separations, separating persistence (database) from core is generally easier than separating experience.

Now that we have the “rules”, here is a basic service diagram, with a rather simple implementation.


Here is an explanation of the parts of the diagram. The items in blue are items created via attributes in WCF and the Service Proxy, in green, is “free”, as it is created by .NET when you create a service reference.

  • Service Proxy – A proxy for a service created when the service is queried. The proxy comes for free in .NET, as it is created when you include a service reference or use the command line tool. NOTE that this assumes a SOAP service in the .NET Framework 4.0 or earlier.
  • Proxy Façade – The façade is utilized as a bit of insulation against change in the methods of the underlying service. Adding a façade makes it easier to create an interface, which both a) makes it easier to test and b) allows for substitution of future versions of the service. When added to the factory and mapper, you end up with a very flexible model for change.
  • Mapper – The mapper’s job is to translate from the service model to the domain model. While this functionality can be encapsulated in the façade, keeping it separate makes it easier to unit test the façade. Using generics, the mapper can allow for flexible input while keeping the domain model objects as its primary output. This allows for easy change while adhering to a single interface.
  • Domain model – The model for the business logic, if any, contained in the external service.
  • Core – In a mapping type of service, the core is a pass-through only. In an aggregation type of service, the core will “marry” output from multiple services. I find it unwise to have your external service serve as an aggregate. While it adds “weight” to the total solution, it is better to have another internal service handle the aggregation and have the external service focus on simple mapping of objects. This moves the complexity away from the external client and reduces the likelihood of erroring out your consumers.
  • Data Contract – Standard WCF-adorned objects. Per Microsoft’s suggestion, I use the IDataContractSurrogate interface to further separate the domain model from the representation to the outside world. The contract can be placed in the service layer, rather than core.

    NOTE: I am currently working on using the MVC pattern on the service, which may be more easily accomplished by bubbling up the data contract to this project. I will post findings later.

  • Operation Contract and Service Contract – These describe the behavior of the service and are a standard part of WCF. I will generally separate these into a separate assembly, although it is not mandatory, as changes here will alter the service.
  • The service – A standard WCF service. As a very simple “user interface” (experience piece), the WCF service contains very little code. This may be altered slightly if the service project contains the contracts.
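The proxy → façade → mapper chain above can be sketched as follows (Python stand-ins for the WCF pieces; the internal field names and the dropped `AcmeTier` field are hypothetical):

```python
class ServiceProxy:
    """Stand-in for the generated proxy; returns internal-model objects."""
    def get_customer(self, customer_id):
        return {"CustId": customer_id, "CustName": "Jane Doe", "AcmeTier": "gold"}

class CustomerMapper:
    """Translates the internal service model into the external domain model."""
    def to_domain(self, internal):
        # Companyisms and internal-only fields (AcmeTier) are dropped here.
        return {"id": internal["CustId"], "name": internal["CustName"]}

class ProxyFacade:
    """Insulates the core from the proxy; swapping either piece (for tests,
    or for a new service version) does not touch the callers."""
    def __init__(self, proxy, mapper):
        self._proxy = proxy
        self._mapper = mapper

    def get_customer(self, customer_id):
        return self._mapper.to_domain(self._proxy.get_customer(customer_id))
```

Because the façade takes the proxy and mapper as constructor arguments, a unit test can substitute fakes for both without touching the wire.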

If you feel multiple facades are needed throughout the history of the service, you can add a factory in front of the façade and use it to determine which façade (and thereby service) are used in a particular version.

I will cover the solution layout in more detail in a future post.


Internally, most organizations go through a bit of chaos. In an ideal world, the developers in one department would treat the developers in a consuming department as clients, but this is often the exception in many organizations. Breaking changes in software are all too common and often break the software created by other departments.

The outside world is far less tolerant than your internal departments. When you break software, they may forgive you once, but eventually the decision will be made to go to a competitor. For this reason, you simply cannot alter service contracts without considerable thought.

To insulate the outside world from your changes, you must use a strategy that keeps the outside interfaces as static as possible despite the internal chaos. Using a domain model and data contracts, along with proper facades and factories, you can insulate the outside world from internal chaos and company specific naming. You also have the ability to remove any “secret” information.

The pattern applied in this article is not a “one size fits all” but rather a very good start for most situations. Feel free to alter the pattern to fit your specific needs. Also feel free to comment, as patterns should always be open for improvement and discussion.

Peace and Grace,

Twitter: @gbworld
Microsoft MVP Page: http://mvp.support.microsoft.com/profile/Beamer