Windows 10: Getting around “Buy Wi-Fi” on membership networks

So I found a nice “feature” in Windows 10 that really annoys the crap out of me. It is the buy Wi-Fi feature. It seems like it might be a nice feature at times, like when you are somewhere, cannot find free Wi-Fi (rare) and need to connect to get some business done.

The problem with the feature is that networks you have a membership on, ones Microsoft has contracted with, default to “buy Wi-Fi”, as in the following capture:


That’s right, with a Marriott Platinum status, sitting in a room in the hotel, the default is to buy Internet. This is really irritating.

Getting On Without Paying (In This Case, “Paying Twice”)

So this really bugged me until I got to playing with it. If the Network is one you should be able to get on, you can click on the network and the following pops up. Click on the link circled below:


Once you click “other options from the provider”, the logon screen for the hotel appears. In this case, it shows I am still connected from last night. The network will boot me off in a bit.


If you select view services, you will then see your purchase options for the network.


And there is one more caveat. To get on the network, if you don’t know this already, you have to download the Windows 10 Wi-Fi app from the Microsoft Store just to see the options. Until you download the app, you don’t even get the link to explore other options with the provider.

Personally, I think this is a fail and Microsoft needs to rethink it. It would be better, in my opinion, to have “other options” be the buy option, as the current default irritates the crap out of users. Oh, wait, Windows is the only option when you have certain careers?

Peace and Grace,


Microservices in .NET part 2: Silver Bullets and Free Lunches?

I have spent the better part of this week digging into microservices, and I love the idea. Here are some benefits I see that can be realized by using a microservices approach:

  1. The granularity level allows developers to stay in a single context while solving a problem. This singularity of focus makes it much easier to dig into the details of a specific object or set of objects. In many cases, it will quickly expose poor planning in an organization and provide a rationale for fixing the process. As an example, a service that has a lot of churn is probably one that was not planned out well (I am not talking about finding additional uses, but rather about having to rebuild contracts and re-architect the service on a regular basis).
  2. The services are simple, making it easy to maintain the individual components.
  3. The methodology forces a good separation of concerns.
  4. You can use the best tool for the job rather than stick to a single platform, programming language, paradigm, etc. This is a double-edged sword, as I will uncover a bit later.
  5. Isolated solution problems can easily be fixed without much impact. If you find your employee microservice has an issue, you can fix it without deploying the entire solution.
  6. Working with multiple services enables the use of concepts like Continuous Integration (CI) and Continuous Delivery (CD). This is also double-edged, as you almost have to go to a full-blown CD implementation to use microservices architectures. I will hit this later, as well.
  7. You can get multiple teams working independently of each other. This was always possible, of course, as I have pointed out in my Core As Application blog entries (one here), if you will take the time to plan out your contracts and domain models first. (NOTE: In 2010, I was told “you cannot design contracts first, as you don’t know all of the requirements up front”. By 2011, I proved this wrong by delivering using a contract-first approach, both ahead of time and under budget – a bit of planning goes a long way).
  8. Systems are loosely coupled and highly cohesive.

This is just a short list of the benefits. The problem I see is everyone is focusing on the benefits as if we have finally found the silver bullet (do you have werewolves in your organization?) and gained a free lunch. This article focuses on some of the downsides to microservices.


As you move to smaller and smaller services, there are many more parts that have to be deployed. In order to keep the solutions using the microservices up and running, you have to be able to push the services out to the correct location (URI?) so they can be contacted properly by the solutions using them. If you go to the nth degree, you could conceivably have tens, if not hundreds, of small services running in an Enterprise.

As each service is meant to be autonomous, this means you have to come up with a strategy for deployment for each. You also have to plan for high availability and failover. And there has to be a solid monitoring and instrumentation strategy in place. In short, you need all of the pieces of a good API Management strategy in place, and you need to do this BEFORE you start implementing microservices. And I have not even started focusing on how everything is wired together and load balancing of your solutions. On the plus side, once you solve this problem, you can tune each service independently.

There is a burden on the Dev side that needs to be tackled up front, as well. You need to start thinking about the requirements for monitoring, tracing and instrumenting the code base and ensure it is part of the template for every service. And you have to plan for failure, which is another topic.

As a final point on this topic, your dev and ops team(s) must be proficient in the combined concept of DevOps to have this be a success. Developers can no longer pitch items over to Ops with a deployment document. They have to be involved in joint discussions and help come up with plans for the individual services, as well as the bigger solutions.

Planning for Failure and Avoiding Failure

Services will fail. Looking at the plethora of articles on microservices, it is suggested you use patterns like circuit breakers (avoid hitting a failed service after a few attempts) and bulkheads (when enough “compartments” are “under water”, seal the rest of the solution from the failure point). This is a fine avoidance strategy, but what if the service failing is a critical component to the solution working?
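To make the circuit-breaker idea concrete, here is a minimal, language-agnostic sketch (in Python, with names like `max_failures` being my own invention, not any particular library’s API):

```python
class CircuitBreaker:
    """Stops calling a failing service after a threshold of failures."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, service):
        # Once the threshold is reached, fail fast without touching the service.
        if self.failures >= self.max_failures:
            raise RuntimeError("circuit open: service calls suspended")
        try:
            result = service()
            self.failures = 0  # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            raise
```

After the threshold is hit, callers get an immediate “circuit open” error instead of piling more load onto a failed service. Real implementations also add a timeout so the circuit can half-open and retry later.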

Not mentioned in the articles I have read is a means of managing services after failure. Re-deploying is an option, and you can make redeployment easier using quickly set up virtual environments and/or containers, but what if reaching that portion of the network is the point of failure? I would love to hear comments on this next idea: why not look at some form of registry for the services (part of API Management, similar to UDDI, etc.) or a master finder service that exists in various locations and that all applications are aware of? Another idea would be to include backup service locations as part of the hypermedia specification. But either of these solutions further exacerbates the reliance on DevOps, creating even more need for planning solutions and monitoring released solutions.
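The finder-service idea I am floating could look something like the sketch below. This is purely an illustration of the concept, not any existing product, and the names are mine:

```python
class ServiceRegistry:
    """A well-known registry: services register their node locations,
    and clients ask for a healthy node instead of hard-coding one URI."""

    def __init__(self):
        self.locations = {}  # service name -> list of candidate URIs

    def register(self, name, uri):
        self.locations.setdefault(name, []).append(uri)

    def resolve(self, name, is_healthy=lambda uri: True):
        # Return the first node that passes the caller's health check,
        # so a dead node can be skipped in favor of a backup location.
        for uri in self.locations.get(name, []):
            if is_healthy(uri):
                return uri
        raise LookupError(f"no healthy node registered for {name}")
```

A client that knows only the registry’s location can survive a node failure by resolving again, which is exactly the kind of discovery the hypermedia variant would bake into the responses instead.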

I don’t see microservices working well without a good CI/CD strategy in place and some form of API management. The more I look into microservices the more I see the need for a system that can discover its various points on the fly (which leads me back to the ideas of using a finder service or utilizing hypermedia to inform the solutions utilizing microservices where other nodes exist).

Contracts and Versioning

When you develop applications as a single Visual Studio solution (thinking in terms of projects and not products?), you have the ability to change contracts as needed. After all, you have all of the code sitting in front of you, right? When you switch to an internal focus on services as released products, you can’t switch out contracts as easily. You have to come up with a versioning strategy.

I was in a conversation a few weeks ago where we discussed versioning. It was easy to see how URI changes for REST services required versioning, but one person disagreed when I stated that changes to the objects you expose should be a reason for versioning in many instances. The answer was “we are using JSON, so it will not break the clients if you change the objects”. I think this topic deserves a sidebar.

While it is true JSON allows a lot of leeway in reorganizing objects without physically breaking the client(s) using the service, there is also a concept of logical breakage. Adding a new property is generally less of a problem, unless that new element is critical for the microservice. Changing a property may also not cause breakage up front. As an example, say you change from an int to a long to plan for the future. As long as the values do not exceed the greatest value for an int, there is no breakage on a client using an int in their version of the object. The issue here is it may be months or even years before a client breaks. And finding and fixing this particular breakage could be extremely difficult and lead to long down times.
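The int-to-long example is worth seeing in code. Here is a hedged sketch (Python, with an invented `quantity` field) of how the breakage stays hidden until the value finally grows past the client’s range:

```python
import json

INT32_MAX = 2**31 - 1  # the largest value the client's int can hold

def client_parse_order(payload):
    """A client still treating 'quantity' as a 32-bit int,
    even though the service widened the field to a long."""
    data = json.loads(payload)
    qty = data["quantity"]
    if qty > INT32_MAX:
        raise OverflowError("quantity exceeds the client's int range")
    return qty

# Small values keep working after the contract change, so nothing
# breaks on deployment day...
assert client_parse_order('{"quantity": 100}') == 100
```

Months later, the first order with a quantity above 2^31 – 1 arrives, and this client fails in production with no recent deployment to point at, which is what makes logical breakage so hard to diagnose.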

There are going to be times when contract changes are necessary. In these cases, you will have to plan out the final end game, which will include both the client(s) and service(s) utilizing the new contract, as well as transitional architectures to get to the end game without introducing a “big bang” approach (which microservices are said to help us avoid). In short, you have to treat microservice changes the same way you approach changes on an external API (as a product). Here is a simple path for a minor change.

  1. Add a second version of the contract to the microservice and deploy (do not remove the earlier version at this time)
  2. Inform all service users the old contract is set to deprecate and create a reasonable schedule in conjunction with the consumers of the microservice
  3. Update the clients to use the new contract
  4. When all clients have updated, retire the old version of the contract
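The four steps above can be sketched as routing two contract versions side by side. This is only an illustration; the route names and payload shapes are my own, not a prescribed layout:

```python
def order_v1(order_id):
    """Old contract: kept in service until every client migrates."""
    return {"id": order_id, "total": 19.99}

def order_v2(order_id):
    """New contract: adds a currency field; existing fields unchanged."""
    return {"id": order_id, "total": 19.99, "currency": "USD"}

ROUTES = {
    "/api/v1/orders": order_v1,  # step 1: do not remove the old version yet
    "/api/v2/orders": order_v2,  # step 1: deploy the new version alongside it
}

def retire(route):
    """Step 4: remove the old contract once all clients have moved."""
    ROUTES.pop(route, None)
```

Steps 2 and 3 happen outside the code: deprecation notices go out on a schedule agreed with the consumers, and each client moves to `/api/v2/orders` before `retire("/api/v1/orders")` is ever called.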

This is standard operating procedure for external APIs, but not something most people treat as a big deal when the APIs are internal.

I am going to go back to a point I have made before. Planning is critical when working with small products like microservices. To avoid regular contract breakages, your architects and Subject Matter Experts (SMEs) need to make sure the big picture is outlined before heading down this path. And the plan has to be conveyed to the development teams, especially the leads. Development should be focused on their service when building, but there has to be someone minding the shop to ensure the contracts developed are not too restrictive based on the business needs for the solutions created from the services.

Duplication of Efforts

In theory, this will not happen with microservices, as we have the individual services focusing on single concerns. And, if we can imagine a world where every single class had a service (microservices to the extreme?), we can envision this, at least in theory. But should we break down to that granular a level? I want to answer that question first.

In Martin Fowler’s article, he talks about the Domain Driven Design (DDD) concept of a bounded context. A bounded context is a grouping of required state and behavior for a particular domain. Martin Fowler uses the following diagram to show two bounded contexts.

In the diagram above, you see some duplication in the bounded contexts in the form of customer and product. In a microservices architecture, you could conceivably move customer and product into services of their own and avoid the duplication, but moving a concept out simply to avoid duplication is not the best motivation in all cases. If you can also make customer or product a business capability, I would wholeheartedly support this approach, but that is not always the case (another sidebar).

When would you not want to separate out Customer and Product? In short, when the domain concept of these objects is different. In the sales context, a customer contains sales-specific information, including terms of sale (net 60 days?) and other items that may not exist in a support context. If we are talking about a company that ships products (as opposed to a service-only company), we can add other contexts, like shipping and warehousing, that have radically different customer views. In the warehouse, a customer is completely unimportant, as the staff are focused on pulling orders. From a shipping standpoint, a customer is a name, a shipping address and a phone number. No need for any additional information. A customer microservice either spits out a complete object, allowing the services to filter (not a great idea from a security standpoint), or it provides multiple interfaces for each of the clients (duplication of effort, but in a single service rather than multiple consumers and/or services, so it does not avoid duplication). A product can also be radically different in each of these contexts.
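The context-specific views of a customer can be made concrete. This sketch (field names are illustrative, not from any real system) returns a different shape per bounded context rather than one object that every consumer filters:

```python
CUSTOMER = {
    "id": 42,
    "name": "Acme Corp",
    "payment_terms": "net 60",     # only sales cares about this
    "shipping_address": "1 Main St",
    "phone": "555-0100",
    "support_tier": "gold",        # only support cares about this
}

def sales_view(c):
    """Sales context: terms of sale matter; address does not."""
    return {"id": c["id"], "name": c["name"], "payment_terms": c["payment_terms"]}

def shipping_view(c):
    """Shipping context: a name, an address and a phone number. Nothing more."""
    return {"name": c["name"], "shipping_address": c["shipping_address"], "phone": c["phone"]}
```

Exposing these as separate interfaces on one service keeps the filtering on the service side (better for security) at the cost of the service knowing about each consumer’s shape, which is the duplication trade-off described above.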

My advice for starting out is to start with bigger contexts and then decompose as needed. The original “microservice” can act as an aggregator as you move to more granular approaches. Here is an example of transitional states from the contexts above.

  1. Discovery of duplication in the sales and support microservices leads to a decision that customer and product should be separate services
  2. New customer and product services are created
  3. Sales and support services are altered to use the new product and customer services
  4. New versions of the sales and support services are created that no longer serve product and customer information
  5. Clients are altered to use the new services as well as the sales and support services

This is one idea of migration, as we will discover in the next section.

Where do we Aggregate?

If we go back to the bounded context discussion in the last section, we see the need to aggregate. The question is where do we aggregate? You need to come up with a strategy for handling aggregation of information. I am still grokking this, so I am not offering a solution at this time. Here are some options I can see.

Option 1 – Client: In a full microservices architecture, the client may be responsible for all aggregation. But what if the user’s client is a mobile application? The chattiness of a microservices architecture is hard enough to control across your internal multi-GB network infrastructure. Moving this out onto the Internet and cell networks compounds the latency. I am not saying this is a bad option in all cases, but if you opt for this approach, more focus on the latency issue is required from your mobile development team. On a positive note, if the client application can handle single service failures gracefully, you reduce the likelihood of a single point of failure.

Option 2 – Final service boundary: In this approach, the outermost service contacts the many microservices it requires to get work done and aggregates for the client. I find this more appealing, in general, for mobile clients. And it reduces the number of “proxies” required for web, simplifying the user interface client. As a negative, it creates a single point of failure that has to be handled.

Option 3 – Aggregation of dependencies: In this approach, the higher-level service (closer to the client) aggregates what it requires to work for the client. At first, I liked this option the best, as it fits a SOA approach, but the more I read about the microservices idea, the more I see this as a potential combination of the bad points of the first two options, as you introduce numerous points of failure at the aggregate level while still potentially creating multiple points of latency in your client applications. I still think this might be something we can think through, so I am providing it.
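Option 2 can be sketched as an aggregating edge service that fans out to the microservices and returns one payload. The two inner functions here are stand-ins for network calls, and the names are invented for illustration:

```python
def customer_service(customer_id):
    """Stand-in for a call to the customer microservice."""
    return {"id": customer_id, "name": "Acme Corp"}

def order_service(customer_id):
    """Stand-in for a call to the order microservice."""
    return [{"order": 1001, "status": "shipped"}]

def order_status_endpoint(customer_id):
    """Outermost service: one round trip for the mobile client,
    but also the single point of failure to plan for."""
    try:
        customer = customer_service(customer_id)
        orders = order_service(customer_id)
    except Exception as exc:
        # This is where a circuit breaker or fallback response belongs.
        raise RuntimeError("aggregate endpoint failed") from exc
    return {"customer": customer, "orders": orders}
```

The mobile client makes one call instead of many, which is the latency win; the cost is that this endpoint must itself be highly available and defensive about each dependency.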

If you can think of other options, feel free to add them in the comments.

Testing One, Two, Three

I won’t spend a lot of time on testability, but the more moving parts you have to test, the harder it is. To understand why this is so, create an application fully covered in unit tests at every level, but developed by different teams, and then integrate. The need for integration testing becomes very clear at this moment. And what if you are integrating not only multiple libraries, but multiple discrete, and very small, services? This takes a lot of discipline.

I find the only reasonable answer is to have a full suite of unit tests and integration tests, as well as other forms of testing. To keep with the idea of Continuous Integration, only the smaller tests (unit tests) will be fired off with each CI build, but there will be a step in the CD cycle that exercises the full suite.

There is also a discipline change that has to occur (perhaps you do this already, but I find most people DON’T): you must now treat every defect as something that requires a test. You write the test before the fix to verify the bug. If you can’t verify the bug, you need to keep writing tests before you solve it. Solving something that is not verified is really “not solving” the problem. You may luck out … but then again, you may not.
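In code, the discipline reads as: write the failing test first, watch it verify the bug, then fix. A minimal sketch, where both the discount function and the reported bug are invented for illustration:

```python
def apply_discount(price, percent):
    # Fixed version. The reported bug: a 100% discount returned the
    # full price, because the buggy code treated percent >= 1 as "no discount".
    return price * (1 - percent)

def test_full_discount_is_free():
    # Written BEFORE the fix: it failed against the buggy version,
    # which verified the bug. Now it guards against regression.
    assert apply_discount(50.0, 1.0) == 0.0

test_full_discount_is_free()
```

The order matters: if this test had passed against the unfixed code, the bug report was not yet understood, and more test-writing (not fixing) was the right next step.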


There are no werewolves, so there are no silver bullets. There is no such concept as a free lunch. Don’t run around with hammers looking for nails. The point here is microservices are one approach, but don’t assume they come without any costs.

As a person who has focused on external APIs for various companies (startups all the way to Fortune 50 companies), I love the idea of taking the same concepts inside the Enterprise. I am also intrigued by the idea of introducing more granularity into solutions, as it “forces” the separation of concerns (something I find so many development shops are bad at). But I also see some potential gotchas when you go to microservices.

Here are a few suggestions I would have at this point in time:

  1. Plan out your microservices strategy and architecture as if you were exposing every service to the public. Thinking this way pushes you to figure out deployment and versioning as a product rather than a component in a system.
  2. Think about solving issues up front. Figure out how you are going to monitor your plethora of services to find problems before they become huge issues (downtime outside of SLAs, etc.). Put together a disaster recovery plan, as well as a plan to fail over when you can’t bring a service back up on a particular node.
  3. In like mind, plan out your deployment strategy and API management up front. If you are not into CI and CD, plan to get there, as manually pushing out microservices is a recipe for disaster.
  4. Create a template for your microservices that includes any pieces needed for logging, monitoring, tracing, etc. Get every developer in the organization to use the template when creating new microservices. These plumbing issues should not require solving again and again.

Peace and Grace,

Twitter: @gbworld

A foray into micro services in .NET

Last month, I was given an assignment to attend a workshop with one of our clients. The workshop, as it was called, turned into something more like a hackathon, with various groups attempting to POC some concepts. The big talk of the workshop was micro services.

If you are not familiar with micro services, you should take a look at Martin Fowler’s article. The idea behind micro services is simple. You take an application (which might be a service) and then break it down into smaller, very focused services.

As an example, consider eCommerce. You have an online store, which uses a service to handle the various actions required for a customer to order items and have them shipped and tracked. If you examine this from a micro-services perspective, you will separate out the customer shopping cart, order processing, fulfillment and tracking into different services. As you dig deeper, you might even go further.

In this entry, I want to dig into micro services a bit deeper, from a Microsoft .NET perspective, but I first want to give some foundation to the micro services concept by understanding how we got here in the first place. That is the main focus of this entry, although I will talk about a high level implementation and some pros and cons.

NOTE: For this article, as with most of my posts, I will define the word “application” as a program that solves a business problem. From this definition, it should not matter how the application interacts with users, and you should not have to rewrite the entire application to create a different presentation (for example, windows to web). I define the word “solution” as a particular implementation of an application or applications.


When we first learn to create applications, the main focus is on solving the problem. This is most easily accomplished by creating a single executable that contains all of the code, which is commonly called a monolith (or monolithic application).


Monoliths are most often considered bad these days, but there are pluses and minuses to every approach. With the monolith, you have the highest performance of any application type, as everything runs in a single process. This means there is no overhead marshalling across process boundaries. There is also no dealing with network latency. As developers often focus on performance first, the monolith seems to get an unfair amount of flak. But there are tradeoffs for this level of performance.

  1. The application does not scale well, as the only option for scale is vertical. You can only add so many processors and so much memory. And hardware only runs so fast. Once you have the most and fastest, you are at maximum scale.
  2. There is a reusability issue. To reuse the code, you end up copying and pasting to another monolith. If you find a bug, you now have to fix it in multiple applications, which leads to a maintainability issue.

To solve these issues, there has been a push toward componentizing software and separating applications into multiple services. Let’s look at these two topics for a bit.

Componentizing for Scale (and Reusability)

While the monolith may work fine in some small businesses that will never reach a scale that maxes out their monolith, it is disastrous for the Enterprise, especially the large Enterprise. To solve this “monolith problem”, there have been various attempts at componentizing software. Here is a list of a few in the Microsoft world (and beyond):

COM: COM, or the Component Object Model is Microsoft’s method for solving the monolith problem. COM was introduced in 1993 for Windows 3.1, which was a graphical shell on top of DOS. COM was a means of taking some early concepts, like Dynamic Data Exchange (DDE), which allowed applications to converse with each other, and Object Linking and Embedding (OLE), which was built on top of DDE to allow more complex documents by embedding types from one document type into another. Another common word in the COM arena is ActiveX, which is largely known as an embedding technology for web applications that competed with Java applets.

DCOM and COM+: DCOM is a bit of a useless word, as COM was able to be distributed rather early on. It was just hard to do. DCOM, at its base level, was a set of components and interfaces that made communication over Remote Procedure Call (RPC) easier to accomplish. DCOM was largely a response to the popularity of CORBA as a means of distributing components. COM+ was a further extension of DCOM which allowed for distributed transactions and queueing. COM+ is a merger of COM, the DCOM libraries, Microsoft Transaction Server (MTS) and the Microsoft Message Queue service (MSMQ).

.NET: If you examine Mary Kirtland’s early articles on COM+ (one example here), you will read about something that sounds a lot like .NET is today. It is designed to be easily distributed and componentized. One problem with COM+ was DLL Hell (having to clean out component GUIDs (globally unique identifiers) to avoid non-working applications). .NET solved this by not registering components and returning to the idea of using configuration over convention (a long fight that never ends?).

Competing technologies in this space are EJB and RMI in the Java world and CORBA and the CCM as a more neutral implementation. They are outside of the scope of this document.

The main benefit of these technologies is that they make it easier to reuse code, improving maintainability, and allow you to more easily distribute applications, providing greater availability and scalability. You can still choose to build monolithic applications, when they make sense, but you are not tied to the limitations of the monolith.


One issue with many of the component technologies was they tightly coupled the consumer to the application, or the “client” to the “server”. To get away from this, a group of developers/architects at Microsoft came up with SOAP (Yes, I know Don Box was actually working for DevelopMentor as a contractor at Microsoft and Dave Winer was heading UserLand software and only partnering with Microsoft on XML and RPC communication, but the majority of the work was done there). With the creation of SOAP, we now had a means of creating applications as services, or discrete applications, which could focus on one thing only. That is sounding a lot like micro services … hmmmm.

In the Microsoft world, SOAP was used to create “web services”. The initial release of .NET in 2002 allowed one to create services using HTTP and SOAP as ASMX services (a type of document in ASP.NET), as well as create faster RPC-type services with Remoting (these were generally internal only, as tight coupling to technologies made them hard to use outside of the Enterprise, much less outside of the Microsoft world).

By 2006, with the release of .NET 3.0, Microsoft had merged the concepts of Remoting and ASMX web services in the Windows Communication Foundation (WCF). You could now develop the service and add different endpoints with ease, allowing for an RPC implementation and a web implementation off the same service. WCF really came to fruition about a year later with the release of .NET 3.5.

The latest service technology to enter the fray is Representational State Transfer (REST). In the Microsoft world, REST was first introduced in the REST Toolkit, an open source project. From the standpoint of an official Microsoft release, it arrived as the WCF Web API. It was a bit kludgy, as WCF works in a completely different paradigm than REST, so the project was moved over to the web group and is now implemented on top of ASP.NET MVC as the ASP.NET Web API.

Methodologies, Technologies and Tools

One more area we should look at before moving to micro services is the set of methodologies, technologies and tools used to solve the monolith “problem”.


The first methodology that gained a lot of acceptance in the Microsoft world was the n-tier development methodology. The application was divided into UI, Business and Data tiers (note: today Microsoft calls this Presentation, Middle and Data tiers), with the push towards separating out the functionality into discrete, purposed pieces. Below is a typical n-tier diagram for an application.

Around 2010, I realized there was a problem with n-tier development. Not so much with the methodology, as it was sound, but with the way people were viewing the models. Below is an n-tier model of an application:

The issue here is people would see an application as the merger of presentation, business logic and data. Is this true? Let’s ask a couple of questions.

1. If you create a web application with the exact same functionality as a windows application, is it two applications or one? In implementation, it was often treated as two, but if it was logically and physically two applications, you were duplicating code.

2. If you want to add a web service to expose the functionality of the application, do you rebuild all n tiers? If so, should you?

My view is the application is the part that solves business problems, or the core of the functionality. You should be able to change out where you persist state without changing this functionality. I think most people understand this as far as switching out one database server for another, like SQL Server for Oracle. At the logical level, like changing out schemas, a few get lost, but physical switches with the same schema are well known and most find them easy to implement. Switching out presentation is what most people find more difficult, and this is generally due to introducing logic other than presentation logic in the presentation portion of the solution.

NOTE: The naming of the tiers in 3-tier and n-tier architecture has changed over time. Originally, it was common to see UI, Business and Data. The illustration above calls the tiers Client, Middle and Data. I have also seen Presentation, Application and Persistence, which is closer to the naming I would use in my models.

To better illustrate this concept, in 2010 I came up with a methodology called Core as Application (seen in this article on the importance of domain models). In this model the core libraries ARE the application. The libraries for presentation can easily be switched out, and have responsibility for shaping the data for presentation.


The “Core as Application” model requires you start with your domain models (how data is represented in the application) and your contracts, both for presentation and persistence. Some of the benefits of this model are:

  1. Focusing on domain models and contracts first pushes the team to plan before developing (no, there is no surefire way to force people other than tasers ;->). This is a good representation no matter what methodology or model you use, but it is critical if you are going to have multiple teams working on different parts of a solution.
  2. You can have multiple teams working in parallel rather than relying on completing one layer prior to working on another. You will have to resynchronize if any team determines the contract needs to be changed, but the amount of rework should be minimal.

When you look at “Core as Application” from a SOA standpoint, each service has its own core, with a service presentation layer. The persistence for higher level applications is the individual services. This will be shown a bit later.
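In code, “Core as Application” amounts to the core defining the contracts it depends on, with presentation and persistence implementing them from the outside. A minimal sketch (the class names here are mine, purely for illustration):

```python
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """Persistence contract, defined by the core, implemented outside it."""
    @abstractmethod
    def save(self, order): ...

class OrderCore:
    """The application: business rules only, no UI or database code."""
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def place_order(self, items):
        # items: list of (name, price) tuples
        if not items:
            raise ValueError("an order needs at least one item")
        order = {"items": items, "total": sum(price for _, price in items)}
        self.repo.save(order)
        return order

class InMemoryRepository(OrderRepository):
    """One swappable persistence implementation; SQL Server or Oracle
    would simply be other implementations of the same contract."""
    def __init__(self):
        self.saved = []
    def save(self, order):
        self.saved.append(order)
```

Because the core only knows the `OrderRepository` contract, switching persistence (or putting a service presentation layer in front of the core, as in the SOA view above) never touches the business rules.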


We have already covered some technologies used to solve the monolith problem. COM and .NET are good examples. But as we move even deeper, we find technologies like the ASP.NET Web API useful. The technologies do not prevent us from creating monoliths, as even Microsoft pushes out some examples with data access, such as LINQ to SQL, in a controller in an MVC application. But they do get us thinking about creating cohesive libraries that serve one purpose and classes that give us even more fine-grained functionality.


We also have tools at our service. Visual Studio helps us organize our code into solutions and projects. The projects focus on a more fine-grained set of functionality, helping us break the monolith up. If we follow best practices, our solutions end up with more projects, which can easily be broken down into individual micro services. Speaking of which, this is a good time to segue.

Onward to Micro Services

Micro services are being presented as something new, but in reality, they are nothing more than Service Oriented Architecture (SOA) taken to the nth degree. The idea behind micro services is you are going extremely fine grained in your services. It is also stated you should use REST, but I don’t see REST as an absolute requirement. Personally, I would not aim for SOAP, as there is a lot of extra overhead, but it is possible to use SOAP in micro services, if needed. But I digress … the first question to answer is “what is a micro service?”

What is a Micro Service?

I am going to start with a more “purist” answer. A micro service is a small service focused on solving one thing. If we want to be purist about it, the micro service will also have its own dedicated database. If we were to illustrate the order system example we talked about earlier, using the “Core as Application” model, the micro-services implementation would be something like the picture below.


If you want to take it to an extreme, you can view micro services the way Juval Lowy viewed services in 2007 (the video is gone, but you can read the description). His idea was that every single class should have a WCF service on top of it. Doing so would create the most decoupled system possible, while maintaining a high level of cohesion. Micro services do not dictate this type of extreme, but they do recommend you find the smallest bounded context possible. I will suggest a practical method of doing this a bit later.


One difference between Juval Lowy’s suggested “one service per class” and micro services is the suggested service methodology has changed a bit. In 2007, WCF was focused on SOAP based services. Today, REST is the suggested method. Technically, you can develop a micro service with SOAP, but you will be adding a lot of unnecessary overhead.

Below are Martin Fowler’s characteristics of a micro service:

· Componentization via services – This has already been discussed a bit in the previous two paragraphs. Componentization is something we have done for quite some time as we build DLLs (COM) or assemblies (.NET – an assembly still ends in .DLL, but there is no Dynamic Link Library capability for COM inherently built in without adding the interop interfaces). The main difference between an assembly (class library as a .dll file) and a micro service is that the assembly is kept in process of the application that utilizes it, while the micro service runs out of process. For maximum reuse, you build the class library and then add a RESTful service on top using the ASP.NET Web API. In a full micro services architecture, this is done more for maintainability than need.

· Organized around business capabilities – This means you are looking for bounded contexts within a capability, rather than simply trying to figure out the smallest service you can make. There are two reasons you may wish to go even smaller in micro services. The first, and most reasonable, is finding a new business capability based on a subset of functionality. For example, if your business can fulfill orders for your clients even when they are not using your eCommerce application, fulfillment is a separate business capability. A second reason is you have discovered a piece of functionality that can be utilized by more than one business capability. In these cases, the “business capability” is internal, but it is still a capability that has more than one client. Think a bit more about internal capabilities, as it may make sense to duplicate functionality if the models surrounding the functionality are different enough that a “one size fits both (or all)” service would be difficult to maintain.

· Products Not Projects – This means every service is seen as a product, even if the only consumer is internal. When you think of services as products, you start to see the need for a good versioning strategy to ensure you are not breaking your clients.

· Smart endpoints and dumb pipes – I am not sure I agree completely with the way this one is worded, but the concept is that the endpoint has the smarts to deliver the right answer while the underlying delivery mechanism is a dumb asynchronous message pump.

· Decentralized Governance – Each product in a micro services architecture has its own governance. While this sounds like it goes against Enterprise concepts like Master Data Management (MDM), they are really not opposites at all, as you will see in the next point.

· Decentralized Data Management – This goes hand in hand with decentralized governance, and further illustrates the need for Master Data Management (MDM) in the organization. In MDM, you focus on where the golden record is and make sure it is updated properly if a change is needed. From that point on, the golden record is consulted whenever there is a conflict. In micro services, each micro service is responsible for the upkeep of its own data. In most simple implementations, this means the micro service contains the golden record. If there are integrated data views, as in reporting and analytics, you will have to have a solution in place to keep the data up to date in the integrated environment.

· Infrastructure Automation – I don’t see this as a mandatory step in implementing micro services, but it will be much harder if you do not have automation. This topic often starts with Continuous Integration and Continuous Delivery, but it goes a bit deeper, as you have to have a means of deploying the infrastructure to support the micro service. One option bandied about on many sites is a cloud infrastructure. I agree this is a great way to push out micro services, especially when using cloud IaaS or PaaS implementations. Both VMware’s and Microsoft’s Hyper-V solutions provide capabilities to easily push out the infrastructure as part of the build. In the Microsoft world, the combination of the build server and release management is a very good start for this type of infrastructure automation. In the Linux world, there is a tool called Docker that allows you to push out containers for deployment. This capability also finds its way to the Microsoft world in Windows Server 2016.

· Design for Failure – Services can and will fail. You need a method of detecting failures and restoring services as quickly as possible. A good application should have monitoring built in, so the concept is nothing new, but when your applications are more monolithic, you can more easily determine where your problem is. In micro services, monitoring becomes even more critical.

· Evolutionary Design – I find this to be one of the most important concepts and one that might be overlooked. You can always decompose your micro services further at a later date, so you don’t have to determine the final granularity up front, as a micro service today can easily become an aggregator of multiple micro services tomorrow. There are two concepts that will help you create your micro services, which we will discuss now: Domain Driven Design and Contract First Development.
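To make the “componentization via services” idea concrete, here is a minimal sketch (in Python for brevity, since the idea is language-agnostic; in .NET the same shape would be a class library fronted by an ASP.NET Web API controller — all names here are hypothetical): the library stays in process and knows nothing about HTTP, while a thin service layer exposes it out of process.

```python
import json

# The "class library": plain, in-process code with one focused responsibility.
class OrderCalculator:
    def total(self, prices):
        """Sum line-item prices; the library knows nothing about HTTP."""
        return round(sum(prices), 2)

# The "micro service" layer: a thin wrapper that exposes the library over a
# REST-style contract (reduced here to a handler function returning a JSON
# body, to keep the sketch self-contained).
def handle_order_total(request_body: str) -> str:
    prices = json.loads(request_body)["prices"]
    result = OrderCalculator().total(prices)
    return json.dumps({"total": result})

print(handle_order_total('{"prices": [1.50, 2.25]}'))  # {"total": 3.75}
```

The point is the split, not the plumbing: the library could be consumed in process by one application and over REST by another without changing a line of it.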
Domain Driven Design

Domain Driven Design (DDD) is a concept formulated by Eric Evans in 2003. One of the concepts of DDD that is featured in Fowler’s article on micro services is the Bounded Context. A bounded context is the minimum size a service can be broken down into and still make sense. Below is a picture from Fowler’s article on Bounded Contexts.

When you start using DDD, you will sit down with your domain experts (subject matter experts (SMEs) on the domain) and learn the language they use. You will then create objects with the same names in your application. If you have not read Eric Evans’s Domain Driven Design book, you should learn a bit about modeling a domain, as getting the model right is a process.

NOTE: you are not trying to make your data storage match the domain (i.e., table names matching domain object names); let your database experts figure out how to persist state, and focus on how the application uses state to create your domain objects. This is where Contract First comes into play.
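As a small illustration of that note (Python for brevity; all names are hypothetical): the persistence shape uses the database team’s column names, and a mapper builds the domain object named in the ubiquitous language.

```python
from dataclasses import dataclass

# Domain object: named in the ubiquitous language the SMEs use.
@dataclass
class Employee:
    full_name: str
    hire_date: str

# Persistence shape: whatever the database experts chose; it does not need
# to mirror the domain (here a flat row dict with abbreviated column names).
def to_domain(row: dict) -> Employee:
    return Employee(
        full_name=f"{row['fname']} {row['lname']}",
        hire_date=row["start_dt"],
    )

row = {"fname": "Pat", "lname": "Smith", "start_dt": "2015-01-05"}
print(to_domain(row))  # Employee(full_name='Pat Smith', hire_date='2015-01-05')
```

Neither side owns the other: the storage team can rename columns without the domain noticing, as long as the mapper keeps up.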

Contract First Development

Once you understand how your objects are represented in your domain, and preferably after you have a good idea of how the objects look in your presentation projects and your data store, you can start figuring out the contracts between the application, the presentation “mechanism” and the persistent stores.

In general, the application serves its own objects rather than mapping them to presentation objects, so the contract focuses on exposing the domain. The presentation project is then responsible for mapping the data for its own use. The reason for this is that formatting for one type of presentation interface forces unnecessary mapping for other types. As an example, I have seen n-tier applications where the business layer projects formatted the data as HTML, which forced writing an HTML-stripping library to reuse the functionality in a service. Ouch!

How about the persistence “layer”? The answer really depends on how many different applications use the data. If you are truly embracing micro services, the data store is only used by your service. In that case, even if storage is shaped differently from the domain objects, you would still want the store to return data shapes that fit the domain objects.
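A sketch of the contract-first flow described above (Python; the names are hypothetical): the service contract exposes the domain shape, and the presentation layer maps it for its own use, so no HTML ever leaks out of the service.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Employee:
    full_name: str
    hire_date: str

# The service contract exposes the domain shape, not presentation markup.
def employee_contract(e: Employee) -> str:
    return json.dumps(asdict(e))

# The presentation layer maps the contract for its own use (HTML here);
# other consumers (mobile, reports) map the same contract differently.
def render_html(contract: str) -> str:
    data = json.loads(contract)
    return f"<li>{data['full_name']} ({data['hire_date']})</li>"

e = Employee("Pat Smith", "2015-01-05")
print(render_html(employee_contract(e)))  # <li>Pat Smith (2015-01-05)</li>
```

Had the service returned the `<li>` markup itself, every non-HTML consumer would need a stripping library — exactly the “ouch” described above.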

How to Implement Micro Services, High Level

Let’s look at a problem similar to something we focused on for a client and use it to determine how to implement a micro services architecture. Picture an application for a Human Resources (HR) services company that helps onboard employees. We will give the company a fictional name (and hope it is not a real company).

Any time a new employee comes on board, there is a variety of paperwork that needs to be completed. In the HR world, some of the forms have a section that is filled out by the employee and another that is filled out by a company representative. In a micro services application architecture, we might look at creating a micro service for the employee portion and another for the employer portion. We then have a user-facing application that has two roles and uses the two services to complete its work. Each service surrounds a bounded context that focuses on a single business capability.
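A rough sketch of that composition (Python, everything hypothetical, with the services reduced to plain functions): two micro services, each owning one section of a form, composed by the user-facing application.

```python
# Hypothetical sketch: two independently owned micro services, each wrapping
# one bounded context, composed by the user-facing onboarding application.

def employee_section_service(form_id: str) -> dict:
    # Owns the employee-completed portion of a form.
    return {"form": form_id, "section": "employee", "status": "complete"}

def employer_section_service(form_id: str) -> dict:
    # Owns the company-representative portion of the same form.
    return {"form": form_id, "section": "employer", "status": "pending"}

def onboarding_app(form_id: str) -> list:
    # The application composes the two services; it owns no form logic itself.
    return [employee_section_service(form_id), employer_section_service(form_id)]

print(onboarding_app("form-123"))
```

Because each section is owned by its own service, either one can later be decomposed further (as the employee example below shows) without touching the application.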


Over time, we start working with other forms. We find the concept of an employee is present in both of the forms and realize employee can now represent a bounded context. The capability may only be internal at this time, so there is a question whether it should be separated out. But we are going to assume the reuse of this functionality is a strong enough reason to have a separate capability. There is also a possibility the employer agent can be separated out (yes, this is more contrived, but we are envisioning where the solution(s) might go).


If we take this even further, there is a possibility we have to deal with employees from other countries, which necessitates an immigration micro service. There are also multiple electronic signatures needed, and addresses for different people, so these could become services as well.


In all likelihood, we would NEVER break the solution(s) into this many micro services. As an example, addresses likely have a slightly different context in each solution and are better tied to services like the employee and employer services than to a separate address service that has to keep track of context.

Pros and Cons

While micro services are being touted in many articles as THE solution for all applications, silver bullets don’t exist, as there are no werewolves to kill in your organization. Micro services can solve a lot of pain points in the average Enterprise, but there is some preparation necessary to get there, and you need to map it out and complete planning before implementing (I will go into more detail on implementation in another article).


One of the main pros I see mentioned is that the software is the right size with micro services. The fact that you are implementing in terms of the smallest unit of business capability you can find means you have to separate the functionality out so it is very focused. This focus makes the application easier to maintain. In addition, a micro services architecture naturally enforces high cohesion and loose coupling. Another benefit is that you naturally have to develop the contract up front, as you are releasing each service as a discrete product.

You also have the flexibility to choose the correct platform and language on a product by product (service by service) basis. The contract has to be implemented via standards for interoperability, but you are not tied to a single technology. (NOTE: I would still consider limiting the number of technologies in use, even if there is some compromise, as it gets expensive in manpower to maintain multiple technologies.)

Micro services will each be in control of their own data and maintain their own domains. The small nature of the services will mean each domain is easy to maintain. It is also easier to get a business person focused on one capability at a time and “perfect” that capability. It goes without saying micro services work well in an Agile environment.

Micro services architecture also allows you the ability to scale each service independently. If your organization has adopted some form of cloud or virtual infrastructure, you will find it much easier to scale your services, as you simply add additional resources to the service.


Micro services are a paradigm shift. While the concepts are not radically different, you will have to force your development staff to finally implement some of the items they “knew” as theory but had not implemented. SOLID principles become extremely important when implementing a micro services architecture. If your development methodologies and staff are not mature, the shift will be rather major. Even if they are, a change of thinking is in order, as most shops I have encountered have a hard time viewing each class library as a product (yes, even those who have adopted a package manager technology like NuGet).

There is a lot of research required to successfully implement a micro services architecture:

· Versioning – You can no longer simply change interfaces. Instead, you have to add new functionality and deprecate the old. This is something you should have been doing all along, but pretty much every developer I have met fudges it a good amount of the time (“it is internal, so I can fix all of the compiler errors, no problem”). This is why so many shops have multiple solutions with the same framework libraries referenced as code. You should determine your versioning strategy up front.

· URI Formatting – I am assuming a REST approach for your micro services when I include URI formatting, but whatever the approach, consistent and predictable endpoint naming should be settled up front.

· API Management – When there are only a few services, this need will not be as evident. As you start to decompose your services into smaller services, it becomes more critical. I would consider some type of API management solution, like Layer 7 (CA), Apigee, or others, as opposed to building the API management yourself or relying on an Excel spreadsheet or internal app to remind you to set up the application correctly.

· Instrumentation and Monitoring – Face it, most of us are bad at setting up the plumbing, but it becomes critical to determine where an error occurs in a micro services architecture. In theory, you will know where the problem is, because it is in the last service deployed, but relying on this idea is dangerous.

· Deployment – As with the rest of the topics in this section, there is a lot of planning required up front when it comes to deployment. Deployment should be automated in the micro services world. But deployment is more than just pushing applications out; you need a rollback plan if something goes wrong. In micro services, each service has its own deployment pipeline, which makes things really interesting. Fortunately, there are tools to help you with build and release management, including parts of the Microsoft ALM server family, namely the build server and release management.
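The versioning and URI formatting points above can be sketched together (Python, with hypothetical routes and shapes): keep v1 serving unchanged but flagged as deprecated, and add the new contract alongside it under a versioned URI.

```python
# v1 keeps its original contract; it is deprecated, not changed in place.
def get_employee_v1(emp_id: int) -> dict:
    return {"id": emp_id, "name": "Pat Smith", "deprecated": True}

# v2 adds the new shape alongside v1 instead of breaking existing clients.
def get_employee_v2(emp_id: int) -> dict:
    return {"id": emp_id, "first_name": "Pat", "last_name": "Smith"}

# A REST-style URI convention: plural resource name, version prefix.
routes = {
    "/api/v1/employees/{id}": get_employee_v1,
    "/api/v2/employees/{id}": get_employee_v2,
}

print(routes["/api/v2/employees/{id}"](42))
```

Clients on v1 keep working while they migrate; v1 is only removed once the deprecation window closes, which is exactly the strategy worth deciding before the first service ships.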

In short, micro services are simple to create, but much harder to manage. If you do not plan out the shift, you will miss something.


This article focuses on some of the information I have learned about micro services thus far. As we go forward, I will explore the subject deeper from a Microsoft standpoint and include both development and deployment of services.

Peace and Grace,

Twitter: @gbworld

Solving the Microsoft Mahjong Classic Puzzle–January 9th

This particular puzzle was a bit of a b*tch. Despite being flagged easy, it takes quite a bit of time, largely due to the placement of a few tiles that requires a strategy that is a bit out of the norm. Here is my guide to solving the puzzle using the tranquility tile set. (If you are using another tile set, you can still use this post, but you will have to go by position alone.)

First, here is how it looks as you start.


To explain the solution, I need to do 2 things:

  1. Identify the tiles in shorthand
  2. Grid off the board

Identifying the Tiles

In Mahjong, there is a group of tiles that can only match an identical tile: the 3 suits (wheels (or dots), bamboo (or sticks) and numbers (or cracks)), the four winds (north, south, east and west) and a set of three dragons (red, green and white).

There are also two suits in which you can match any of the tiles: Seasons (spring, summer, winter, fall) and flowers (plum, orchid, bamboo and mum).


The wheels are easy to identify. They have a dot or wheel. You count the number of wheels to determine the number of the tile. I label these with a lower case w after the number: 1w – 9w.


Bamboo is also easy, as you count the number of bamboo fronds. The exception is the 1 of bamboo, which looks like a bird. I label these with a lower case b after the number: 1b – 9b.


Numbers are a bit different unless you read Chinese. Here is how to identify them. The 1, 2 and 3 of numbers have 1, 2 and 3 vertical lines across the middle of the top of the tile. The 4 looks like an Arabic W. The 5 is the most complex symbol: it has an up and down stroke followed by what appears to be a lower case h with a cross at the top and an underline. The 6 is a stick figure without the body segment. The 7 appears a bit like a t, the 8 like a broken upside down v, and the 9 is like a cursive r. I label numbers with a lower case n: 1n – 9n. Some of the numbers are shown below.



Winds are easy to identify, as they have the initial of the wind direction in black in the upper left corner of the tile: N, S, E, W.


There are three dragons: red, green and white. The red dragon is a stick with a circle colored red. The green dragon appears like a green bush or a green flame. And the white dragon is a blue square. I have a red and a blue dragon shown below, with the numbers already on the graphic:



Seasons are found on green tiles. You can match any season with any other season.




Gridding off the board

On this board, there are 5 rows from top to bottom, starting from the 2 of wheels (or dots) in the middle top and ending with row 5, which has a south wind on the far left side. I am going to label these rows 1 through 5.


There are 13 columns, which I label A through M. Leaving the rows in place, here are the columns.


Notice that some of the tiles, like the 2 of bamboo (2b) on row 4 (partially hidden by the West wind (Wwind)), sit between full columns. These are half columns, which I label by adding an x. In particular, this is column Ax. Here is a shot showing the rows and the half columns.


And here are the rows, columns and half columns.


Designating Position

To designate a position, I also need to indicate the level of the tile. On this board, the 4 tiles on row 4 (starting with a flower and ending with the West wind) are at level 4. The North wind located at 5-M is at level 2 (there is one tile underneath it).

My location designator consists of row, column and level, with hyphens in between. The North wind at the lower left, for example, is 5-M-2. I will add the tile name (or type in flowers or seasons) in front of the location. Here is a map of a few positions, so you can understand the system.
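For what it’s worth, the designator scheme is regular enough to parse mechanically; here is a throwaway Python sketch of the row-column-level format described above:

```python
def parse_position(designator: str):
    """Split a 'row-column-level' designator like '5-M-2' into its parts.
    Half columns carry an 'x' suffix on the column, e.g. '2-Ix-3'."""
    row, column, level = designator.split("-")
    return {"row": int(row), "column": column, "level": int(level)}

print(parse_position("5-M-2"))   # {'row': 5, 'column': 'M', 'level': 2}
print(parse_position("2-Ix-3"))  # {'row': 2, 'column': 'Ix', 'level': 3}
```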



As pointed out on the Mahjong Facebook page, you have to focus on the bottom row. This is normal when you are solving a Mahjong puzzle, as you should focus on three areas.

  1. Long lines
  2. Tall stacks
  3. Tiles that cover more than one tile

Not focusing on these areas is bad. This particular puzzle is a bit insidious as there are some tiles that only have one pair and are located under other tiles that have only one pair. This makes it hard to clear off the board. Here are the areas to look out for:

North on top of north

The two 6 of bamboo tiles (the only pair of them in the game) on top of row 5

Two white dragons on row 5 at level 2

Two red dragons on the bottom level of row 5:

The general strategy here (as mentioned on the Facebook page) is to start row 5, level 3 from the left side, then level 2 from the right side and level 1 from the left side. This is not 100% true, as you will see.

The first step is to start clearing row 5, level 3, from the left. There is a 5n here at 5-B-3. You can see another 5n under one of the flowers. The designation for clearing this tile, fitting the scheme is 5n 2-J-2 5-B-3. To get to this tile, you have to clear flowers, which are designated flowers 2-Ix-3 3-G-4.


You then continue on the left side of row 5, level 3, for 2 more moves. Here are the first four moves. Counterintuitive moves are marked with a star.

Move Tile Location 1 Location 2
1 flowers 2-Ix-3 4-G-3
2 5n 2-J-2 5-B-3
3 Star 1n 2-C-2 5-C-3
4 Wd 1-J-1 5-D-3

The 3rd move, above, is counterintuitive. You would think you should clear off 5-C-3 with 3-K-2, but I found that move makes the puzzle unsolvable. At the end of these moves, you have a board that looks like this:

It should be clear enough how to use the moves table now. Here are the moves to clear off the rest of the top of row 5. Note that we will start from the left side now (purple), as we will with row 2 (blue). Also note this table has some rows that say ALL, followed by a number of pairs. This means clear everything of that type off the board.

Move Tile Location 1 Location 2
5 Nwind 4-H-4 5-M-2
6 Wwind ALL (3 pairs)  
9 Rd 4-F-3 4-I-4
10 4n 2-Hx-4 4-E-2
11 Swind ALL (2 pairs)  
13 Ewind 3-I-4 5-K-3
14 9n 1-I-2 4-Bx-2
15 8n 3-B-3 4-Bx-1
16 4b 1-I-1 4-Ax-2
17 2w 2-Gx-3 4-G-3
18 3w ALL (2 pairs)  
20 7w 3-Ax-1 4-L-1
21 5b 1-H-1 3-I-3
22 Mug Nwind 2-G-2 5-I-3
23 4w 3-Ex-1 5-H-3
24 6w 2-F-1 2-K-1
25 5w 2-J-1 4-J-3
26 3b 4-I-3 5-G-3
27 1n 3-K-2 5-F-3
28 Wd 3-J-2 5-E-3

Move 22 (beer mug icon) is also critical, as there is a north wind on a north wind in position 2-G (at levels 2 and 1). Exchanging either of these north winds for the north wind at 4-G-2 will leave the puzzle unsolvable. At this point, the top of row 5 should be cleared, as shown below:

The next step is to clear off both row 4 and 5. Because of the 6 of bamboo (6b), you have to start row 5, level 2 from the right. Here are the steps to clear row 5, level 2 (purple) and much of row 4, level 2 (blue). 

Move Tile Location 1 Location 2
29 1b ALL (2 pairs)  
31 9w 3-G-3 4-G-3
32 Nwind 1-G-1 4-G-2
33 7b 5-K-1 5-L-2
34 Ewind 3-Kx-1 4-Ax-1
35 6n 2-A-2 3-G-2
36 2b 2-Hx-2 3-Jx-1
37 8b 3-Bx-1 3-I-2
38 Swind 3-H-2 4-H-2
39 1w 2-I-2 5-J-2
40 Seasons ALL (2 pairs)  
42 3n 4-J-2 5-H-2
43 9b 4-K-2 5-G-2
44 4n 3-G-2 5-F-2
45 Gd 2-H-2 5-E-2
46 6b 5-B-2 5-D-2
47 1w 3-Ix-1 5-C-2

Here is the board after these moves:

You should now be able to solve this without my help, but here are the moves.

Move Tile Location 1 Location 2
48 flower 2-H-1 5-B-1
49 2w 1-G-1 4-K-1
50 Gd 4-J-1 5-C-1
51 2n ALL (2 pairs)  
53 8w 4-H-1 5-E-1
54 3n 5-F-1 5-K-1
55 Rd 5-G-1 5-J-1
56 7n 5-H-1 5-I-1
57 3b 3-Hx-1 4-G-1
58 2b 2-A-1 3-Gx-1

Hope this helps.

Peace and Grace,

Twitter: @gbworld

Analyzing the Michael Brown Shooting

I got a chance to go through all of the witness testimony the grand jury released (minus any embedded in the Grand Jury transcripts). I did this to see whether this was a clear cut case of a police murder or something more justified.  Everything in here is taken from my notes, and I have linked numerous documents so you can read the originals. You can find the bulk of the documents linked from this page. You can also find the same on USA Today (which also includes public videos entered into evidence) and the New York Times site.


Below are a couple of maps of the area. The first was taken from the web and shows some locations. This is the map that gave me the idea to determine where people were.

And here is my graphic, taken from Google maps, with the locations of various witnesses marked (these are keyed by the same numbers shown in the post). NOTE: X in a box is where Michael Brown’s body was at the end of the altercation. The blue dot is where Officer Wilson’s car was stopped.



  • Numbers are color coded per the description in the next section. Red are those that appear to be false (either outright lying or recollecting from false testimony) and orange are suspect in at least part of the testimony.
  • I am not completely sure of the order of witnesses 30, 34 and 40. I assume 30 to be closer, as he states the cop was between him and the boy, and 34 burned out, which would have been easier near the parking lot entrance. It is known both were in their approximate locations; just the ordering is suspect.
  • The exact positions of 38 and 45 are also unknown, so I have placed them near the dumpster.
  • Witness 44 started back a bit further and likely ended a bit closer than his marker, based on his testimony. The marker is an approximation of mid-point and good enough for this exercise (to see why examine discredited witnesses).
  • I have no clue which apartment witness 57 heard the shooting from or which stop sign witness 64 was at. There are no stop signs at any street where a witness would have visibility to the shooting, at least not on the current views on Google maps.
  • The question marks indicate where I think the witnesses were based more on testimony than their statement of location. In other words, they are educated guesses.
  • The phone symbol is where one of the videos was shot that appears to fit Wilson’s side of the story. There is also one from Piaget Crenshaw (witness 43 – correction on this; witness is male, Piaget’s location very close, however) and apparently one from witness 41, although I cannot find it.

Wilson Interview

NOTE: Prior to the incident, Wilson had gone to a call in the Sharondale apartments, northeast of the point of the altercation. This is on track 324, page 1 of the radio log. Wilson got a call of a “stealing” going on. The information on the suspect is on page 7 (track 358) of the radio log. On track 364, page 8, he puts himself back into service (10-8). He then asks 25 or 22 if they need assistance.

NOTE: The stealing info is interesting, as people have stated there is no way Wilson could have known about the shoplifting, as the owner had not called anything in.

Wilson saw two black males walking down Canfield. He told them to get on the sidewalk and they said they were almost to their destination. He asked “but what’s wrong with the sidewalk” and they answered “fuck what you have to say”. Wilson then called for backup (track 369, page 9) and backed up his vehicle. Officer 25 answers the call (log page 10, track 372).

After backing up about 10 feet, Wilson tries to open the door and Brown says “What the fuck you gonna do?” and shuts the door on him, leg out. He then tries to reopen the door, Brown says something, closes the door and starts punching through the window. The first blow is a glancing blow. Brown then hands the cigarillos to Johnson and hits him again. Wilson thinks of using his mace, but it is on the wrong side, so he grabs his gun and aims it at Brown, who puts his hand over the top of the gun and pushes it towards Wilson.

Wilson states a shot was fired in the car and Brown ran. He exits the vehicle, states “shots fired. Send me more cars”. Brown stopped, turned, made a grunting noise and the most aggressive face he had ever seen and then charged with his hands in his waistband. Wilson ordered him to stop and then fired multiple times. Wilson yelled at him to get on the ground again. Brown, still hand in waistband, charged Wilson, and Wilson fired until Brown went down. He was about 8-10 feet away at this time. He then called “send me every car we got and a supervisor”. About 15-20 seconds later, two marked cars show up and started blocking off everything.


Dorian Johnson Story

Dorian Johnson was the closest to Brown at the time of the initial contact.

August 9


Witness Interviews

These are taken in order of when they were interviewed. Closest to the event are usually more accurate, although this is not always the case. I have included markings to indicate which witnesses are not credible and explain the reasoning later.

* (asterisk) is used to indicate a witness I find not credible. Red is used to indicate a witness that is likely lying or recollecting things rather than going by actual observations (I believe 2 are outright lying, while 2 are filling in way too many gaps), while orange is used for a witness that has some useful details but some that are not likely to be true (either filled in somehow (recollected?) or added due to personal feelings). Some of the witnesses in these categories state the shooting was execution style, others take the other extreme, and some are in the middle.

Same day as shooting

* Witness 16 (8/9/2014 2:19 PM and 9/11/2014): Located approximately 150 feet from the police car in a third floor apartment (assume this was also 2973 Canfield Court instead of building next door, as she could not see where Brown was in the street from the other building). She heard a “skirr”, or car “screech”  and looked out her bedroom window and saw Brown tussling with Wilson in the car. She then heard two shots, one which she assumes is the one that hit the building across the street, and saw officer Wilson get out of the car with a  red face, which she assumed was from anger. She then moves from the bedroom to the living room and sees Michael Brown coming towards Wilson with his hands up, at his shoulders, and then shooting him at 2-3 yards. Under a second interview with the FBI, she describes getting her purse, then walking to the living room, then goes and gets her cell phone so she can record the scene (she has a video after the shooting that can be viewed) and then has to open the blinds in the living room. She then admits it took too long to get back and she only saw the tussle and then Brown laying in the street.

* Witness 43 (8/9/2014 4:12 PM): In her father’s apartment, upstairs. Not sure where this apartment is, as the description does not get into location. I believe this is Piaget Crenshaw. NOTE: She later mentions seeing Brown with his hands up, although that is missing from her initial testimony (she looked away), and she has a lot of details. Can confirm it is not Piaget from the end of the video.

She heard screaming from Brown as he was reaching into Wilson’s car. Wilson tried to taze Brown, but failed. He then pulled out his gun and tried to shoot Brown, but failed. Brown started running and he shot him in the back. She looked away, as she thought it was over, but then heard 4-5 more shots and went back to see Brown lying in the street.

Witness 12 (8/9/2014 (twice, first at 4:40 PM, second 4:45 PM), 8/13/2014 and at least 1 media interview): This is Michael Brady, based on testimony. Location: first floor at 2973 Canfield Court, approximately 150 feet from the police cruiser. Heard an altercation and came to the window to see Brown and Wilson fighting through the window of the police cruiser (“arms goin’ through the window”). Heard 2 shots from in the car and then Brown runs. Wilson then shoots two more shots (8/13/2014) or 6 shots (8/9/2014) or 3-4 shots (8/13/2014). Brown then turns around, curled up, and walks towards Wilson with his hands “probably up”, and Wilson shoots 3-4 more times (most interviews) or 4-5 times (MSNBC interview). Brady’s testimony is all over the place, but the main thread stays consistent. Later testimony seems more recollection than remembrance.

Witness 25 (8/9/2014 4:59 PM): In the kitchen on the phone when his fiancée called him to the window. Saw Wilson walking after a running Brown, shooting steadily at him. Brown then turned to Wilson and was shot 2-3 more times.

Witness 22 (8/9/2014 5:06 PM): Located in a third story apartment, not sure of exact location. Sleeping, but awakened by the first two shots (from inside the car?). Boyfriend told her to come to the window. Saw Brown grabbing stomach or side and Wilson just kept firing. Then saw Brown kneeling in the street with his hands up and Wilson shooting him 3 times.

Witness 32 (August 9, 2014 6:40 PM): In a car right behind Wilson, about 10 feet back, heading west on Canfield. Saw Brown and Johnson in the middle of the street walking towards him. Wilson said something to Brown (assumed, not heard) and Brown tussled with him in the car and a shot rang out. Brown backed up and ran east on Canfield, past the witness, while Johnson tried to get in the car. Both Brown and Wilson passed by the car, and through the rearview mirror the witness saw Brown fall after 3 more shots. He then took off quickly, driving partially over the grass.

Witness 40 (August 9th journal entry): Heading east on Canfield very close to scene (since states Johnson told him to leave after shooting, most likely either in front of or behind witness 34, putting him 5-15 feet from the police car). White person, evidenced by first line in journal entry stating “need to understand the black race better so I stop calling blacks n*******s and start calling them people”.

Saw police car back up and almost hit Brown and Johnson. Brown hit the door and looked pissed. Saw Brown hit car door with belly (this fits Johnson’s description of Wilson trying to open the door). Johnson hits mirror on police car. Saw Brown in the window and then saw Brown running. Wilson got out of car with his left hand on his face and right hand grabbing his gun. Wilson yelled something at Brown, although the witness could not hear it as other people were already yelling outside. Brown turned around at Wilson (“with attitude”) and started running at Wilson, head down, like a football player. Wilson shot 3 times, but Brown continued advancing. Wilson fired 2 more shots, and Brown backed up. Mentions Johnson told him to leave.

August (not same day)

Witness 10 (8/11/2014): Working in the building at 2973 Canfield Court (same building as witnesses 12 and 16); not sure of exact distance, as he was taking things in and out of the building (120-150 feet). Saw Brown wrestling with Wilson through the window of the car and a shot went off. Brown took off running (witness thought Wilson had been murdered). Brown turns and makes a movement and then starts charging Wilson. Wilson shoots 4-6 shots. Brown stops and Wilson stops firing. Brown then charges Wilson again and Wilson fires 4-5 more shots and Brown falls to the ground.

Witness 14 (8/12/2014 and 9/24/2014): Located in apartment that looks straight down Canfield towards West Florissant (same building as Witness 10, 12 & 16?). Saw two young men walking down street approached by police cruiser. Then saw Brown tussling with Wilson while he was still in the truck. Heard a shot and then saw Brown run 25-30 feet from the car. Wilson gets out about 3 seconds later and comes around car to passenger’s side. Yells “stop” at Brown. Brown takes 2-3 steps and was shot 2-3 times. Brown lifts hands to shoulder level (pleading?) with his palms out at his rib cage level, and continues to step forward. Wilson yells stop again and then “lets loose” on Brown.

Witness 30 (August 13, 2014): Was driving east on Canfield, heard 3-4 shots and saw Brown hit in the leg. Brown turned around and lifted at least one hand. Witness thought he had a gun. Wilson then shot Brown.

* Witness 35 (August 13, 2014): Located in 3rd floor window, nearest apartment building to Northwinds apartments (2909 Canfield Drive), which puts him more than 200 feet from where Brown ended up in the street and close to 300 feet from Wilson’s car. Witness self-described as Brown’s best friend and Johnson’s cousin. Says he was on the phone when the first shot rang out (missed fight in car). Ran to window and saw Brown on knees with blood rushing from shoulder or rib cage. He heard Brown tell Dorian to run for his life and then saw Wilson get out of his truck and shoot him in the head. At this time, Brown was no more than 5 feet from the police cruiser. Also mentions saw Brown shot 4 times, then heard him pleading for his life and saying “don’t shoot me”, heard 4 more shots as he ran downstairs to the scene. Stated Wilson shot him 10 times at close range.

* Witness 48 (8/14/2014, also phone call follow up): In the middle row of a van on Canfield turning left at a stop sign (had to be Coppercreek Road to be near a stop sign and able to see any of the altercation). This would put her at least 200 feet from the police car and farther from where Brown was shot. Heard 2 shots and then sees Brown run. Brown turns to Wilson and runs at him, hands balled up. Wilson yells stop at least three times. Wilson shoots three times, stops and then shoots Brown again.

Witness 46 (8/15/2014): In car heading east on Canfield. Pulled over for police car to pass, as he/she had an outstanding warrant for tickets. If the story is true (doubtful), he was right in front of the shooting location. He saw “two mens” talking to a cop (who just passed him not too long ago?). The boys were heading east, the cop was heading east. The cop then stuck his gun out the window and shot Brown, and ended up shooting about 2 shots. After the shooting stopped, the witness got out of the car and asked Wilson if he could help and Wilson told him “get the f*ck on”.

* Witness 45 (8/16/2014): Was near the dumpster at 2973 Canfield Court (in front of Brady’s (witness 12) apartment), about 75-100 feet from police car. A police car with 2 officers approaches Brown and Johnson, drives off and then backs up and almost hits them. Brown then goes to the window “either to defend hisself or to give up”. Saw Wilson grab Brown by the collar and then the first shot rang out. Brown backed up with a blood spot on his shirt. Wilson fired the second shot as he started to run. Brown turns around and puts his arms up and Wilson guns him down. The second officer then gets out of the vehicle.

* Witness 44 (8/16/2014 and 9/25/2014): Walking on the south side of Canfield, near the leasing office. Approximately 150 feet from the police car. Saw Wilson back up the car, hitting Brown and Johnson and then Brown running around to the driver’s side of the car for about 15 seconds. Brown then fights Wilson through the window, while Johnson runs away. A shot rings out and Brown backs up and checks himself, and then runs away. The gun falls to the ground. Brown runs about 10-12 steps and then turns to face Wilson, who is now out of the car (number of steps changes a lot in the interviews). Brown puts his hands up like “I’m done” or “arrest me” and Wilson shoots him 7 or 8 times, 6 of them while Brown is backing up.

* Witness 42 (8/16/2014): In apartment on Canfield (very close to shooting, so most likely 2943 or 2947 Canfield). This would put him somewhere between 30-50 feet of where Michael Brown was shot, although some of the scene would have been obstructed depending on which apartment he was in.

He was on the phone when he heard the first shot (in the car?). Saw Wilson running after Brown and then shoot him in the back. Brown turned around and threw his hands in the air. Wilson unloaded his gun in Brown’s direction, unknown number of shots (“it was gruesome”). Shooting started about 20 feet apart, but ended with an execution style shooting with Brown on his knees (less than an arm’s length) while saying “don’t shoot”. While Brown was on the ground, Wilson continued to shoot the body.

* Witness 38 (8/16/2014): Was near the dumpster at 2973 Canfield Court (In front of Brady’s (witness 12) apartment), about 75-100 feet from police car. Saw Wilson talking to Brown and Johnson. They move on and Wilson puts the car in front of them. The witness continues to the dumpster, but hears two shots. Thought it was just a warning shot, but then he heard another 6-7 shots. Ducked down and when he came back up the police were on the scene putting out crime scene tape.

* Witness 37 (8/18/2014): Driving east on Canfield, near leasing office, 125-150 feet from police car. When he came on the scene he saw Wilson dragging Brown into the car by his shirt. Hears a shot and Brown breaks loose. Brown gets 10-15 feet, with Wilson shooting 3-4 shots from his vehicle. Brown then turns around and puts his hands up. Wilson then casually walks up to Brown and shoots him 2-3 times at point blank range. He then stands over the body and fires 2-3 shots into it while it is on the ground. In total, 10 shots were fired.

* Witness 41 (August 26, 2014, also here): On second floor “between buildings” in a building with the west side facing Copperfield, so it could only be 2973/2975 Canfield Court (Brady’s building) or 2977/2999 Canfield Court. After hearing the first two shots (from inside car?), the witness went to the lot. Saw Wilson get out of the car and shoot Brown execution style, while on knees at close range. Wilson fired 9 shots so fast the gun had to be a full automatic, which the witness thought was illegal for police use. In later testimony the witness mentioned he/she saw Brown from the back only and saw a bruise on Officer Wilson’s face.


Witness 34 (9/3/2014): Located in car heading west on Canfield, very near the police car (10-20 feet). Saw Brown and Wilson tussling for about 2 minutes. Wilson was holding Brown’s shirt and Brown threw “a couple blows”. Wilson was leaning towards center console. Shot rang out and Brown ran. He ran about 2-3 car lengths and then placed his hand on a brown car (as if winded?). Wilson came out and seemed “shaken”. Saw Brown turn around and come at Wilson and he was shot a couple of times. The witness then took off quickly: “I turn around right there in front of the officer truck and I burn out”.


Witness 57 (11/6/2014): Located in apartment in area (not sure where, but it does not matter). Heard shot and then saw tussling in car. Thought it was over until other shots were fired. Got cell phone and went outside to record the scene.

Witness 62 (11/6/2014): Treated officer Wilson at NW Healthcare. Wilson came in complaining of jaw pain. Stated he was punched twice and requested x-rays of jaw to see if it was broken. Had redness in the area of his jaw and some bruising. Also had scratch marks on the back of his neck.

Witness 64 (11/11/2014): At stop sign when he/she heard 2 shots. Saw Brown running and appeared to be hit by a shot in legs or hip. Turns to Wilson with hands up at chest level. Wilson about 8 feet away when shot.

Other Documents

I have examined a few other documents:

  • Medical record has diagnosis reason 95909 (other/unspecified injury to the face and neck) and 78492 (jaw pain). The primary diagnosis code 920 (contusion of face/scalp/neck except eye). TRIAGE: Chief complaint “he needs x-rays he was hit in the face a couple of times”. MEDICAL SCREENING: Skin pink, dry and warm (evidence of injury). Pain index of 6 out of 10. Prescriptions: Naprosyn 500mg tablets, 20, for pain
  • Wilson’s drug screen – negative on all counts (also see this)
  • Brown’s drug screen – Positive for cannabinoids (DELTA-9-THC and 11-NOR-DELTA-9-THC-COOH in blood) and 11-HYDROXY-THC and 11-NOR-DELTA-THC-COOH in urine. I personally find this uninteresting, as it was marijuana, but you can draw your own conclusions.


Here are my thoughts on this, organized into different headings:


First, there are some witnesses I find problems with. Some are major (I would think they are outright lying) and some are suspect, colored to a belief system or similar (of some value, but should be taken with a grain of salt). I have stated why I think the witnesses should be discredited, for discussion.

Discredited witnesses – Either outright liars, as in the case of witness 46 and most likely witness 35, or someone who has pieced together so much it is difficult to accept any of the testimony at face value.

  • Witness 35 was too far to have heard anything Brown said through a closed window (other witnesses closer to the scene did not hear anything said from either party). He also describes the shooting taking place next to the cruiser (within 5 feet) and at point blank range. Since the resting place of Brown was much farther, and a point blank shooting would have left powder on more than just Brown’s hands, this does not fit the facts. He also would have missed the last 4 shots as he was heading downstairs and had no angle, so could not have seen the “don’t shoot” hands up when it happened. More than likely his grief over losing a friend caused him to make up most of his testimony. He is warned numerous times not to lie under oath.
  • Witness 44 describes being so blind he could not find his friends in his high school café without his glasses. Admits not wearing contact lenses or glasses on the day of the shooting. He is located about 150 feet from the police cruiser and farther from where Brown hit the ground. More than likely he saw a blur and filled in the story with what he heard talking to other people.
  • Witness 45 describes seeing two officers, so it is more likely he/she came up after the investigation started, which was shortly after the shooting took place (within minutes). The witness also describes blood on the shirt, after Wilson shot Brown from in the car.
  • There is no credibility whatsoever to witness 46’s statement. He has Brown and Johnson heading the wrong direction. He has the number of shots completely wrong. Has Wilson sticking his hand out the window to shoot Brown and then sets himself up as an attempted hero, where he is foiled by a cop telling him to “get the f*ck on”.

Suspect witnesses – may be something of value, but the story has evidence of coloring to fit a certain viewpoint.

  • Witness 16 admits she did not see anything beyond the altercation in the police cruiser, as she was retrieving her phone and purse and did not see the shooting. Her testimony on the screech is useful, as it corroborates the car backing up before the fight, as is the information about the volleys of shots (matches other witnesses). The description of anything beyond the altercation is recollection, filled in by other people’s stories.
  • Witness 37 describes an execution that could not have taken place based on physical evidence. About the only thing that can be taken at face value is there was a tussle in the car. He is also warned numerous times not to lie under oath.
  • Witness 40 is suspect in his shading of whether Brown had “attitude”. For this reason, I cannot accept the “charging like a football player”, especially since there is only one other witness that has Brown running at Wilson (there are many saying he was advancing). The events, overall, match other witnesses, but the racial beginning of the journal entry suggests a bit of bias in how the scene was reported. It is also a personal journal and not witness testimony.
  • Witness 41 could not have seen much of the altercation from the second floor landing (examining angles on Google maps) and would not have seen Brown on the street on his knees once on the lot level, as there were cars in the way. The physical evidence refutes a close shooting, and had the witness seen Brown from the back only, he would have never seen him on his knees. About the only thing I find interesting is the description of the bruises on Wilson’s face, but these would have had to have been witnessed later than at the time of the shooting, as it takes time for bruising to come up.
  • Witness 42 describes an execution style shooting at very close range, which is refuted by physical evidence. He also mentions Wilson finishing Brown off while he was lying face down in the street, which would not match any of the wounds. It is also clear, from the interview, that he knows about the media stories being told.
  • Witness 43 has embellished the story to include a taser, which Officer Wilson did not carry. This makes the rest of the testimony suspect. Since she did not witness the actual shooting, her testimony, beyond stating Wilson and Brown were struggling in the car, is rather useless.
  • Witness 48 was over 200 feet from the scene and in the middle row of a passenger van, making it hard to see everything that was going on. Also describes Brown charging Wilson, as did witness 40.


Here is what happened, as best as I can piece together from witness testimony, with some weight towards those that I cannot find issues with on my first reading.

Michael Brown and Dorian Johnson are walking down the middle of Canfield Court, heading east to Dorian Johnson’s apartment on the third floor of 2909 Canfield Drive. We know Brown and Johnson were recently at the Ferguson Market and Liquor, located at 9101 W Florissant Avenue, where Brown shoplifted some cigars and pushed the owner when confronted. It is approximately 11:40 AM on August 9, 2014.

Officer Darren Wilson is driving a Ferguson Police SUV west on Canfield, having left a domestic disturbance in the Northwinds apartments. He comes upon the two and tells them to get out of the middle of the road. Some witnesses (not included in the testimony) have stated he said “get the f*ck out of the road” or “get the f*ck on the sidewalk”. Wilson drives by the two and then one of them says something back to Wilson (this is assumed by some witnesses). Wilson backs up the truck quickly (creating a “screech” or “skirr” described by witness 16), pulling up next to Brown and Johnson, and attempts to open the door. The door bumps Brown (Johnson’s testimony). Brown then starts fighting the officer. Wilson is leaning away from Brown, but holding onto his shirt. Brown hits Wilson at least 2 times (witness testimony as well as medical reports and testimony). Two shots then ring out (one that hits Brown on the thumb at minimum (autopsy), and one that hits a window or window frame across the street – 2964 Canfield Court).

Brown and Johnson then run. Johnson ducks behind a car driving west on Canfield after trying to get into the car (witness 34), while Brown continues down the middle of the street. Wilson gets out of the car shaken from being hit and heads after Brown, albeit a bit slower than Brown. He shoots at Brown at this time, more than likely causing the forearm wound.

Brown then turns and faces Wilson. Wilson tells him to stop but Brown advances. The speed of the advance is unclear. It is also unclear whether Brown advances and is shot and then advances and is shot again (two volleys) or Brown advances once and is shot. He has his hands up at chest or shoulder level, but it is unclear whether the hands are in a surrender position or balled up.

After the altercation, there is an onsite investigation, after which Wilson is taken to Northwest Healthcare with evidence of bruising on his face.

Additional bits

Here are some additional bits I have found.

Wilson’s Testimony

Here are some images that corroborate Wilson’s testimony.

NOTE: There is evidence of the fight in pictures (taken at NW Healthcare):

There is also evidence of a shot fired inside the car

here is another from the scene that shows the same piece sticking out

There is also forensic evidence of blood in the vehicle, as well as blood splatters

From one side of the body, he appears to have his hand in his waistband, as stated by Officer Wilson

From the other, his hand is out, so perhaps it was up, or up at chest level

Video of Scene

I find this one interesting, as it supports Wilson’s claim Brown rushed him (fast forward to 6:30 and start listening as one witness talks about Brown coming back at Wilson, after someone asks “why his body come this way though”).

Next thing I know he coming back towards unintelligible(?). The police had his gun drawn.

More is repeated at about 7:12 about Brown coming at Wilson.


It is hard to draw a firm conclusion, as the testimony is all over the place. Here are the things that are clear:

  1. Brown and Johnson were walking down the center of Canfield
  2. Wilson stopped and confronted them
  3. They walked off and Wilson backed up
  4. There was an altercation in the vehicle and Brown was in the vehicle at some point, not just with a hand around his throat outside the police car
  5. Brown struck Wilson at least once on the left cheek
  6. A shot was fired in the car that struck Brown
  7. Brown got clear and ran off
  8. Wilson pursued
  9. Brown turned to face Wilson before being fatally shot

Some witnesses state Brown rushed Wilson, per Wilson’s testimony, including one overheard on a video shot just after the event. Others state he had his hands up. It is obvious some witnesses have changed their stories over time, while others outright lied about events they saw.

Brown’s hand wound appears to be at close range as the official autopsy states there is particulate matter in the hand, indicating a shot at close range to the hand. Brown’s private autopsy does not have this finding, but states there is no stipple.

It is also highly likely at least one shot hit Brown as he ran away (forearm), although those stating he had his hands up offer another potential story.

Robo-SPAM Recruiters

To try to weed through job SPAM (I get about 150 recruiter emails a day), I set up a rule in email to send them back an email.

I used the following ruleset:

  1. I only responded to emails marked with the urgent flag
  2. I only responded if the words in the subject had titles I knew were off target (like “entry level”, “mid level”, etc.)
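For anyone wanting to replicate this, the two rules boil down to a simple predicate. Here is a minimal sketch of that logic in Python; the keyword list and the function name are my illustration, not the actual mail rule, and hooking it up to a real mailbox (IMAP, Outlook rules, etc.) is left out:

```python
# Off-target titles from the ruleset above. Matching is case-insensitive.
OFF_TARGET_TITLES = [
    "entry level", "mid level", "developer",
    "analyst", "engineer", "local only",
]

def should_auto_reply(is_urgent: bool, subject: str) -> bool:
    """Apply both rules: the message must carry the urgent flag AND
    its subject must contain at least one off-target title."""
    if not is_urgent:
        return False
    subject_lower = subject.lower()
    return any(title in subject_lower for title in OFF_TARGET_TITLES)
```

So a flagged-urgent email with “Entry Level Java Developer” in the subject gets the canned reply, while a non-urgent email, or an urgent one that actually matches my resume, is left alone for manual review.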

This is the email I sent (yes, it is probably a bit snarky):

Thank you for your interest in me filling your current job opportunity. You are receiving this email because your email was marked urgent and had certain words in it that do not fit my resume, like “entry level”, “mid level”, “developer”, “analyst”, “engineer” or “local only”. My current job position is senior architect who specializes in pre-sales, enterprise architectural assessments and new client setups. I have not worked as a developer for many years. I also do not work as a recruiter, so I cannot help you fill positions. If you could, can you adjust your response methodology and only send me positions that actually fit my resume or place me on your “do not call” list?

More than likely, you did a buzzword search on the job boards and emailed your position out to every single person who had the buzzword in their resume. If you received this email more than once, you did the same search on multiple boards and use the same mass email process on each.

If I find your position is of interest, I will send you a response to determine how we should proceed.

Peace and Grace,
Gregory A. Beamer

Here is one email that shows what is going on (not a major shocker):

Hi Gregory,

I am so sorry to disturb you for each and every position. As we are using some kind of third party software, it detects the resumes automatically and send our opening to everyone in the list. Unfortunately, we can’t remove/change your email from that list. But, you can Unsubscribe from that if you see a link in the bottom of the email. Thank you and have a great day. 

Thanks & Best Regards,


Technical Recruiter

So the crux is this:

  1. They are using a third party software to copy and paste job requirements into the system
  2. The software automatically searches resumes for any buzzwords in the description
  3. Every time the resume search gets a hit, it automatically sends out an email (mail merge template) to everyone that has a hit
  4. The businesses that use the software cannot stop from SPAMming me
  5. In order to stop receiving these emails, I have to make the effort to unsubscribe from their list

#5 is the most telling, as I get around 150 emails a day, most of which have unsubscribe links, from nearly as many companies. In order to stop receiving emails that incorrectly target me, I am the one having to make the effort to stop them?

I know this Robo-SPAM is efficient, at least from the standpoint of reaching 10,000 people with a single click, but is it actually working? If TekShapers, Xchange Software and ZIllion Technologies (et al) did a study to determine whether it is actually working, would they answer yes? Is the damage to their reputation worth using software that sends out more incorrectly targeted emails than correctly targeted emails? Is the response rate high enough that it warrants such a lazy method of doing “business”?

Apparently they think so. I will never work with any of these guys. I thank “S” (name masked above) for actually taking a bit of time to respond to me. Of the more than 1000 jobs responded to with my auto replier, less than a dozen have actually contacted me back, primarily asking for my resume and some type of email explaining what exactly I would like to do for a living. The rest are like vegans in a grocery store, and I am a slab of meat.

Peace and Grace,

Twitter: @gbworld

How to be an Amateur IT Recruiter

I have received more than 350 emails over the past 2 days with job opportunities. While this sounds great (I am in demand?), most simply show the hallmarks of an amateur recruiter.

As a service to those desiring to be amateur recruiters, as opposed to professional recruiters, I offer the following guide to help you in your quest. Please include each of the following in your daily habits.

Email Basics

The first thing you have to master is email basics.

1. Mark Every Job Opportunity Email as Urgent

Contemporary wisdom says people pay attention to urgent emails more than non-urgent ones, so make sure you mark your email as urgent. True, nothing in your email is really urgent to the recipient (me? other IT professionals?), but who cares. This is not about me, it is about you. The only possible kink in this plan is other people might also be marking their job opportunities as urgent. Let’s look at a picture of my inbox job folder:


Wow, 100% of the job folder is marked Urgent. My suggestion is to unlock the secret “super urgent” button in your mail client and use it so I really know your email is urgent … to you.

You should also bear in mind that adding words like “Immediate need” adds more punch.

2. Search By Buzzwords

There is no better way to reach masses of IT professionals and simply scream “I don’t know what the hell I am doing” than firing off job emails based on buzzwords. When I get a $10 an hour support tech email, I am thrilled at the opportunity to increase my stress level at a thankless job that pays a piddly fraction of what I am making now.

And while you are at it, do the same blind search on multiple job boards using their massive Spam email generator to generate thousands of emails in a single keystroke. Efficiency at its best.

3. Send Multiple Emails To the Same Person

Nothing wastes more time than making sure you are not sending out multiple responses. Why take more than a few minutes to complete your entire day’s worth of work? Peruse multiple job boards and send out emails using the same buzzword. Sure, about 95% of your list just got multiple emails and knows you are a lazy moron, but perhaps you reach a handful that are only on one board.


The blessing here is you not only show me you have no attention to detail, but you show me I am just a piece of meat to you, completely invalidating my existence. Yes, I want you to be my recruiter, as I like feeling like I am nothing.

4. Bold and Highlight Lots of Sh*t in the Email

Sometimes when I am reading an email, I miss the important stuff, so make sure you not only bold it, but you highlight it as well. Otherwise, I might send in a resume for this VB6 position that pays nothing, in a state far away from my home. Thank God Abhilasha bolded and highlighted Perforce, or I might have done something dumb like send my resume in for the job. Whew! Dodged that bullet.


Even better, bold and highlight the entire email.


Or if you really want to annoy people highlight and use red bolding.


Bonus points to Calvin since I have never worked on a criminal justice application and that is a required skill.

7. Copy and Paste the Entire Email from the Client or Account Manager

Why take time to edit stuff out of the email before sending it out? That takes time, and time is money. Let the candidate see how truly lazy you are by including stuff that makes absolutely no sense to the recipient.


Yes, that one is at the top of a job email.

8. Send Out Emails to People Who Fail an Absolute MUST qualification

For example, let’s say you have a job for a person that must be local to Florida. Send it out to everyone. There are bound to be a few people that are ACTUALLY FROM FLORIDA (CAPPED in response to the email).


This particular email really took the cake as it was a complete forward (see #7).

Make The Candidate Do the Work for You

Why actually interview people when you can have them send you all the interview details?


This is a simpler one. Some have dozens of questions that have to be filled out.

Phone Basics

You cannot reach the coveted complete amateur title without also having amateur phone skills. Here are a few things that can help you in this regard.

1. Show You are Using a Very Old Resume

I, like many people I know, no longer include a home phone on my resume. In fact, I have not included the home phone for about 6 years. When you call me on my home phone, it pretty much says “I don’t have your latest information” and “you are just a slab of meat to me”.

2. Ignore The Candidate’s Requirements

Your job is to convince the candidate to take the job no matter what. Don’t let things like “I currently make twice the max you are offering for this position” deter you from suggesting how much of a “great place to work” it is. If you can keep the candidate talking, maybe he will work for minimum wage.

It should also be of no importance that the candidate states “I do not want to move to Siberia”. If the position is in Siberia, then your job is to keep hounding them until they decide to take it. As long as they are still listening to you, you have a chance, right?

3. Insult Their Spouse

Since your culture devalues women, there is no reason to be polite to an IT candidate’s wife.

4. Be Demanding

That the candidate stated they are busy is of no consequence. Demand they take your call NOW and treat them like the meat slab they are.

5. Call Numerous Times

This is a two-parter. In the first part, hang up and call back when you get their answering machine. Many candidates will screen the call until you irritate the crap out of them. In the second, if a candidate states “send me the job requirement and I will look at it later”, call them back every half hour until they convince you they really aren’t interested. Do this even if you get an email stating they are not interested, and repeat #2 above.

6. Illustrate You Have Not Read the Resume

Nothing says you are working toward a complete dufus award more than asking me a question about an item that appears prominently on my resume. Bonus points if it is both prominent and sits at the top of my resume.


While much of this is tongue in cheek, it still amazes me how many think recruiting is simply a matter of finding slabs of meat and sending them to a processing plant. Considering recruiting companies, even subcontracting recruiting companies, can make a good amount with candidates, you would think we would have more professionals out there.

The reason there is a problem is very simple. IT, especially on the development side, has been a seller’s market for over 10 years. This has led to the worst developer on the team making more than 90K (or $50+/hour consulting), but it has also made it somewhat profitable to be lazy as hell and do bare minimum when it comes to recruiting.

There are plenty of professionals out there, and I know quite a few locally. But my inbox is routinely filled up with the amateur yo-yos.

In the next few days, I will show you my method of combating the yo-yo farm.

Peace and Grace,

Twitter: @gbworld