The CASA model with a JavaScript SPA


This will be a short one, but it came from a question asked while I was working on a green field project for auditing work. The user interface was developed using React, a JavaScript SPA (Single Page Application) framework. In the exercise, I set up both an API (service layer) solution and a web solution, both using the Core AS Application (CASA) model. The model is shown below:

The CASA (Core AS Application) Model

The two solutions fit together like so, with the web application connecting to the API application via an adapter known as a service proxy. The actual naming was CompanyName.ProductName.Persist.Audit.ServiceProxy.

Website connected to an API

The idea was that there was a core for the web project, which was developed in React, and one for the API, which was developed using ASP.NET Core Web API. A few questions arose during the setup.

Why not have the React website connect directly to the API instead of using its own back-end pieces?

The suggestion was a picture that looks more like this:

Web application connected directly to API

The answer to this question is multi-fold.

  1. Consistency: You will be using this pattern for all other types of APIs. While using different patterns is not an issue, the more consistent your code is, the easier it is to …
    • Maintain – If there is a consistent organization, you can find the code in the same place for every connection. There is always an adapter to abstract the connection.
    • Train – If you consistently follow the pattern, you can easily train new developers.
  2. Abstraction: You lose the abstraction in front of the API. In this case, there is no issue, as both are internal, but think about some other capability, like single sign-on. If you started with an internal API to log someone on, but the company switched to another solution, like Okta, you would now have to completely set up Okta and then switch over (from experience, this is not always easy).
  3. Future Proofing: Connecting directly trades abstraction through contracts (interfaces in .NET) and adapters for configuration. If you ever switch APIs, that is not an issue as long as the contracts are identical; if not, abstraction protects you from the change. The reality is something will change. Being able to control changes in small steps is a much easier way to future proof your software.

All three of these are important. Consistency in patterns definitely helps with maintainability and training, which allows you to be more efficient in your development efforts. But the abstraction is the bigger one for me, as it future proofs the solution. With an abstraction to build adapters on, I don’t have to deploy all of the software at one time in a “big bang” integration.
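To make the adapter idea concrete, here is a minimal sketch of what a persistence contract and its service proxy might look like. The names (IAuditRepository, AuditRecord, AuditServiceProxy) and the route are illustrative only, not the actual project code.

```csharp
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Illustrative contract the core codes against.
public interface IAuditRepository
{
    Task<AuditRecord> GetAsync(int id);
}

// Illustrative state object.
public class AuditRecord
{
    public int Id { get; set; }
    public string Description { get; set; }
}

// The adapter (service proxy) hides the fact that persistence is an HTTP call to the API solution.
public class AuditServiceProxy : IAuditRepository
{
    private readonly HttpClient _client;

    public AuditServiceProxy(HttpClient client) => _client = client;

    public async Task<AuditRecord> GetAsync(int id)
    {
        var response = await _client.GetAsync($"api/audits/{id}");
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonSerializer.Deserialize<AuditRecord>(
            json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
    }
}
```

If the audit API is ever replaced, only this adapter changes; everything coded against IAuditRepository stays put.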

A couple of other notes on this:

  • There are plenty of examples of React in an ASP.NET Core application. Having the React front end talk back to its own web application and then connect to the API through libraries is well documented.
  • When you use adapters, they adhere to contracts through interfaces, fitting the I in SOLID, the Interface Segregation Principle. If you merely connect to the API directly in JavaScript, you have to change implementations through configuration, which means you lose the interface. Can you create an interface in JavaScript? Yes, at least in TypeScript (and you can approximate one with ES6 classes), but this is a bit newer to many developers. Note that the use of contracts also helps with Dependency Inversion (the D in SOLID).
  • Separating everything out helps you adhere to the Single Responsibility Principle, or the S in SOLID.

Can we remove the core on the React application solution and connect the ASP.NET Core application directly to the Service Proxy Persistence classes?

This one comes with the assumption that you will not have business logic in the web application. In this case, the Core becomes a pass-through with methods that only contain two lines of code (instantiating the persistence object and invoking a method). This is not a bad thing, as it does not add much weight to the call and it makes it easier to add any business logic that might be required later. And since this is .NET code calling the persistence library, the change stays within the same solution (a different project) and is more of a refactor than a huge change in architecture. Personally, I would leave the pass-through, as my experience shows business logic tends to get added as the product matures.
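Reusing the illustrative IAuditRepository and AuditRecord from the sketch above, a pass-through core method might be as small as this (the persistence object could equally be instantiated inside the method, as described; constructor injection is shown because it keeps the core testable):

```csharp
using System.Threading.Tasks;

// Illustrative pass-through core. Today it only delegates to persistence,
// but it gives future business rules (validation, authorization, enrichment) a natural home.
public class AuditCore
{
    private readonly IAuditRepository _repository;

    public AuditCore(IAuditRepository repository) => _repository = repository;

    public Task<AuditRecord> GetAuditAsync(int id) => _repository.GetAsync(id);
}
```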

Summary

The power of the CASA model is that it sets up software in a way that is easy to reconfigure. I won’t say it is the only way of doing this. If you have another method to set things up in a consistent manner and leave abstractions between portions of the application, then use your own method.

No matter how you set up software, be sure you have abstractions in place that allow you to change out pieces without having to go through major surgery on your applications.

Peace and Grace,
Greg

Twitter: @gbworld

Why Code Organization is so Important in Software


I am a pattern person. This makes me great at reading and quickly memorizing maps. I can look at a Sudoku board and start mentally filling it in without having to think about the numbers too much. I can also look at a software architecture problem, from a single application to integrated systems, and start figuring out how to map from where we are to where we need to be. I can also see patterns in software development, not only in what we are doing, but in the how and the why.

Today, software development is difficult. Some of this comes from the overly optimistic approach most developers take when sizing a task. Some of it comes from the practices we use. And still other parts come from the business, or how we interact with the business. In this post, I want to take a look at why code organization is a foundational skill for developing quality, maintainable, and acceptable solutions.

First a bit of history …

Finding the Root

There are a few things I have noticed in the various places I have consulted or worked over the years.

  1. There is very little testing over the code base.
  2. Refactoring is treated as a luxury, done only when there is time.
  3. Agility is “practiced” in the development team, but not throughout the Enterprise.
  4. Very little is automated (in DevOps terms, I see far more CI and far too little CD – a topic for another post?).
  5. There is too much technical debt in the solution.

It is not that we don’t have knowledge of the many practices that would alleviate these tendencies; it’s just that we have precious little time. This is partially due to our optimism, but also due to business pressure compounded by the debt built up from the lack of time to refactor. And so it goes on.

My first exploration into this topic came from my first DevOps assignment. How do you take an organization from fully manual deployment to the point where a code check-in could potentially push all the way to production, in discrete, controlled steps?

I started from the end, the production deploy, and walked back to the beginning, the code check-in, with the bookends – or start and end states – being fully manual deployment via documents and fully automated deployment triggered by a code check-in (another lovely topic for a blog post?).

Here are the steps to get from manual to fully automated (and, as you will see, the steps for getting development under control).

  1. Organize the Code Base
    • Find seams in the code
    • Add interfaces
    • Refactor
    • Add Tests
    • Repeat this cycle
  2. Test – Ensure code is covered for acceptability
  3. Refactor
    Repeat the first three steps, as needed
  4. Productize the code base
  5. Create a CI build
  6. Create a CD build for Development
  7. Add in Feedback loops
    Continue steps 5 through 7 until the pipeline is complete

As I spent more time with additional clients, and different issues, I found organizing the code to be the foundation for almost all of them. Here are my observations:

  • It is difficult to automate the delivery of software without properly organized code
  • It is difficult to test for acceptability without properly organized code
  • It is difficult to maintain an application without properly organized code
  • It is difficult to modernize a legacy application without properly organized code
  • It is difficult to add new features without properly organized code

In some cases, I would replace “difficult” with “impossible”, or at least “nearly impossible”. So, how do we get to organized code?

Mi CASA es su CASA

Around 2008, I was working with a client who wanted to build a web application with the same features as a desktop application they currently had. Having spent time in Enterprise software development, my first thought was this: If you are writing the web application in the same language as the desktop application, why do you need to write a new application?

It is still a valid question today. As a result of asking that question, I started thinking about developing a framework for software development. In 2010, I came up with a name: Core As Application. It is succinct and expresses the fact that our core business logic really is our application. It is also hard to digest. Wanting a cool name like “scrum”, I shortened it to CASA (which still means Core AS Application). It took a while, but I realized it was neither a framework nor a methodology, but a model. Below is a copy of the model from 2018.

The CASA (Core AS Application) model

The main point here is that the center circle is the starting point when thinking about an application. It is built on top of domain-centric objects (generally state only, no behavior) and any frameworks required. It is fronted by a means of presenting the core to a “user”, which can be a human or another system working in sessions of time. It is backed by some means of persisting, or preserving, state between those sessions. And it is topped by enough tests to ensure acceptability of the application to its owners and users.

Key to the model is the idea that the core is linked to the presentation and persistence portions of the system via contracts. In most languages, you call these contracts interfaces. Using these contracts, or interfaces, is what lets you do things like switch from SQL Server to Oracle, or build a new mobile application with identical features to your current website.
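As a minimal sketch of that idea (the names are illustrative, not from any real project), the core codes against a contract, and each persistence technology gets its own adapter:

```csharp
// Illustrative contract between the core and persistence.
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// Illustrative state object.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// One adapter per persistence technology; the core never knows which one it is handed.
public class SqlServerCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // ... SQL Server-specific data access would live here ...
        return new Customer { Id = id };
    }
}

public class OracleCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // ... Oracle-specific data access would live here ...
        return new Customer { Id = id };
    }
}
```

Switching from SQL Server to Oracle then becomes a change at the composition root (or in configuration), not a rewrite of the core.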

Yes, it gets a bit more complex, as you end up with multiple “cores” in the average system, because you have multiple “applications” or products (as shown in the diagram below, where the website connects to one or more APIs). It can also be taken to the extreme in a microservices architecture.

CASA across solutions/products

The main rule is “each class goes into a library that is identified by its purpose and/or function”. From this, you can start to see the value in including the area of concern in the project, library, and/or assembly names (with apologies for using the word assembly if you are a non-.NET developer).

Something New?

Hardly. It is the idea of creating a taxonomy for your code. In the COM days, it was common to think of applications in terms of layers or tiers. Microsoft coined the term n-tier architecture, although it was generally three tiers (Client/UI, Business, and Data – see below).

N-tier architecture

And the concepts align well with clean architecture, as you are starting from the business domain out.

Clean Architecture

Think of the domain models at the bottom of the CASA model as Entities, the Core as Use Cases, the contracts and adapters as Controllers, Gateways and Presenters, and the outer points (presentation and persistence) as the blue outer circle in Clean Architecture (External Interfaces, DB, Devices, Web, and UI).

We can also see it in Hexagonal architecture (developed by Alistair Cockburn, one of the original Agile manifesto guys).

Hexagonal Architecture

In Hexagonal architecture, the outside points are presentation and persistence, connected to the Core via ports and adapters. As an even better view of Hexagonal architecture, here is one with a bit of an overlay of clean architecture.

Hexagonal Architecture with Clean Architecture concepts embedded

The Setup

When I am consulting, one of the first things I do with a solution is add the folders corresponding to the types of code. The folders I set up are:

  • Core: The core business logic, aka the “application”. These are behavior objects that contain code specific to the business.
  • Data: This will contain any projects specific to data persistence mechanisms, like a database. In the .NET world, a SQL Server Data Tools project would be an example of what lies here. These are generally segregated out early to productize the database separately from the application so it can be an independent product (avoid “big bang” deployment).
  • Domain: The domain entities and exceptions. These are state objects, generally just getters and setters.
  • Framework: These are behavioral in nature but can apply to multiple applications and possibly even multiple domains. I will eventually “retire” these from the application solution, create a product out of the library, and put it in a package manager (NuGet for .NET). I will occasionally subdivide this folder further. The primary example would be separating out utility from the core framework libraries.
    • Utility – A general product focused on performing a specific cross-domain function. For example, a print library.
  • Persistence: Behavioral objects related to persisting state between sessions. This will generally be repository objects over a database or service proxies to contact APIs.
  • Presentation: This is where the application connects to “users”, which can be human or other systems. I subdivide this folder into specific types of presentation.
    • API – A web-based API endpoint, generally REST in current times. In the past, I have also called this Service, but that can be confused with a server-based service.
    • Desktop – A user interface to run on the desktop.
    • Mobile – A user interface for a mobile device.
    • Service – A server-based service (think Windows services).
    • Web – A user interface using HTML or other web constructs, except APIs.
  • Test – Anything related to automated testing of the application. The following subfolders are common, but I will have to write a more extensive post soon on testing, as there are different patterns one can take.
    • Accept – Tests focused on acceptance. While one can cover a great deal of the acceptability of the solution in unit and integration tests, these are explicitly acceptance tests. An example might be FitNesse. In general, automate everything you can here so user acceptance is more focused on preferences of layout, colors, etc.
    • Integ – Tests for integration. In these tests, actual dependencies are utilized rather than mocks. It should be noted the unit tests will be identical in signature to the integration tests, but will utilize mocked dependencies. NOTE: Tests in products like Selenium will fall into the Integration folder if they are included in the solution.
    • Load – Tests for load testing. This is where load specific tests would be if you include them in the solution.
    • Unit – Tests focused on testing a unit of code. All dependencies will be mocked with known return values. It should be noted the integration tests will be identical in signature to unit tests, but will utilize the actual dependencies.

As you go through the above list, you should note the two in bold italics (Data and Framework). These are the types of projects you productize and remove from solutions. The data projects will become automated database deployment, and changes should be completely decoupled from deployment of applications (in other words, you should take small steps in data changes and make them ahead of application deployment rather than creating a need to deploy both at the same time). The Framework projects should move out so they can be independently updated and packaged for all applications that use them.

How does this look in development? Below is an empty solution in Visual Studio with the folders.

Visual Studio Solution Using CASA folders

In Practice

The practice varies slightly between green field (new development) and brown field (currently deployed development) solutions. Let’s look at each independently.

Green Field

With green field development, I will start by creating the various projects in the solution in the proper folders, as shown below. 

Sample CASA solution with projects in named folders

The general naming convention is:

Organization_Name.Product_Name.Area.SubArea

In this naming convention, the area matches the folders: Core, Domain, Persistence, Presentation, and the like.
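As a hypothetical example (the organization and product names below are made up, and I am using the Persist/Present shorthand mentioned later), the convention plays out like this:

```csharp
// Hypothetical names following Organization_Name.Product_Name.Area.SubArea.
namespace Contoso.AuditTracker.Core { }                 // core business logic (behavior)
namespace Contoso.AuditTracker.Domain.Models { }        // domain entities (state)
namespace Contoso.AuditTracker.Persist.ServiceProxy { } // adapter that talks to an API
namespace Contoso.AuditTracker.Present.Api { }          // REST endpoint project
namespace Contoso.AuditTracker.Test.Unit { }            // unit tests
```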

It takes a bit more time, but I like to match up my code folders in the solution with the file system. This is done as follows.

Create the new project by selecting the folder it will go in, right-clicking, and choosing to add a new project.

Adding a new project to the solution

Add the correct type of project. In this case, I am setting up the Core business logic in .NET Core (unintentional play on words?).

Adding .NET Core for the Core

Name the project. In this example, it is the Core project for the CASA_Example product. Note that the folder is changed (highlighted).

Add the folder name to the file location

Once I am done, all of the projects will be in their correct folders, both on the file system (left) and in the solution (right).

Side by side to show the file system matches the view in Visual Studio

Brown Field

If the brown field product was already organized and the team followed the practice religiously, there is nothing to do. But if this is a solution that requires reorganization, you have some work ahead of you. This is a huge topic and worthy of a blog post of its own; I will cover it soon in a post on legacy modernization.

One common example would be a solution in which all, or nearly all, of the code was present in the user interface/presentation project. In this case, you have to start organizing the code. These would be the steps I would take to do this.

  1. Rename the presentation project. This is not completely necessary, but doing this and moving the project to the correct folder on the drive is a good first step, as it starts setting up the process in your team early. A few steps here.
    • Go ahead and fix the namespace of the files in this project now. This is not mandatory, but it helps you avoid having to fix using statements later. This may take a bit of time, but it pays to keep your library maintained.
    • Double check naming on custom folders and adjust this now, as well.
  2. Search for boundaries/seams in the code. The natural seams are:
    • Any code that accesses a persistence mechanism
    • Any code that has core business logic (use cases)
    • Any code that serves a utility (or helper) function
  3. Migrate this code into a folder in the project with the proper area name.
  4. Create an interface for the new class. This is your contract (see the sketch after this list).
  5. Create tests.
  6. Refactor. You may have to run back over different steps above at this time.
  7. Repeat steps 2 through 6 until all of the code that is not presentation related has been segregated. I start with persistence, then framework, then core, and so on. The idea is to migrate out the dependencies that are farther out in the clean architecture diagram first (or focus on the far right of the CASA model first).
  8. Migrate code to new projects in the proper areas. Scroll back up to green field to see how to do this.
  9. You may, again, have to iterate.
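Here is a minimal before/after sketch of the seam-finding cycle: locating a persistence seam in a presentation class, capturing it as a contract, and moving the implementation behind an adapter. All names are illustrative.

```csharp
// Before: data access buried in a presentation class (simplified).
public class OrderStatusPage_Before
{
    public string GetStatus(int orderId)
    {
        // ... inline data access mixed in with presentation logic ...
        return "Shipped";
    }
}

// After: the seam becomes a contract ...
public interface IOrderStatusRepository
{
    string GetStatus(int orderId);
}

// ... the data access moves behind an adapter that honors the contract ...
public class OrderStatusRepository : IOrderStatusRepository
{
    public string GetStatus(int orderId)
    {
        // ... the same data access code, now isolated and testable on its own ...
        return "Shipped";
    }
}

// ... and the presentation class only shapes what the adapter returns.
public class OrderStatusPage_After
{
    private readonly IOrderStatusRepository _repository;

    public OrderStatusPage_After(IOrderStatusRepository repository) => _repository = repository;

    public string GetStatus(int orderId) => _repository.GetStatus(orderId);
}
```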

I will focus on a post on legacy modernization of .NET applications some time soon to go through all of these.

Discussion

I have done this exercise with various companies with a lot of success. It is a bit difficult to get used to at first, but once the team is used to the concept, it flows nicely. It also gives a nice model to set up where code should go and makes it very easy to find the source of a problem with the code base.

When I introduce this concept, I get a few questions on a regular basis.

How strict should I be in adhering to names in my projects, namespaces, etc.? This is a naming convention. Like any naming convention, you should be very strict about using it consistently. It should be part of your code reviews, just like class naming.

But do I have to use the names Core, Domain, Persist, Present, etc.? Honestly, I don’t care if you name it Fred, Wilma, Barney, and Betty, as long as the names make sense to you. I will state, however, I think Flintstones names are stupid, but mostly because it becomes another thing to train new developers on. I can see eyes rolling when you say “In this company, the core business logic is called Fred because Fred was the head of the Flintstone household.” Whatever floats your boat. 😀

I had a client who wanted to change Present, Core, and Persist to UI, Business, and Data. I don’t particularly like these names, as not all presentations are user interfaces, not all persistence is through data storage mechanisms, and using the three sounds very n-tierish. But it was what they wanted.

I had another client who asked if Core could be App, for application. I was fine with this. I prefer not to, as I feel Core is a better representation of “core business logic”, but if you regard this as an Application, or App, roll with it.

The important part is to be consistent and rigorous in upholding your standards. When you get loosey goosey on this one, you make it more difficult to maintain your code base and you return to square one. You also risk introducing technical debt when you move away from consistency.

Can I use Clean Architecture? Sure. This would mean using the following folders.

  • Enterprise
    • Entities (instead of domain)
  • Application
    • UseCases (instead of core)
  • InterfaceAdapters
    • Controllers
    • Gateways
    • Presenters
  • Framework
    • Web
    • UI
    • ExternalInterfaces
    • DB
    • Devices

The thing I am not completely sold on here is that Presentation and Persistence both sit under the Framework area, which muddies the word Framework, as the .NET Framework would be completely different from your website.

But, if this is how you want to do it, that is your choice. You can do the same with Onion Architecture or Hexagonal Architecture.

Summary

Code organization is one of the issues I see most often with clients. It swings between single monolithic solutions and very granular ones, both in the name of making code easy to find for maintenance. Very seldom do I find code organized in a way that is truly easy to maintain.

Once your code is divided into proper spheres of influence, you will still have to take some additional steps to complete the maintainability of the solution. Tests are a key component, as is determining the names for different types of classes. As an example, do you call a core class that acts as a pipeline for a particular type of information a manager, a service, or something else?

Peace and Grace,
Greg

Twitter: @gbworld

Architecture: The CASA Model – Getting to the Core


About 9 years ago, I was looking at code from a client and dealing with how hard it was to maintain compared to new code I was writing. In the process, I started seeing patterns in the code that I saw in numerous code bases I had worked in over the years. In particular:

  1. Most code was not organized well for maintainability.
  2. Best practices were known by all developers, but few were practiced.
  3. There was very little automated testing.
  4. The systems were not domain focused.
  5. Switching from one user interface (say, a Windows Forms application) to another (web) required a lot of work, as the developers created a new application.

One issue that led to this mass of spaghetti code was the n-tier architecture idea, which gave people the principle that an application is made up of a user interface, business logic, and data. Thus, a Windows application and a web application were two different things. Naturally, since they were different things, you had to build them separately. In reality, a web application and a Windows application, as we know them, are not applications. Web and Windows are two means of presenting applications.

That simple paradigm shift led me to start considering the methodologies used for development, so I created what I thought was a new, or at least varied, methodology. I called it Core as Application. The original premise looked like this.

 

Core As Application, circa 2010

In this model, the Core IS the Application. This means an application is made up of domain objects (State) and actions on the domain objects (Behavior). In short, software is a bunch of BS? 
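A minimal, made-up sketch of that BS split: the domain object holds state, and a core class holds the behavior that acts on it.

```csharp
// State: a domain object that only holds data (illustrative names).
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Behavior: a core class that acts on that state.
public class OrderCalculator
{
    public decimal ApplyDiscount(Order order, decimal percent)
        => order.Total - (order.Total * percent / 100m);
}
```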

To facilitate the application, I need a means of presenting information to, and interacting with, users (these may be human users or machine users). And I need a means of persisting state between sessions. I should be able to switch presentation and persistence mechanisms without writing a NEW application, as the Core IS the application.

NOTE: Core AS Application was a progression of Core Is Application so I could have a nice acronym.

There were a couple of things I did not realize at the time.

  1. Core as Application is NOT a methodology. It is a model. 
  2. Other people were working on the same model at the same time, although focused very much on the architecture. You know them as Onion Architecture, Hexagonal Architecture, and Clean Architecture.

On point #1, I refined the concept to the Core AS Application model (CASA for short, since it needed a cute name). On point #2, all of the “architectures” mentioned deal with the same concept: domain models are the center, followed by business logic. I can cover these in another post in the future.

Before moving forward, let’s cover a few terms as I envision them right now in CASA. The definitions are oversimplifications to an extent, but help keep you, the reader, on track.

Capability: A business capability is a function or service a business provides to its clients, internal and/or external. Applications should be centered around capabilities, and systems around business problems that may span multiple capabilities.

Application: The software solution centered around a business capability. In CASA, the application will be a single core with contracts separating it from presentation and persistence mechanisms.

System: A program or set of programs to solve a business problem that can span multiple capabilities. A system can be the same as an application for simple business problems, but it can also require integration with other applications. As an example, an application that connects to multiple services to accomplish its work is a system. In CASA, if you see more than one core, it is an integrated system.

Solution: Solution is used here to indicate a packaging of separate projects to create an application. In this regard, it is synonymous with a Visual Studio solution. A project will be a single Visual Studio project.

In this post, I want to cover CASA as it exists today and why it works well in development.

The CASA Model 2018

The general principle of CASA is that the application is the business logic at the center of the model. This means I can switch presentation and persistence without changing the application. I sit this core (behavior) on top of domain models (state), as the domain is the foundation of the application. I place contracts on either side of the Core, as I want to facilitate testing (the contracts serve more than just aiding tests – more on this in a moment). And testing, primarily automated, sits at the top. The model looks like this today.

There are a couple of differences between this model and the original.

  1. The domain is no longer part of the core, but the foundation. 
  2. Contracts have been added between the core and the persistence mechanism (in general, this will be a database)
  3. Contracts have been added between the core and the presentation mechanism (normally a user interface, but it could be a service or some other machine-to-machine mechanism).
  4. Tests have been added on top.
  5. Adapters have been placed next to the contracts. 

In general, each of the various boxes will be contained in separate projects in a solution (assuming .NET, but the concept can be used in other programming paradigms). Here is a sample solution showing how this is used. 

For a better understanding of the value of the model, let me tie it back to some other industry concepts you might be familiar with, some of which were included in the design of the CASA model: Domain Driven Design, Contract-First Development, SOLID Design Principles, DRY, YAGNI, and KISS, although some of the concepts are implicit rather than explicit.

  • Domain Driven Design – I don’t use all of Eric Evans’s domain driven principles, as I am rather stringent on separating my BS (behavior and state). I have two common domain projects in every solution: one for models and another for exceptions, and I do not include behavior in a domain object unless it is specific to the object. As an example, an address may have a ToString() method that delivers a concatenated address – this is behavior, but specific to the object (see the sketch after this list).
  • Contract First Development – The concept of writing contracts prior to developing is not new. In C++, you create interfaces first, in the form of .h (header) files. Delphi, which may, in some ways, be considered the inspiration for C#, also had a contract first approach. With REST APIs, the contract is a bit more implicit and found in the URL. For CASA, the idea is to write the contracts first to ensure you understand the seams in your application.
  • Test Driven Development – I am definitely one who believes writing tests first helps you ensure your code is focused on what your users desire. I use code coverage metrics with my teams, but coverage is just a measure, not a goal. 100% code coverage with bad tests is as unacceptable as very low coverage. I should also note acceptability is the real goal, not writing tests.
  • SOLID Design Principles – Organizing code into specific areas of concern covers the S (Single Responsibility), while contracts handle the I (Interface Segregation) and aid the D (Dependency Inversion). The O (Open/Closed Principle) and L (Liskov Substitution) are not directly addressed by CASA, although using interfaces will help you think twice about breaking contracts, especially when you end up with multiple classes utilizing the same interface.
  • DRY – Don’t Repeat Yourself. This is not inherent in the model, so you have to create a process to handle it, as well as governance. For the process, when you find repeated code in multiple classes (or worse, projects), you should start refactoring ASAP. A couple of normal methods to employ would be to move the common code into a base class (for two classes on the same interface, for example), or to move the code into a separate class and/or project and reference it. As for the governance, if you are not using code reviews as part of your development process, you should.
  • YAGNI and KISS – Also not directly covered in CASA, but you find the separation of the code makes it much easier to solve problems through testing and focusing on one problem at a time. Once you grasp the concept, you will find it easier to keep things simple and avoid bloating your code. If you do try to think ahead, you will find the model fights against you. In the future, I plan on talking a bit about testing with CASA and will add the link here.
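To illustrate the Domain Driven Design point above, here is a minimal sketch of a state-only domain object with the one kind of behavior I allow: behavior specific to the object itself (the property names are illustrative).

```csharp
// Illustrative domain object: state only, plus behavior that concerns only its own state.
public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }

    // Object-specific behavior: producing the concatenated address.
    public override string ToString() => $"{Street}, {City}, {State} {PostalCode}";
}
```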

Expanding CASA

As I started examining the model, I started thinking about solutions I had seen in various Enterprises. The area where CASA seemed to fail was when you added a services layer to the application. First, some developers had an issue with the paradigm shift involved in thinking of services as a presentation concept. A bigger issue is what happens when you add a tier of services, as shown below:

I adjusted this by moving services to their own folder in the solution. This helped with the paradigm shift. Recently, I have found this also works with mobile applications, as you can use the same persistence library for the mobile application and the server-side application.

I should note the service holds one application and the user interface, which uses the service, holds another (at least potentially). So I had two cores, which means another solution. From an organization standpoint, you have two applications. When I add a second service, it becomes more evident. If you focus on microservices, which are mini applications with a service front end (yes, a simplification), it is absolutely clear an application with a service, layer or not, is actually multiple applications. I am not one for having too many projects in a solution, but as long as you can develop without massive compile and test cycles, it is a place to start. Solve, then refactor.

From the standpoint of the model, it became clear I simply needed to chain two cores together with a service, which looks like this:

Moving to Mobile

In the past 6 months, I have gotten heavily focused on mobile presentation in React and Xamarin. As React cannot be directly compiled using Visual Studio, I won’t cover it here (I actually think I might be able to solve that problem, but that is another story). Xamarin is a bit easier, but changes a few things in organization.

In particular, with Xamarin, you end up having to compile at least parts of your core for each platform, especially in an offline-capable application. From the mobile device, you will be persisting both to the service (server side) and to the mobile database, and then to a server-side database when the mobile application contacts the server to persist. To organize for this, I do the following:

  1. Create a subfolder called Mobile under the Present folder. This is where the Xamarin.Forms project and the iOS and Android specific libraries go (I will likely rework this in the future, as these are not technically presentation libraries, but utilities).
  2. Make sure any libraries that are used on both the server and the mobile device are .NET Standard. NOTE: I think .NET Core can work, but Standard provides a few other options.
  3. If you require any functionality that uses the full .NET Framework, ensure it is behind a service, as Xamarin cannot compile the full framework to native code.
  4. To facilitate the mobile application saving locally and on the server, configure it to use two implementations of the same contract: one to save locally and one to save via the service (a sketch follows this list). The service itself will use a single implementation of the contract, of course.
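Here is a minimal sketch of point 4: one contract, two implementations, with the mobile application choosing between them. The names (INoteStore, Note, the route) are made up for the sketch.

```csharp
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Illustrative state object and persistence contract shared by the mobile app and the server.
public class Note
{
    public int Id { get; set; }
    public string Text { get; set; }
}

public interface INoteStore
{
    Task SaveAsync(Note note);
}

// Implementation 1: save to the on-device database while offline.
public class LocalNoteStore : INoteStore
{
    public Task SaveAsync(Note note)
    {
        // ... write to the local mobile database (e.g., SQLite) here ...
        return Task.CompletedTask;
    }
}

// Implementation 2: save through the service when the device is online.
public class ServiceNoteStore : INoteStore
{
    private readonly HttpClient _client;

    public ServiceNoteStore(HttpClient client) => _client = client;

    public async Task SaveAsync(Note note)
    {
        var content = new StringContent(
            JsonSerializer.Serialize(note), Encoding.UTF8, "application/json");
        (await _client.PostAsync("api/notes", content)).EnsureSuccessStatusCode();
    }
}
```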

I currently have the service in the same solution as the mobile application. When I separate it out as a separate product, I will have two solutions pointing to the same contracts for persistence. This may necessitate making the contracts their own product to avoid development in one solution breaking the other. I think the team is small enough not to need to jump through either of these hoops right now (NOTE: I envision an entry on product mindset this week and will link it here).

So, here is the solution at present. (NOTE: This is a work in progress, as I am still developing this. Thus far it is working)

This seems awfully hard?

Any time you switch paradigms, it is hard. For years after .NET was released I saw developers coding standard VB in VB.NET projects, as they had not made the paradigm shift. Microsoft made it easy to avoid making the shift by adding training wheels to VB in the form of the library Microsoft.VisualBasic.Compatibility. During the early .NET years, I used to recommend switching to C# as the paradigm shift would become part of learning the new language syntax. 

Once you get the paradigm shift, however, you will notice it becomes very easy to find code when you discover a bug. Part of this is in the organization the model provides, but a good deal of it is in learning the concept of a domain (which is outside the scope of this post).

Does it Work?

I have yet to see a case where it does not. But I am also not arrogant enough to state you must use CASA to get the benefits. Below are the design principles I am using. If you already adhere to all of these principles, you might think twice before adopting what I have written here. Some of these principles go beyond the concepts in CASA and will be covered on this blog later.

  • Domain Driven Design – Focus on the business problem you are solving. Domain Driven Design will aid you in this principle. Your state objects should be named in terms familiar to your business.
  • Contract First Development – Understand the seams in your application before coding anything around the seams. To simplify: each different technical concern you deal with is a seam. In CASA, there is a contract for persistence, one for the core, and one for any dependency, or capability, that can be swapped out. If you ever feel another team could write a separate implementation, you write a contract. Why first? Because, like Test Driven Development, you learn about your code before you code it.
  • Requirements – Write requirements for acceptance. This is a pre-step for testing, as understanding what is acceptable will make it much easier to write a full enough set of tests.
  • Test Early – I am not a 100% purist in test driven, but I am a stickler for writing a test first on any problem you are not 100% sure of (which is almost everything) and adding tests on top of those you are sure of. (NOTE: Over time, you will find that for some things you were 100% sure of at first, you should have written the test first, as you were cocky in your confidence.) One more point: Acceptability should be a goal for all testing, not just acceptance testing. In standard TDD style, this means writing more test methods. In behavior driven testing, it means acceptability in your specifications.
  • Coverage – Code for acceptability, not percent. Percent is a good metric to find areas that might be lacking, but never incent on code coverage, as humans are very good at being creative to get incentives. Good coverage includes the following: 1) covering business use cases completely to ensure user acceptance 2) covering unacceptable cases 3) covering the fringe. I will focus on this in another post (and link here).
  • Testing errors – If a bug is found, or a lack of acceptability is discovered, you immediately write a failing test. If you cannot write a test that covers the condition that FAILS, you do not understand the bug well enough to fix it, so you SHOULD NOT try to fix it. (NOTE: lack of acceptability is not the same thing as a bug. If a user discovers something does not work how he or she intended, it is not a bug, but a change in understanding of what is acceptable – same process, but don’t have your team take the blame when someone realizes their vision does not work as they would like).
  • Build the tests up from the unit – If you are focused on covering for acceptability, you will find the exact same tests you run as unit tests are often run as integration tests. With contracts, you will mock the contract for your unit test and use a concrete implementation of the contract for integration tests (see the sketch after this list). If you use behavior focused tests, you will find you are running the same tests, just changing dependencies (did that blow your mind? If not, you are already in the Zen garden). Because I see the persistence store (generally a database) as the castle that protects the king (data is king for business), I write an integration test on this early on. I might write it first, especially on non-user-interfacing persistence applications. But I sometimes unit test the presentation first to ensure I understand what the user intended and then go to the database. Knowing how the user will interact with the system and how I store data (an optimized data store, not just a CRUD mock of the business objects), I can mock the persistence layer and unit test all core business functionality. If what I just wrote sounds a bit confusing, I will write a blog entry about this soon.
  • Product Mindset – This is the most complex topic (and deserving of its own blog entry – link here in the future). A few things I recommend. If you find you are no longer actively developing on part of your code, consider making it into a product. If you started with CASA at the start, these will likely end up being utility classes that you want to reuse, but this technique can also serve well if you have not used CASA, or similar, in the past. The main point is once you find a capability, add a contract and move the code to its own project, then to its own solution, and finally put it in your own NuGet repository. (I guess I also need to talk about capability thinking in the future? – link here in the future.)
  • Agility – This really has nothing to do with CASA, at least not directly. But Agile gets you thinking in small bits of functionality at a time, which works very well with CASA.
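To illustrate the “build the tests up from the unit” point above, here is a minimal xUnit-style sketch in which the same test logic runs as a unit test or an integration test depending only on which implementation of the contract is supplied. All names are made up for the sketch.

```csharp
using System.Threading.Tasks;
using Xunit;

// Illustrative contract and core class under test.
public interface IOrderStore
{
    Task<decimal> GetTotalAsync(int orderId);
}

public class OrderCore
{
    private readonly IOrderStore _store;
    public OrderCore(IOrderStore store) => _store = store;
    public Task<decimal> GetOrderTotalAsync(int orderId) => _store.GetTotalAsync(orderId);
}

// The shared test logic lives in a base class; only the dependency changes.
public abstract class OrderTotalTests
{
    protected abstract IOrderStore CreateStore();

    [Fact]
    public async Task Total_IsNotNegative()
    {
        var core = new OrderCore(CreateStore());
        Assert.True(await core.GetOrderTotalAsync(42) >= 0);
    }
}

// Unit flavor: a fake with a known return value stands in for the contract.
public class OrderTotalUnitTests : OrderTotalTests
{
    private class FakeOrderStore : IOrderStore
    {
        public Task<decimal> GetTotalAsync(int orderId) => Task.FromResult(100m);
    }

    protected override IOrderStore CreateStore() => new FakeOrderStore();
}

// Integration flavor: a concrete adapter that would talk to the real dependency.
public class SqlOrderStore : IOrderStore
{
    public Task<decimal> GetTotalAsync(int orderId)
    {
        // ... real data access against the actual database would live here ...
        return Task.FromResult(100m);
    }
}

public class OrderTotalIntegrationTests : OrderTotalTests
{
    protected override IOrderStore CreateStore() => new SqlOrderStore();
}
```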

In Summary

In this entry, I focused on the Core AS Application, or CASA, model. CASA is a model that focuses on a visual representation of code organization with a focus on best practices, like Domain Driven Design, Contract First Development and SOLID Design Principles.

In the future, I will break this down further, focusing on how this works with various concerns, such as Requirements, Testing, Product Focus and Agile Development.

Peace and Grace,
Greg

Twitter: @gbworld

Windows 10: Getting around “Buy Wi-Fi” on membership networks


So I found a nice “feature” in Windows 10 that really annoys the crap out of me. It is the Buy Wi-Fi feature. It seems like it might be a nice feature at times, like when you are somewhere you cannot find free Wi-Fi (rare) and need to connect to get some business done.

The problem with the feature is that networks you have a membership on, which Microsoft has contracted with, default to “Buy Wi-Fi”, as in the following capture:

image

That’s right, with a Marriott Platinum status, sitting in a room in the hotel, the default is to buy Internet. This is really irritating.

Getting On Without Paying (In this case, “paying twice”)

So this really bugged me until I got to playing with it. If the network is one you should be able to get on, you can click on the network and the following pops up. Click on the link circled below:

image

Once you click “other options from the provider”, the logon screen for the hotel appears. In this case, it shows I am still connected from last night. The network will boot me off in a bit.

image

If you select view services, you will then see your purchase options for the network.

image

And there is one more caveat. To get on the network, if you don’t know this already, you have to download the Windows 10 Wi-Fi app from the Microsoft Store just to see the options. Until you download the app, you don’t even have the option to explore other options with the provider.

Personally, I think this is a fail and Microsoft needs to rethink it. It would be better, in my opinion, to make buying Wi-Fi the “other option” rather than the default, as the current behavior irritates the crap out of users. Oh, wait, Windows is the only option when you have certain careers?

Peace and Grace,
Greg

A foray into micro services in .NET


Last month, I was given an assignment to attend a workshop with one of our clients. The workshop, as it was called, turned into something more like a hackathon, with various groups attempting to POC some concepts. The big talk of the workshop was micro services.

If you are not familiar with micro services, you should take a look at Martin Fowler’s article. The idea behind micro services is simple. You take an application (which might be a service) and then break it down into smaller, very focused services.

As an example, consider eCommerce. You have an online store, which uses a service to handle the various actions required for a customer to order items and have them shipped and tracked. If you examine this from a micro-services perspective, you will separate out the customer shopping cart, order processing, fulfillment, and tracking into different services. As you dig deeper, you might even go further.
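Jumping ahead a little, a micro service for just one of those concerns might be as small as a single, narrowly focused ASP.NET Web API controller. This is a made-up sketch of the idea, not a recommended implementation; the route and names are illustrative.

```csharp
using System.Web.Http;

// Illustrative micro service endpoint: one controller, one narrow capability (order tracking).
public class TrackingController : ApiController
{
    // GET api/tracking/{orderId}
    public IHttpActionResult Get(int orderId)
    {
        // A real service would consult its own dedicated data store here.
        return Ok(new { OrderId = orderId, Status = "In Transit" });
    }
}
```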

In this entry, I want to dig into micro services a bit deeper, from a Microsoft .NET perspective, but I first want to give some foundation to the micro services concept by understanding how we got here in the first place. That is the main focus of this entry, although I will talk about a high level implementation and some pros and cons.

NOTE: For this article, as with most of my posts, I will define the word “application” as a program that solves a business problem. From this definition, it should not matter how the application interacts with users, and you should not have to rewrite the entire application to create a different presentation (for example, Windows to web). I define the word “solution” as a particular implementation of an application or applications.

Foundations

When we first learn to create applications, the main focus is on solving the problem. This is most easily accomplished by creating a single executable that contains all of the code, which is commonly called a monolith (or monolithic application).

TBD

Monoliths are most often considered bad these days, but there are pluses and minuses to every approach. With the monolith, you have the highest performance of any application type, as everything runs in a single process. This means there is no overhead marshaling across process boundaries. There is also no dealing with network latency. As developers often focus on performance first, the monolith seems to get an unfair amount of flak. But there are tradeoffs for this level of performance.

  1. The application does not scale well, as the only option for scale is vertical. You can only add so many processors and so much memory. And hardware only runs so fast. Once you have the most and fastest, you are at maximum scale.
  2. There is a reusability issue. To reuse the code, you end up copying and pasting to another monolith. If you find a bug, you now have to fix it in multiple applications, which leads to a maintainability issue.

To solve these issues, there has been a push toward componentizing software and separating applications into multiple services. Let’s look at these two topics for a bit.

Componentizing for Scale (and Reusability)

While the monolith may work fine in some small businesses that will never reach a scale that maxes out their monolith, it is disastrous for the Enterprise, especially the large Enterprise. To solve this “monolith problem”, there have been various attempts at componentizing software. Here is a list of a few in the Microsoft world (and beyond):

COM: COM, or the Component Object Model, is Microsoft’s method for solving the monolith problem. COM was introduced in 1993 for Windows 3.1, which was a graphical shell on top of DOS. COM was a means of taking some early concepts, like Dynamic Data Exchange (DDE), which allowed applications to converse with each other, and Object Linking and Embedding (OLE), which was built on top of DDE to allow more complex documents by embedding types from one document type into another. Another common word in the COM arena is ActiveX, which is largely known as an embedding technology for web applications that competed with Java applets.

DCOM and COM+: DCOM is a bit of a useless word, as COM was able to be distributed rather early on. It was just hard to do. DCOM, at its base level, was a set of components and interfaces that made communication over Remote Procedure Calls (RPC) easier to accomplish. DCOM was largely a response to the popularity of CORBA as a means of distributing components. COM+ was a further extension of DCOM which allowed for distributed transactions and queueing. COM+ is a merger of COM, the DCOM libraries, Microsoft Transaction Server (MTS), and the Microsoft Message Queuing service (MSMQ).

.NET: If you examine Mary Kirtland’s early articles on COM+ (one example here), you will read about something that sounds a lot like what .NET is today. It is designed to be easily distributed and componentized. One problem with COM+ was DLL Hell (having to clean out component GUIDs (globally unique identifiers) to avoid non-working applications). .NET solved this by not registering components and returning to the idea of favoring configuration over convention (a long fight that never ends?).

Competing technologies in this space are EJB and RMI in the Java world and CORBA and the CCM as a more neutral implementation. They are outside of the scope of this document.

The main benefit of these technologies is that they make it easier to reuse code, improving maintainability, and allow you to more easily distribute applications, providing greater availability and scalability. You can still choose to build monolithic applications, when they make sense, but you are not tied to the limitations of the monolith.

Services

One issue with many of the component technologies was they tightly coupled the consumer to the application, or the “client” to the “server”. To get away from this, a group of developers/architects at Microsoft came up with SOAP (Yes, I know Don Box was actually working for DevelopMentor as a contractor at Microsoft and Dave Winer was heading UserLand software and only partnering with Microsoft on XML and RPC communication, but the majority of the work was done there). With the creation of SOAP, we now had a means of creating applications as services, or discrete applications, which could focus on one thing only. That is sounding a lot like micro services … hmmmm.

In the Microsoft world, SOAP was used to create “web services”. The initial release of .NET in 2002 allowed one to create services using HTTP and SOAP as ASMX services (a type of document in ASP.NET) as well as faster RPC-type services with Remoting (these were generally internal only, as tighter coupling to the technology made them hard to use outside of the Enterprise, much less outside of the Microsoft world).

By 2006, with the release of .NET 3.0, Microsoft had merged the concepts of Remoting and ASMX web services into the Windows Communication Foundation (WCF). You could now develop the service and add different endpoints with ease, allowing for an RPC implementation and a web implementation off the same service. WCF really came to fruition about a year later with the release of .NET 3.5.

The latest service technology to enter the fray is Representational State Transfer (REST). In the Microsoft world, REST was first introduced in the REST toolkit, an open source project. From the standpoint of an official Microsoft release, it first appeared as the WCF Web API. It was a bit kludgy, as WCF works in a completely different paradigm than REST, so the project was moved over to the web group and is now implemented on top of ASP.NET MVC as the ASP.NET Web API.

Methodologies, Technologies and Tools

One more area we should look at before moving to micro services is the methodologies, technologies, and tools used to solve the monolith “problem”.

Methodologies

The first methodology that gained a lot of acceptance in the Microsoft world was the n-tier development methodology. The application was divided into UI, Business and Data tiers (note: today Microsoft calls this Presentation, Middle and Data tiers), with the push towards separating out the functionality into discrete, purposed pieces. Below is a typical n-tier diagram for an application.

Around 2010, I realized there was a problem with n-tier development. Not so much with the methodology, as it was sound, but with the way people were viewing the models. Below is an n-tier model of an application:

The issue here is people would see an application as the merger of presentation, business logic and data. Is this true? Let’s ask a couple of questions.

1. If you create a web application with the exact same functionality as a Windows application, is it two applications or one? In implementation, it was often treated as two, but if it was logically and physically two applications, you were duplicating code.

2. If you want to add a web service to expose the functionality of the application, do you rebuild all n tiers? If so, should you?

My view is the application is the part that solves business problems, or the core of the functionality. You should be able to change out where you persist state without changing this functionality. I think most people understand this as far as switching out one database server for another, like SQL Server to Oracle. When we get to the logical level, like changing out schemas, a few people get lost, but a physical switch with the same schema is well known and most find it easy to implement. Switching out presentation is what most people find more difficult, and this is generally due to introducing logic other than presentation logic in the presentation portion of the solution.

NOTE: The naming of the tiers in 3-tier and n-tier architecture has changed over time. Originally, it was common to see UI, Business and Data. The illustration above calls the tiers Client, Middle and Data. I have also seen Presentation, Application and Persistence, which is closer to the naming I would use in my models.

To better illustrate this concept, in 2010 I came up with a methodology called Core as Application (seen in this article on the importance of domain models). In this model the core libraries ARE the application. The libraries for presentation can easily be switched out, and have responsibility for shaping the data for presentation.

image

The “Core as Application” model requires you start with your domain models (how data is represented in the application) and your contracts, both for presentation and persistence. Some of the benefits of this model are:

  1. Focusing on domain models and contracts first pushes the team to plan before developing (no, there is no surefire way to force people other than tasers ;->). This is good practice no matter what methodology or model you use, but it is critical if you are going to have multiple teams working on different parts of a solution.
  2. You can have multiple teams working in parallel rather than relying on completing one layer prior to working on another. You will have to resynchronize if any team determines the contract needs to be changed, but the amount of rework should be minimal.

When you look at “Core as Application” from a SOA standpoint, each service has its own core, with a service presentation layer. The persistence for higher level applications is the individual services. This will be shown a bit later.

Technologies

We have already covered some technologies used to solve the monolith problem. COM and .NET are good examples. But as we move even deeper, we find technologies like the ASP.NET Web API useful. The technologies do not force us to avoid creating monoliths, as even Microsoft pushes out some examples with data access, such as LINQ to SQL, in a controller in an MVC application. But they do get us thinking about creating cohesive libraries that serve one purpose and classes that give us even more fine-grained functionality.

Tools

We also have tools at our service. Visual Studio helps us organize our code into solutions and projects. The projects focus on a more fine-grained set of functionality, helping us break the monolith up. If we follow best practices, our solutions end up with more projects, which can easily be broken down into individual micro services. Speaking of which, this is a good time to segue.

Onward to Micro Services

Micro services are being presented as something new, but in reality, they are nothing more than Service Oriented Architecture (SOA) taken to the nth degree. The idea behind micro services is you are going extremely fine-grained in your services. It is also stated you should use REST, but I don’t see REST as an absolute requirement. Personally, I would not aim for SOAP, as there is a lot of extra overhead, but it is possible to use SOAP in micro services, if needed. But I digress … the first question to answer is “what is a micro service?”

What is a Micro Service?

I am going to start with a more “purist” answer. A micro service is a small service focused on solving one thing. If we want to be purist about it, the micro service will also have its own dedicated database. If we were to illustrate the order system example we talked about earlier, using the “Core as Application” model, the micro-services implementation would be something like the picture below.

image

If you want to take it to an extreme, you can view micro services the way Juval Lowy viewed them in 2007 (the video is gone, but you can read the description). His idea was every single class should have a WCF service on top of it. Doing so would create the most highly decoupled system possible, while maintaining a high level of cohesiveness. Micro services do not dictate this type of extreme, but they do recommend you find the smallest bounded context possible. I will suggest a practical method of doing this a bit later.

image

One difference between Juval Lowy’s suggested “one service per class” and micro services is that the suggested service technology has changed a bit. In 2007, WCF was focused on SOAP-based services. Today, REST is the suggested method. Technically, you can develop a micro service with SOAP, but you will be adding a lot of unnecessary overhead.

Below are Martin Fowler’s characteristics of a micro service:

  • Componentization via services – This has already been discussed a bit in the previous two paragraphs. Componentization is something we have done for quite some time, as we build DLLs (COM) or assemblies (.NET – an assembly will still end in .DLL, but there is no Dynamic Link Library capability for COM inherently built in without adding the Interop interfaces). The main difference between an assembly (a class library as a .dll file) and a micro service is that the assembly/dll is kept in process with the application that utilizes it, while the micro service is out of process. For maximum reuse, you will build the class library and then add a RESTful service on top using the ASP.NET Web API. In a full micro services architecture, this is done more for maintainability than need.

  • Organized around business capabilities – This means you are looking for bounded contexts within a capability, rather than simply trying to figure out the smallest service you can make. There are two reasons you may wish to go even smaller in micro services. The first, and most reasonable, is finding a new business capability based on a subset of functionality. For example, if your business can fulfill orders for your clients, even if they are not using your eCommerce application, that is a separate business capability. A second reason is you have discovered a piece of functionality that can be utilized by more than one business capability. In these cases, the “business capability” is internal, but it is still a capability that has more than one client. Think carefully about internal capabilities, as it may make sense to duplicate functionality if the models surrounding the functionality are different enough that a “one size fits both (or all)” service would be difficult to maintain.

  • Products Not Projects – This means every service is seen as a product, even if the only consumer is internal. When you think of services as products, you start to see the need for a good versioning strategy to ensure you are not breaking your clients.
  • Smart Endpoints and dumb pipes – I am not sure I agree completely with the way this one is worded, but the concept is the endpoint has the smarts to deliver the right answer and the underlying delivery mechanism is a dumb asynchronous message pump.
  • Decentralized Governance – Each product in a micro services architecture has its own governance. While this sounds like it goes against Enterprise concepts like Master Data Management (MDM), they are really not opposites at all, as you will see in the next point.
  • Decentralized Data Management – This goes hand in hand with the decentralized governance, and further illustrates the need for Master Data Management (MDM) in the organization. In MDM, you focus on where the golden record is and make sure it is updated properly if a change is needed. From this point on, the golden record is consulted whenever there is a conflict. In micro services, each micro service is responsible for the upkeep of its own data. In most simple implementations, this would mean the micro service contains the golden record. If there are integrated data views, as in reporting and analytics, you will have to have a solution in place to keep the data up to date in the integrated environment.
  • Infrastructure Automation – I don’t see this as a mandatory step in implementing micro services, but it will be much harder if you do not have automation. This topic will often start with Continuous Integration and Continuous Delivery, but it gets a bit deeper, as you have to have a means of deploying the infrastructure to support the micro service. One option bandied about on many sites is a cloud infrastructure. I agree this is a great way to push out micro services, especially when using cloud IaaS or PaaS implementations. Both VMware and Microsoft’s Hyper-V solutions provide capabilities to easily push out the infrastructure as part of the build. In the Microsoft world, the combination of the build server and release management is a very good start for this type of infrastructure automation. In the Linux world, there is a tool called Docker that allows you to push out containers for deployment. This capability also finds its way to the Microsoft world in Windows Server 2016.
  • Design For Failure – Services can and will fail. You need to have a method of detecting failures and restoring services as quickly as possible. A good application should have monitoring built in, so the concept is nothing new. When your applications are more monolithic, you can more easily determine where your problem is. In micro services, monitoring becomes even more critical.
  • Evolutionary Design – I find this to be one of the most important concepts and one that might be overlooked. You can always decompose your micro services further at a later date, so you don’t have to determine the final granularity up front, as a micro service today can easily become an aggregator of multiple micro services tomorrow. There are a couple of concepts that will help you create your micro services, which we will discuss now: Domain Driven Design and Contract First Development.

Domain Driven Design

Domain Driven Design (DDD) is a concept formulated by Eric Evans in 2003. One of the concepts of DDD that is featured in Fowler’s article on micro services is the Bounded Context. A bounded context is the minimum size a service can be broken down into and still make sense. Below is a picture from Fowler’s article on Bounded Contexts.

When you start using DDD, you will sit down with your domain experts (subject matter experts, or SMEs, on the domain) and find the language they use. You will then create objects with the same names in your application. If you have not read Eric Evans’ Domain Driven Design book, you should spend some time learning about modeling a domain, as it is a process to get it right.

NOTE: you are not trying to make your data storage match the domain (ie, table names matching domain object names); let your database experts figure out how to persist state and focus on how the application uses state to create your domain objects. This is where Contract First comes into play.

Contract First Development

Once you understand how your objects are represented in your domain, and preferably after you have a good idea of how the objects look in your presentation projects and your data store, you can start figuring out the contracts between the application, the presentation “mechanism” and the persistent stores.

In general, the application serves its objects rather than mapping them to presentation objects, so the contract focuses on exposing the domain. The presentation project is then responsible for mapping the data for its own use. The reason for this is that shaping the data for one type of presentation interface forces unnecessary mapping for the other types. As an example, I have seen n-tier applications where the business layer projects formatted the data as HTML, which forced writing an HTML stripping library to reuse the functionality in a service. Ouch!
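Here is a hedged sketch of that idea (hypothetical types, not from a real project): the core exposes the domain object through its contract, and each presentation project does its own mapping.

// Domain model and contract exposed by the core.
public class Employee
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public interface IEmployeeService
{
    // The core serves the domain object as-is; it does not format anything for a UI.
    Employee GetEmployee(int id);
}

// The web presentation project maps the domain object to its own view model.
// A different presentation (a service, a mobile app) would do its own mapping.
public class EmployeeViewModel
{
    public string DisplayName { get; set; }

    public static EmployeeViewModel FromDomain(Employee employee)
    {
        return new EmployeeViewModel
        {
            // Formatting decisions stay in the presentation project,
            // not in the core or the "business layer".
            DisplayName = employee.LastName + ", " + employee.FirstName
        };
    }
}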

How about the persistence “layer”? The answer to this question really depends on how many different applications use the data. If you are truly embracing micro services, the data store is only used by your service. In these cases, if storage is different from the domain objects, you would still want the persistence contract to serve up data shapes that fit the domain objects.

How to Implement Micro Services, High Level

Let’s look at a problem that is similar to something we focused on for a client and use it to determine how to implement a micro services architecture. We are going to picture an application for a Human Resources (HR) services company that helps onboard employees. We will call the company OnboardingPeople.com (and hope it is not a real company).

Any time a new employee comes on board, there is a variety of paperwork that needs to be completed. In the HR world, some of the forms have a section that is filled out by the employee and another that is filled out by a company representative. In the micro services application architecture, we might look at creating a micro service for the employee portion and another for the employer portion. We now have a user facing application that has two roles and uses the two services to complete its work. Each service surrounds a bounded context that focuses on a single business capability.

image

Over time, we start working with other forms. We find the concept of an employee is present in both of the forms and realize employee can now represent a bounded context. The capability may only be internal at this time, so there is a question whether it should be separated out. But we are going to assume the reuse of this functionality is a strong enough reason to have a separate capability. There is also a possibility the employer agent can be separated out (yes, this is more contrived, but we are envisioning where the solution(s) might go).

image

If we take this even further, there is a possibility we have to deal with employees from other countries, which necessitates an immigration micro service. There are also multiple electronic signatures needed and addresses for different people, so these could be services.

image

In all likelihood, we would NEVER break the solution(s) into this many micro services. As an example, addresses likely have a slightly different context in each solution and are better tied to services like the employee service and the employer service, rather than a separate service that would have to keep track of each context.

Pros and Cons

While micro services are being touted in many articles as THE solution for all applications, silver bullets don’t exist, as there are no werewolves to kill in your organization. Micro services can solve a lot of pain points in the average Enterprise, but there is some preparation necessary to get there, and you need to map it out and complete planning before implementing (I will go into more detail on implementation in another article).

Pros

One of the main pros I see mentioned is that the software is the right size with micro services. The fact you are implementing in terms of the smallest unit of business capability you can find means you have to separate the functionality out so it is very focused. This focus makes the application easier to maintain. In addition, a micro services architecture naturally enforces high cohesion and loose coupling. Another benefit is you naturally have to develop the contract up front, as you are releasing each service as a discrete product.

You also have the flexibility to choose the correct platform and language on a product by product (service by service) basis. The contract has to be implemented via standards for interoperability, but you are not tied into a single technology. (NOTE: I would still consider limiting the number of technologies in use, even if there is some compromise, as it gets expensive in manpower having to maintain multiple technologies.)

Micro services will each be in control of their own data and maintain their own domains. The small nature of the services will mean each domain is easy to maintain. It is also easier to get a business person focused on one capability at a time and “perfect” that capability. It goes without saying micro services work well in an Agile environment.

Micro services architecture also allows you the ability to scale each service independently. If your organization has adopted some form of cloud or virtual infrastructure, you will find it much easier to scale your services, as you simply add additional resources to the service.

Cons

Micro services is a paradigm shift. While the concepts are not radically different, you will have to force your development staff to finally implement some of the items they “knew” as theory but had not implemented. SOLID principles become extremely important when implementing a micro services architecture. If your development methodologies and staff are not mature enough, it will be a rather major paradigm shift. Even if they are, a shift of thinking is in order, as most shops I have encountered have a hard time viewing each class library as a product (Yes, even those who have adopted a package manager technology like NuGet).

There is a lot of research that is required to successfully implement a micro services architecture.

· Versioning – You can no longer simply change interfaces. You, instead, have to add new functionality and deprecate the old. This is something you should have been doing all along, but pretty much every developer I have met fudges this a good amount of the time: “It is internal, so I can fix all of the compiler errors, no problem.” This is why so many shops have multiple solutions with the same framework libraries referenced as code. You should determine your versioning strategy up front (a minimal sketch of side-by-side versions follows this list).

· URI Formatting – I am assuming a REST approach for your micro services when I include URI formatting, but even if you choose another approach, you should settle on a consistent scheme for addressing your services up front.

· API Management – When there are a few services, this need will not be as evident. As you start to decompose your services into smaller services, it will become more critical. I would consider some type of API management solution, like Layer 7 (CA), Apigee, or others, as opposed to building the API management yourself or relying on an Excel spreadsheet or internal app to remind you to set up the application correctly.

· Instrumentation and Monitoring – Face it, most of us are bad at setting up the plumbing, but it becomes critical to determine where an error occurs in a micro services architecture. In theory, you will know where the problem is, because it is the last service deployed, but relying on this idea is dangerous.

· Deployment – As with the rest of the topics in this section, there is a lot of planning required up front when it comes to deployment. Deployment should be automated in the micro services world. But deployment is more than just pushing applications out; you need to have a rollback plan if something goes wrong. In micro services, each service has its own deployment pipeline, which makes things really interesting. Fortunately, there are tools to help you with build and release management, including parts of the Microsoft ALM server family, namely the build server and release management.
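To illustrate the versioning point above, here is one possible approach (a hedged sketch with hypothetical routes; URL-based versioning is only one of several strategies): keep the old contract alive and add the new version beside it, rather than changing the existing interface.

using System.Web.Http;

// Version 1 stays in place so existing clients keep working (deprecate, don't delete).
[RoutePrefix("api/v1/employees")]
public class EmployeesV1Controller : ApiController
{
    [Route("{id:int}")]
    public IHttpActionResult Get(int id)
    {
        return Ok(new { Id = id, Name = "Jane Doe" });
    }
}

// Version 2 is added alongside it with the new functionality or shape.
[RoutePrefix("api/v2/employees")]
public class EmployeesV2Controller : ApiController
{
    [Route("{id:int}")]
    public IHttpActionResult Get(int id)
    {
        // New shape: the name is split out, for example.
        return Ok(new { Id = id, FirstName = "Jane", LastName = "Doe" });
    }
}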

In short, micro services are simple to create, but much harder to manage. If you do not plan out the shift, you will miss something.

Summary

This is the first article on micro services and focuses on some of the information I have learned thus far. As we go forward, I will explore the subject deeper from a Microsoft standpoint and include both development and deployment of services.

Peace and Grace,
Greg

Twitter: @gbworld

Solving the Microsoft Mahjong Classic Puzzle–January 9th


This particular puzzle was a bit of a b*tch. Despite being flagged easy, it takes quite a bit of time, largely due to the placement of a few tiles that requires a strategy that is a bit out of the norm. Here is my guide to solving the puzzle using the tranquility tile set. (If you are using another tile set, you can still use this post, but will have to go by position alone.)

First, here is how it looks as you start.

Starting-Point

To explain the solution, I need to do 2 things:

  1. Identify the tiles in shorthand
  2. Grid off the board

Identifying the Tiles

In Mahjong, there is a group of tiles that can only match an identical tile; these are the 3 suits (wheels (or dots), bamboo (or sticks) and numbers (or cracks)), the four winds (north, south, east and west) and a set of three dragons (red, green and white).

There are also two suits in which you can match any of the tiles: Seasons (spring, summer, winter, fall) and flowers (plum, orchid, bamboo and mum).

Wheels

The wheels are easy to identify. They have a dot or wheel. You count the number of wheels to determine the number of the tile. I label these with a lower case w after the number: 1w – 9w.

Bamboo

Bamboo is also easy, as you count the number of bamboo fronds. The exception is the 1 of bamboo, which looks like a bird. I label these with a lower case b after the number: 1b – 9b.

Numbers

Numbers are a bit different unless you read Chinese. Here is how to identify them. 1, 2 and 3 of numbers have 1, 2 and 3 vertical lines across the middle of the top of the tile. The 4 looks like an arabic W. 5 is the most complex symbol. It has an up and down stroke followed by what appears to be a lower case h with a cross at the top and underline. 6 is a stick figure without the body segment. 7 appears a bit like a t, 8 like a broken upside down v and 9 is like a cursive r. I label numbers with a lower case n: 1n – 9n. Some of the numbers are shown below.

numbers

Winds

Winds are easy to identify, as they have the initial of the wind direction in black in the upper left corner of the tile: N, S, E, W.

Dragons

There are three dragons: red, green and white. The red dragon is a stick with a circle colored red. The green dragon appears like a green bush or a green flame. And the white dragon is a blue square. I have a red and a blue dragon shown below, with the numbers already on the graphic:

dragons

Seasons

Seasons are found on green tiles. You can match any season with any other season.

seasons

Flowers

As with the seasons, any flower can be matched with any other flower.

Gridding off the board

On this board, there are 5 rows from top to bottom, starting from the 2 of wheels (or dots) in the middle top and ending with row 5, which has a south wind on the far left side. I am going to label these rows 1 through 5.

Rows

There are 13 columns, which I label A through M. Leaving the rows in place, here are the columns.

Rows-&-Columns-&-Half

Notice that some of the tiles, like the 2 of bamboo (2b) on row 4 (partially hidden by the West wind (Wwind)), sit offset between columns. These are half columns, which I label by adding an x. In particular, this one is in column Ax. Here is a shot showing the rows and the half columns.

Rows-&-Halfs

And here is rows, columns and half columns.

Rows-&-Columns-&-Half

Designating Position

To designate a position, I also need to indicate the level of the tile. On this board, the 4 tiles on row 4 (starting with a flower and ending with the West wind) are at level 4. The North wind located at 5-M is at level 2 (there is one tile underneath it).

My location designator consists of row, column and level, with hyphens in between. The North wind at the lower left, for example, is 5-M-2. I will add the tile name (or type in flowers or seasons) in front of the location. Here is a map of a few positions, so you can understand the system.

Locations

Solution

As pointed out on the Mahjong Facebook page, you have to focus on the bottom row. This is normal when you are solving a Mahjong puzzle, as you should focus on three areas.

  1. Long lines
  2. Tall stacks
  3. Tiles that cover more than one tile

Not focusing on these areas is bad. This particular puzzle is a bit insidious as there are some tiles that only have one pair and are located under other tiles that have only one pair. This makes it hard to clear off the board. Here are the areas to look out for:

North on top of north
North-on-North

Two 6 of bamboo tiles (the only pair of them in the game) on top of row 5
2-6-of-bamboo

Two white dragons on row 5 at level 2
2-white-dragons

Two red dragons on the bottom level of row 5:
2-red-dragons

The general strategy here (as mentioned on the Facebook page) is to start row 5, level 3 from the left side, then level 2 from the right side and level 1 from the left side. This is not 100% true, as you will see.

The first step is to start clearing row 5, level 3, from the left. There is a 5n here at 5-B-3. You can see another 5n under one of the flowers. The designation for clearing this tile, fitting the scheme is 5n 2-J-2 5-B-3. To get to this tile, you have to clear flowers, which are designated flowers 2-Ix-3 3-G-4.

First-2-moves

You then continue on the left side of row 5, level 3 for 2 more moves. Here are the first four moves. Counterintuitive moves are marked with a star.

Move Tile Location 1 Location 2
1 flowers 2-Ix-3 4-G-3
2 5n 2-J-2 5-B-3
3 Star 1n 2-C-2 5-C-3
4 Wd 1-J-1 5-D-3

The 3rd move, above, is counterintuitive. You would think you should clear off 5-C-3 with the 1n at 3-K-2, but I found this move makes the puzzle unsolvable. At the end of these four moves, you have a board that looks like this:
board-after-4-moves

It should be clear enough how to use the moves table now. Here are the moves to clear off the rest of the top of row 5. Note that we will start from the left side now (purple), as we will with row 2 (blue). Also note this table has some rows that say ALL, followed by a number of pairs. This means clear everything of that type off the board.

Move Tile Location 1 Location 2
5 Nwind 4-H-4 5-M-2
6 Wwind ALL (3 pairs)  
9 Rd 4-F-3 4-I-4
10 4n 2-Hx-4 4-E-2
11 Swind ALL (2 pairs)  
13 EWind 3-I-4 5-k-3
14 9n 1-I-2 4-Bx-2
15 8n 3-B-3 4-Bx-1
16 4b 1-I-1 4-Ax-2
17 2w 2-Gx-3 4-G-3
18 3w ALL (2 pairs)  
20 7w 3-Ax-1 4-L-1
21 5b 1-H-1 3-I-3
22 Mug Nwind 2-G-2 5-I-3
23 4w 3-Ex-1 5-H-3
24 6w 2-F-1 2-K-1
25 5w 2-J-1 4-J-3
26 3b 4-I-3 5-G-3
27 1n 3-K-2 5-F-3
28 Wd 3-J-2 5-E-3

Move 22 (beer icon) is also critical, as there is a north on top of a north at position 2-G (at levels 2 and 1). Exchanging either of these north winds for the north wind at 4-G-2 leaves the puzzle unsolvable. At this point in time, the top of row 5 should be cleared, as shown below:
After-28-moves

The next step is to clear off both row 4 and 5. Because of the 6 of bamboo (6b), you have to start row 5, level 2 from the right. Here are the steps to clear row 5, level 2 (purple) and much of row 4, level 2 (blue). 

Move Tile Location 1 Location 2
29 1b ALL (2 pairs)  
31 9w 3-G-3 4-G-3
32 Nwind 1-G-1 4-G-2
33 7b 5-K-1 5-L-2
34 Ewind 3-Kx-1 4-Ax-1
35 6n 2-A-2 3-G-2
36 2b 2-Hx-2 3-Jx-1
37 8b 3-Bx-1 3-I-2
38 Swind 3-H-2 4-H-2
39 1w 2-I-2 5-J-2
40 Seasons ALL (2 pairs)  
42 3n 4-J-2 5-H-2
43 9b 4-K-2 5-G-2
44 4n 3-G-2 5-F-2
45 Gd 2-H-2 5-E-2
46 6b 5-B-2 5-D-2
47 1w 3-Ix-1 5-C-2

Here is the board after these moves:
Near-Solving

You should now be able to solve this without my help, but here are the moves.

Move Tile Location 1 Location 2
48 flower 2-H-1 5-B-1
49 2w 1-G-1 4-K-1
50 Gd 4-J-1 5-C-1
51 2n ALL (2 pairs)  
53 8w 4-H-1 5-E-1
54 3n 5-F-1 5-K-1
55 Rd 5-G-1 5-J-1
56 7n 5-H-1 5-I-1
57 3b 3-Hx-1 4-G-1
58 2b 2-A-1 3-Gx-1

Hope this helps.

Peace and Grace,
Greg

Twitter: @gbworld

74 “School Shootings” since Sandy Hook. Really?


I saw a posting from the Examiner stating that school shootings are on the rise. Here is their chart:

fed9adb5d7b28463a455ba0960168410[1]

When I see a chart this far out of skew, I start to wonder: are the numbers being charted the same as previous numbers? In other words, what are we calling a school shooting?

I also saw this article that stated there were 74 school shootings since Sandy Hook. You can also see a map in the Washington Post. “Of the shootings, 35 took place at a college or university, while 39 took place in K-12 schools.” This is even more dramatic, as the Examiner only stated there were 7 “school shootings” last year.

Here is the table from the link that posted there were 74 school shootings.

# Date City State School Name School Type
1. 1/08/2013 Fort Myers FL Apostolic Revival Center Christian School K-12
2. 1/10/2013 Taft CA Taft Union High School K-12
3. 1/15/2013 St. Louis MO Stevens Institute of Business & Arts College/University
4. 1/15/2013 Hazard KY Hazard Community and Technical College College/University
5. 1/16/2013 Chicago IL Chicago State University College/University
6. 1/22/2013 Houston TX Lone Star College North Harris Campus College/University
7. 1/31/2013 Atlanta GA Price Middle School K-12
8. 2/1/2013 Atlanta GA Morehouse College College/University
9. 2/7/2013 Fort Pierce FL Indian River St. College College/University
10. 2/13/2013 San Leandro CA Hillside Elementary School K-12
11. 2/27/2013 Atlanta GA Henry W. Grady HS K-12
12. 3/18/2013 Orlando FL University of Central Florida College/University
13. 3/21/2013 Southgate MI Davidson Middle School K-12
14. 4/12/2013 Christianburg VA New River Community College College/University
15. 4/13/2013 Elizabeth City NC Elizabeth City State University College/University
16. 4/15/2013 Grambling LA Grambling State University College/University
17. 4/16/2013 Tuscaloosa AL Stillman College College/University
18. 4/29/2013 Cincinnati OH La Salle High School K-12
19. 6/7/2013 Santa Monica CA Santa Monica College College/University
20. 6/19/2013 W. Palm Beach FL Alexander W. Dreyfoos School of the Arts K-12
21. 8/15/2013 Clarksville TN Northwest High School K-12
22. 8/20/2013 Decatur GA Ronald E. McNair Discovery Learning Academy K-12
23. 8/22/2013 Memphis TN Westside Elementary School K-12
24. 8/23/2013 Sardis MS North Panola High School K-12
25. 8/30/2013 Winston-Salem NC Carver High School K-12
26. 9/21/2013 Savannah GA Savannah State University College/University
27. 9/28/2013 Gray ME New Gloucester High School K-12
28. 10/4/2013 Pine Hills FL Agape Christian Academy K-12
29. 10/15/2013 Austin TX Lanier High School K-12
30. 10/21/2013 Sparks NV Sparks Middle School K-12
31. 11/1/2013 Algona IA Algona High/Middle School K-12
32. 11/2/2013 Greensboro NC North Carolina A&T State University College/University
33. 11/3/2013 Stone Mountain GA Stephenson High School K-12
34. 11/21/2013 Rapid City SD South Dakota School of Mines & Technology College/University
35. 12/4/2013 Winter Garden FL West Orange High School K-12
36. 12/13/2013 Arapahoe County CO Arapahoe High School K-12
37. 12/19/2013 Fresno CA Edison High School K-12
38. 1/9/2014 Jackson TN Liberty Technology Magnet HS K-12
39. 1/14/2014 Roswell NM Berrendo Middle School K-12
40. 1/15/2014 Lancaster PA Martin Luther King Jr. ES K-12
41. 1/17/2014 Philadelphia PA Delaware Valley Charter HS K-12
42. 1/20/2014 Chester PA Widener University College/University
43. 1/21/2014 West Lafayette IN Purdue University College/University
44. 1/24/2014 Orangeburg SC South Carolina State University College/University
45. 1/28/2014 Nashville TN Tennessee State University College/University
46. 1/28/2014 Grambling LA Grambling State University College/University
47. 1/30/2014 Palm Bay FL Eastern Florida State College College/University
48. 1/31/2014 Phoenix AZ Cesar Chavez High School K-12
49. 1/31/2014 Des Moines IA North High School K-12
50. 2/7/2014 Bend OR Bend High School K-12
51. 2/10/2014 Salisbury NC Salisbury High School K-12
52. 2/11/2014 Lyndhurst OH Brush High School K-12
53. 2/12/2014 Jackson TN Union University College/University
54. 2/20/2014 Raytown MO Raytown Success Academy K-12
55. 3/2/2014 Westminster MD McDaniel College College/University
56. 3/7/2014 Tallulah LA Madison High School K-12
57. 3/8/2014 Oshkosh WI University of Wisconsin – Oshkosh College/University
58. 3/21/2014 Newark DE University of Delaware College/University
59. 3/30/2014 Savannah GA Savannah State University College/University
60. 4/3/2014 Kent OH Kent State University College/University
61. 4/7/2014 Roswell NM Eastern New Mexico University-Roswell College/University
62. 4/11/2014 Detroit MI East English Village Preparatory Academy K-12
63. 4/21/2014 Griffith IN St. Mary Catholic School K-12
64. 4/21/2014 Provo UT Provo High School K-12
65. 4/26/2014 Council Bluffs IA Iowa Western Community College College/University
66. 5/2/2014 Milwaukee WI Marquette University College/University
67. 5/3/2014 Everett WA Horizon Elementary School K-12
68. 5/4/2014 Augusta GA Paine College College/University
69. 5/5/2014 Augusta GA Paine College College/University
70. 5/8/2014 Georgetown KY Georgetown College College/University
71. 5/8/2014 Lawrenceville GA Georgia Gwinnett College College/University
72. 5/21/2014 Milwaukee WI Clark Street School K-12
73. 6/5/2014 Seattle WA Seattle Pacific University College/University
74. 6/10/2014 Troutdale OR Reynolds High School K-12

Here is a map for those who are visual.

School-Shootings-USA-Mapped[1]

Investigation

I am not one to take something at face value simply because someone states it, so I Googled each of the “school shootings” above.

Before starting with my findings, you have to cross off either #68 or #69 on the list, as there were not two shootings in two consecutive days at Paine College. You should only count one of the incidents, as there was only one incident. That leaves us with 73 incidents to investigate.

Of the shootings mentioned, at least 5 did not even occur on a school campus at all. Here is the list:

There is also a shooting that took place in a mall that houses a community college in addition to other tenants. Not technically a school shooting. This takes us down to a maximum of 67 “incidents” that took place on a school campus.

The next thing we have to do is define what a “school shooting” means. Does it mean a madman hunting down students only? Do we include gang related offenses or disputes, in which a specific person was targeted and just happened to be shot on campus? Do we include incidents in school parking lots that are not related to the school at all (including a mother gunned down by her estranged husband after dropping off kids and a gunman escaping police who had a shoot out at a community college)? And do you include self-defense shootings, like the teacher shooting non-student assailants at Martin Luther King Jr Elementary School? And what about things happening on a school yard after hours? Are these all “school shootings”? I would venture a no on most, if not all of these. Your mileage may vary.

Here is a listing of the shootings that are probably not what you would normally call a “school shooting”, by category:

Description Number Dead Gunmen dead Wounded
Off campus (non-school shooting) 6 6 1 3
Parking Lot 26 11 0 15
Suicides 8 8 0 8
Drug Related Shootings 2 2 0 1
Gang Shootings 2 0 0 2
Robberies 5 1 0 6
Self Defense Shootings 2 1 1 2
Accidental discharge 4 0 0 2
Fights/Disputes 34 16 2 22
Student with Gun, no shots fired 1 0 0 0
Shot by rifle from long distance 1 0 0 1

Now, let’s look at incidents that might be called a school shooting. First the targeted shootings.

And, finally, the mass murder type of shootings like Sandy Hook:

  • Shooting #19: Six people killed (including the gunman). June 7, 2013, shooter kills his brother and father and then goes to Santa Monica college where he has a shootout with police in the library.  Six dead (including the gunman), four injured. Three of the dead (including the gunman) died on the campus.
  • Shooting #73: 1 person killed and two wounded. Shooter stopped by pepper spray while reloading his handgun.

NOTE: I am not going to glorify the shooters by calling them anything other than shooters.

Conclusion

Here is how things stack up:

  • Incidents: 73
  • Incidents with injuries or death: 65
  • Incidents that should be excluded
    • Incidents completely off campus: 6
    • Incidents in parking lots: 26
    • Robbery incidents: 5
    • Fight/Dispute incidents: 34
    • Suicides: 8
    • Accidental discharge: 4
    • Gun on campus, no shooting: 1
    • Self-defense shootings: 2
  • Incidents not classified in the above: 6 (including Reynolds High School yesterday)
  • Incidents with mass shooters and random targets: 2 (only 1 targeting a school at start)

  • Looking at the list, we had 6 incidents in the last year and a half that we might classify as a “school shooting” (a person comes to school with intent to harm, especially if shooting random victims). This is close to the Examiner’s numbers. Of those, there are only 2 that could have been like a Sandy Hook (although both were at universities and the shooters were stopped or killed rather quickly).

Are school shootings on the rise as of 2013? If you look at Wikipedia, you see the following on incidents:

Year Incidents Deaths Injuries
1999 5 16 33
2000 4 4 2
2001 4 3 19
2002 3 4 3
2003 3 5 2
2004 4 1 5
2005 3 11 10
2006 6 10 8
2007 4 36 29
2008 9 15 27
2009 7 2 13
2010 10 14 21
2011 14 50 31
2012 31 25 33
2013 31 19 38

The problem is the Wikipedia table suffers from the same two problems as the earlier arguments:

  1. Some incidents that are not “school shootings”, including some that happen at night, incidents in parking lots, fights and some incidents not even on school property, show up in the list.
  2. The data set is incomplete, as you can only find what is searchable on the Internet. You will naturally find more “incidents” this year, as the media is quick to call something a school shooting and is more apt to report on every “incident”.

But it does provide a completely different chart than the Examiner:

image

What I like here is that the chart being all over the board shows how sporadic the data is and strongly suggests the data set is more and more incomplete as we move back in time. It also shows spikes when mass murder school incidents have happened and illustrates how rare they really are.

Summary

Here is how I look at this.

  1. It is tragic when people are killed. Especially tragic when it is children and even more so when it is Elementary School children.
  2. The number of “incidents” may be on the rise, but, if so, it is only slightly. The majority of press on “rising incidents” uses “incidents” we would not normally classify as a “school shooting” (like a man killed in a school parking lot at 2 AM after an altercation). NOTE: This does not mean we should not do anything about it, but we should be sensible and not panic and knee jerk into another stupid direction.
  3. The number of mass murder type incidents is not on the rise. These, like Columbine and Sandy Hook, are the ones that should really scare us, as they are individuals intent on causing a huge amount of harm to innocent victims.

If we objectively look at the “problem”, we should notice that our children are not in real danger. School shootings are extremely rare incidents. When there is a school shooting, it is normally individuals targeting people they dislike for some reason, including bullying, gangs, bad grades, etc. In these instances, like any other assault or murder, it is an issue between two people and not some mass murdering clown.

As for the list that started the topic, I find it to be an unscientific bit of tripe. At best, it is an emotional argument created by someone trying to show his emotions are justified. At worst, it is a bald-faced lie. You decide.

Peace and Grace,
Greg

Twitter: @gbworld

Why Develop Using Gherkin?


I was having a conversation with David Lazar in the services wing of UST Global (my current company). During the conversation, we started talking about using SpecFlow and I detailed how I used Gherkin with SpecFlow to create a “paint by numbers” kit to drive offshore development.

As I sat and thought about it, I was reminded of a question someone once asked me about breaking down requirements, which led to the question “why Gherkin?” At the time, I am sure I came up with a clever tactical answer to the question, but now that I think about it more, the question is both tactical and strategic in nature.

To understand this, let’s look at specifics on how I break down requirements with my teams, when I am given some leeway on how the requirements get broken down.

Setting Up the Problem

I am going to start with the assumption that the requirements documents suck. I know this is not always the case, but I find it more likely than not that the requirements are insufficient to get the job done. This has led many company managers to the belief that there is something inherently wrong with offshoring, but the real problem is not so much where the work is being done as how the work is defined. Let me rathole for a second to explain this.

Company A sends work offshore and it comes back with less than stellar results. When the same work is sent inside the building, the results are much better. So, there is an assumption that it works onshore but not offshore.

But I will contend it IS NOT working onshore either. Things are still getting broken, but the feedback loop is generally much shorter as the business owner can walk over to the development pit and say “what were you thinking?” All of these trips are forgotten when comparing offshore to onshore. In addition, the employees have greater domain knowledge than the offshore team, which reduces the problem domain.

Let’s take this a step farther and compare onshore contracting to offshore. We now have less domain knowledge than employees, unless we are paying top dollar. We still have a short feedback loop, however, so this seems superior.

ASIDE: I have built and led teams in various countries and each has its challenges. As India often ends up the whipping boy, let’s look at India. In Indian culture, there is a push to get to a higher level title. For this reason, you rarely see very senior resources. The bulk of any team will be somewhere between Freshers (fresh out of college, less than 1 year of experience) and Senior Developers (approximately 5 years, maybe 7), with much of the team in the 1-3 years of experience range. This is part of why the rates are so low, but it is a trade off. With lower levels of experience, you need firmer requirements.

The point here is the problem is not necessarily offshore; it is just exacerbated offshore. Let’s look at an example:

Requirement: At Elite Last Minute travel, a successful flight out is described as:

  1. Client is picked up at home in a limo
  2. Client is delivered to airport and given all pertinent documents by the travel consultant
  3. Client’s luggage is checked into his flight
  4. Client is escorted to the plane
  5. Client is flown to destination
  6. Client is met at destination by a limo
  7. Client is driven to hotel and checked in

Pretty straightforward, right? But what if the client wants to see the Lincoln Memorial (Washington DC) and is flown to Miami, Florida and checked into a hotel there? By the requirements, this would constitute a successful flight out.

This example is a bit on the absurd side, as it seems any idiot should know that the destination is part of the equation for success. But consider this: Once we gain tribal knowledge in a domain, we start to assume it is self-evident, as well. Unfortunately, it is not. Add culture changes into the mix and you might find the assumption leads to disaster.

Breaking down Requirements – Defining Done

The first step we have to go through is breaking down requirements to make sure done is properly defined. Let’s start with a simple requirement:

3.1 Multiplication
All multiplication in the system must return the correct answer

In an Agile environment, this is generally started by stating each requirement in terms of the user:

As a math idiot, I would like to have multiplication done for me, so I can look like a genius

Neither of these defines done, however, without the assumption that the coder fully understands multiplication. To fully define done, we need to look at what some of the rules of multiplication are. Let’s say we start with the following:

  1. Multiplying two numbers is equivalent to the sum of the first number added to itself the number of times represented by the second number (okay, my English sucks here, as this is on the fly)
    Example 1: 5 * 5 = 5 + 5 + 5 + 5 + 5 (there are five 5s on the addition side of the equation)
    Example 2: 2 * 2 = 2 + 2 (there are two 2s on the addition side of the equation)
    Example 3: 5 * 2 = 5 + 5 (there are two 5s on the addition side of the equation)
    Example 4: 2 * 5 = 2 + 2 + 2 + 2 + 2 (there are five 2s on the addition side of the equation)
  2. Multiplying any number times 0 results in zero (makes sense as the addition side would have zero {number value}s)
  3. Multiplying any number times 1 results in the number (also makes sense, as there is only one of the number in the “loop”)
  4. Multiplying any number other than 1 or 0 times the maximum value of an integer results in an error

This is not the complete rule set, but we can now break the problem down by these rules. I find the easiest way is to set up an Excel spreadsheet with inputs and outputs. For the above, I would use something like this:

Input A Input B Output
5 5 25
2 2 4
5 2 10
2 5 10
5 1 5
2 1 2
1 5 5
1 2 2
5 0 0
2 0 0
0 5 0
0 2 0
0 MAX 0
MAX 0 0
1 MAX MAX
MAX 1 MAX
2 MAX ERROR
5 MAX ERROR
MAX 2 ERROR
MAX 5 ERROR

Done is now much better defined. If we take our earlier travel example, we can break it down with the additional clarification, too. NOTE: The actual inputs and outputs are more complex and would then get separated out based on the unit being tested.

Input (Desired Attraction) Output (Destination City)
Lincoln Memorial Washington DC
Disney World Orlando
Grand Canyon Flagstaff

If you have not caught this, we are creating acceptance criteria. It is one way of “defining done”.
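One way to make an acceptance matrix like the multiplication table executable is a table-driven test, where each row of the spreadsheet becomes a test case. Here is a minimal sketch using NUnit; the Calculator class and the method names are hypothetical stand-ins, not from any real requirement.

using System;
using NUnit.Framework;

// Hypothetical unit under test.
public static class Calculator
{
    public static int Multiply(int a, int b)
    {
        // checked so the MAX-value rows produce an error (an OverflowException), per rule 4.
        return checked(a * b);
    }
}

[TestFixture]
public class MultiplicationAcceptanceTests
{
    // Each attribute mirrors a row in the acceptance spreadsheet.
    [TestCase(5, 5, 25)]
    [TestCase(2, 2, 4)]
    [TestCase(5, 2, 10)]
    [TestCase(2, 5, 10)]
    [TestCase(5, 1, 5)]
    [TestCase(5, 0, 0)]
    [TestCase(0, 5, 0)]
    public void Multiply_ReturnsExpectedProduct(int a, int b, int expected)
    {
        Assert.AreEqual(expected, Calculator.Multiply(a, b));
    }

    [TestCase(2, int.MaxValue)]
    [TestCase(int.MaxValue, 5)]
    public void Multiply_PastMaxValue_Errors(int a, int b)
    {
        Assert.Throws<OverflowException>(() => Calculator.Multiply(a, b));
    }
}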

But What About Gherkin

Gherkin is a language that helps us use the acceptance criteria above. If each line represents a single criterion in the acceptance matrix (the tables above), we might end up with something like:

Given that I chose the Grand Canyon as my desired location
When I fly to a destination
Then I will arrive in Flagstaff

So why is this important? For the same reason that user stories are important. It is a form of ubiquitous language that can be shared between business and IT to ensure everyone is on the same page. Provided we either make each of the lines into user stories and Gherkin statements or code the acceptance table into Gherkin, we now have a definition of done.

Gherkin adds another value for me when I am using SpecFlow. I can use the Gherkin statements to produce test stubs that I can send offshore. I call this a paint by numbers kit, as I can open them up in the morning and make sure the right colors were painted in the right spots (ie, they filled the assumptions in the given method, the action in the when method and the test result in the then method(s)).
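Here is roughly what one of those stubs can look like once it is filled in, as a hedged sketch of a SpecFlow binding for the scenario above (the TravelPlanner class and the exact wording are my own stand-ins, not from a real project):

using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical component under test; in a real kit this is the code the team is building.
public static class TravelPlanner
{
    public static string GetDestinationFor(string attraction)
    {
        return attraction == "Grand Canyon" ? "Flagstaff" : "Unknown";
    }
}

[Binding]
public class FlightOutSteps
{
    private string _desiredAttraction;
    private string _destinationCity;

    [Given(@"that I chose the Grand Canyon as my desired location")]
    public void GivenIChoseTheGrandCanyon()
    {
        // The offshore developer "paints" the setup (the assumptions) here.
        _desiredAttraction = "Grand Canyon";
    }

    [When(@"I fly to a destination")]
    public void WhenIFlyToADestination()
    {
        // The action under test goes in the when method.
        _destinationCity = TravelPlanner.GetDestinationFor(_desiredAttraction);
    }

    [Then(@"I will arrive in Flagstaff")]
    public void ThenIWillArriveInFlagstaff()
    {
        // The assertion proves the acceptance criterion from the table.
        Assert.AreEqual("Flagstaff", _destinationCity);
    }
}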

Summary

This is just a brief intro into quality, a subject I am going to explore in detail as the year goes on. And while this may not express it clearly, as it started with an ADD (or ADHD) moment, the important takeaways are these:

  • Business and IT need to be in alignment with language. Here I am using user stories and Gherkin as the ubiquitous (shared) language, but you can have others. Domain Driven Design, which I will focus on later this year, also deals with the ubiquitous language concept, although it is more concerned with modeling the domain than defining done.
  • Most offshoring problems are a combination of expectations (a different understanding of what junior and senior developer means) and incomplete requirements. Fortunately, when we are in the same office, we can walk over and talk and give immediate feedback (not true in offshore engagements)
  • User stories and Gherkin can be used to bridge the gap from improperly defined requirements to a proper understanding of what done looks like (not true in ALL cases, but it is a good start)

Peace and Grace,
Greg

Twitter: @gbworld

Big Thrill Rides


I saw a picture posted on Facebook about the X Scream on the Stratosphere in Las Vegas. Here is the picture:

This is a ride called Insanity on top of the Stratosphere in Vegas. The tower is 1,149 feet tall, with the deck up around 850 to 900 feet (the heights of the thrill rides are from Wikipedia).

Stratosphere Thrill Rides (Strip, Vegas)

There are 4 rides on the Stratosphere: Big Shot, Insanity, Sky Jump and X Scream.

Big Shot

The Big Shot fires you up at high speed from the top of the Stratosphere tower (at 1,081 feet, the highest thrill ride in the world). You can see this in the POV video below:

Insanity

Insanity hangs you over the edge of the tower. This is the one in the original picture. At 900 feet, it is the second highest thrill ride in the world. Here is a video:

Here is a POV shot of the ride:

Sky Jump

The Sky Jump mimics skydiving from the tower. It rolls you out and then feels like it is dropping you. At 855 feet, it is lower than the rest, and Wikipedia does not say where it falls on the list of highest thrill rides in the world. Here is a video from a wrist cam.

And here is one at night.

X Scream

X Scream drops you off the side of the tower. It is the third highest thrill ride in the world. Here is a video that shows the ride from the side:

Six Japanese tourists got stuck on the ride during a power failure in 2005.

Old Ride: High Roller (GONE)

This ride no longer exists, but it was the first thrill ride the Stratosphere had. It sat at around 900 feet and was the highest roller coaster in the world.

Other Thrill Rides

While these are not necessarily sitting on top of some tower somewhere, they are considered the best thrill rides in the world.

X2: Six Flags, Magic Mountain, Valencia, California

This is the first “4D” roller coaster. The ride has spinning seats to change the angle of the ride, so you can be moving forward but facing backwards, and vice versa.

SkyScreamer: Six Flags Over Texas, Arlington, Texas

A 400 foot tower swing.

Eejanaika, Fuji-Q Highland, Fujiyoshida, Japan

Another “4D” roller coaster, with a longer track. You can see the spinning seats in the off ride part of the video. (Turn down the volume on POV if hearing the videographer screaming annoys you):

 

Kingda Ka, Six Flags Great Adventure, Jackson, New Jersey

Tallest Ground Based Roller Coaster at 456 feet. Shoots you up at 128 MPH and then back down.

Formula Rossa, Ferrari World, United Arab Emirates

This Ferrari styled coaster is the fastest in the world at 149 MPH. Here is a POV, but it does not seem all that fast on the video, as the ride does not have as many heavy drops. You see the speed a bit better on the later non-POV section.

The Joker, Six Flags México, Ciudad de México, D.F., Mexico

The park, in the southwest part of Mexico City, has a variety of rides. But the spinning Joker coaster is one of the favorites.

Hope you enjoyed this.

Peace and Grace,
Greg

Twitter: @gbworld

DRM in Consumer Products? Bad Idea for Consumers


I just saw today where Green Mountain decided to include DRM in its next line of Keurig single cup brewers. I am sure they are going to sell it as a protection for consumers against knock-off cups, but the reality is this is a move to protect the company from losing part of the licensing money stream and not a protective measure for consumers.  And they are not the only ones.

Keurig


Image from SlashGear, where I saw the announcement.

Here is how I envision this working. Each K-cup will have a cheap chip, like an RFID, with an encrypted code on it. More than likely, to make sure future licensees’ cups work in all single cup makers, the chip will carry a key code that decrypts to an expected value when the cup is licensed, and the brewer will refuse to brew one that fits either category below:

  1. No chip
  2. Chip that does not decrypt to correct types of values

The brilliance of this, from a business standpoint, is anyone who breaks the encryption to use their machine with non-licensed cups is guilty of committing a crime under different DRM laws (brewing a cup of unlicensed coffee may even mean you are guilty of a felony under some versions of DRM laws). And, anyone who breaks the encryption to make unlicensed cups work is guilty of a DRM violation allowing the government to shut them down.

In short: You will only drink more expensive licensed coffee.

With the prevalence of other single cup brewers on the market, I hope this one causes enough consumer backlash to get them to turn around on this bad idea. It is not in the best interest of consumers to have a machine that only allows coffee cups that benefit Green Mountain in some way. You, as a consumer, should have the choice to brew what you want. And if you pick “inferior” coffee, so be it.

I own a Keurig. I only buy licensed cups, or I use the licensed basket ($19.99 ouch!) to brew the ground coffee I want to brew. But if my brewer breaks down and I have a choice of a DRMed Keurig 2.0 or a competitor, I am going to go for the competitor. End of story.

Renault

Renault has taken this even further with their electronic car Zoe.

From Boing Boing article “Renault creates a brickable car”.

If you buy a new Zoe, you can only rent the battery. If you miss a payment, they can make it so your vehicle is a brick until you pay. The problem is hackers could potentially get into this system and cause serious problems, as well. I am not sure if the battery is DRMed to the point you cannot buy an unlicensed competitor’s battery (most likely there is a patent on the system now that protects them so it is unnecessary), but Renault, like Keurig, has created a product that protects their revenue stream.

DRM: How it works

How do these types of products protect the company? Certainly the consumer can do what he wants with the merchandise he pays for, right?

Yes … and no. Under DRM laws, if protections are added to a system, like encryption, breaking the encryption scheme is against the law. As an example, copyright fair use laws allow you to make personal copies of copyrighted material you purchase. For example, you can photocopy a book you own, or make copies of your CD collection.

But, while you can legally copy things like DVDs under copyright law, DRM law makes it illegal to break copy protection schemes to make copies. Any software created in the US that breaks copy protection schemes is illegal. In the late 90s, entertainment companies came up with a scheme called CSS to protect DVDs. In 1999, however, a program called DeCSS was created to decrypt the copy protection on DVDs. The stated purpose of DeCSS was to get DVDs to play on Linux machines, but it was also used by pirates to decrypt and make pirated copies.

Under DRM laws in various countries, coding a program that breaks encryption on materials like DVDs is illegal, so one of the programmers, Jon Lech Johansen, was arrested in Norway for helping create the program. The scary thing here is this type of law has been used to stop a great many software advances by making a practice that can be used for very beneficial purposes illegal.

If this were the extent of where DRM has gone, it would be scary enough. But the laws go farther. If you use a product to copy a DVD, even for your own collection (copying a kid’s DVD, for example, so they do not destroy the original), you have broken the law. And, under some DRM laws you can even be arrested and charged with a felony. Imagine that: it is just as serious to make a copy of a kid’s movie as it is to shoot someone and kill them.

Unintended Consequences of Government?

The stated purpose of DRM laws was to protect artists and other copyright holders from pirates. But they ended up protecting media giants more than the actual artists, who continue to get a much smaller portion of the profits than the giants.

Despite the intent, if you place the proper code in any device, you can protect it from being reverse engineered or “decrypted” (actual decryption or otherwise) under a great many DRM laws.

What this means is you now may buy a product but leave control over how it is used in the hands of the manufacturer.

There Should Be a Law, Right?

I am not sure one bad law to combat a worse law is the proper reaction. Instead, I think people should inform their friends and blog readers, etc., about the potential dangers of DRM in products and get more people to vote with their wallets.

In the case of Zoe, the sales of the car were about 10,000 in 2013, around 1/5th of their target of 50,000 in sales. I am not sure if the DRM battery caused this, was a contributing factor, or merely something people like me are concerned about. If DRM contributed to the slow sales then I think the market has spoken saying “you don’t own the car, I do”.

In the case of Green Mountain, I am not sure what will happen. The market may be wowed by the newer features and accept the extra payment to Green Mountain for every cup of coffee, simply for brewing it in a Green Mountain Keurig machine. In other words, the inconvenience of only being able to brew more expensive, licensed coffee may be secondary to brewing a larger cup (or other planned enhancements). If I were a competitor, I would use this as an opportunity to gain market share, as I am sure some people will be appalled by the anti-competitive measure and spank Green Mountain for taking control of what they can brew.

Summary

I think DRM is necessary in some instances, like protecting internal documents that are private property of a company. When it moves out into the public, and restricts consumer choice to create additional profits for corporations, I am not in support of the idea. I do, however, think the market should decide if the intrusion is warranted. If any law is passed it should be one to inform the consumer the DRM exists in the product.

Peace and Grace,
Greg

Twitter: @gbworld