74 “School Shootings” since Sandy Hook. Really?


I saw a posting from the Examiner claiming that school shootings are on the rise. Here is their chart:

[Examiner chart: school shootings per year]

When I see a chart this far out of skew, I start to wonder: are the numbers being charted the same as previous numbers? In other words, what are we calling a school shooting?

I also saw this article that stated there were 74 school shootings since Sandy Hook. You can also see a map in the Washington Post. “Of the shootings, 35 took place at a college or university, while 39 took place in K-12 schools.” This is even more dramatic, as the Examiner only stated there were 7 “school shootings” last year.

Here is the table from the link that posted there were 74 school shootings.

# Date City State School Name School Type
1. 1/08/2013 Fort Myers FL Apostolic Revival Center Christian School K-12
2. 1/10/2013 Taft CA Taft Union High School K-12
3. 1/15/2013 St. Louis MO Stevens Institute of Business & Arts College/University
4. 1/15/2013 Hazard KY Hazard Community and Technical College College/University
5. 1/16/2013 Chicago IL Chicago State University College/University
6. 1/22/2013 Houston TX Lone Star College North Harris Campus College/University
7. 1/31/2013 Atlanta GA Price Middle School K-12
8. 2/1/2013 Atlanta GA Morehouse College College/University
9. 2/7/2013 Fort Pierce FL Indian River St. College College/University
10. 2/13/2013 San Leandro CA Hillside Elementary School K-12
11. 2/27/2013 Atlanta GA Henry W. Grady HS K-12
12. 3/18/2013 Orlando FL University of Central Florida College/University
13. 3/21/2013 Southgate MI Davidson Middle School K-12
14. 4/12/2013 Christiansburg VA New River Community College College/University
15. 4/13/2013 Elizabeth City NC Elizabeth City State University College/University
16. 4/15/2013 Grambling LA Grambling State University College/University
17. 4/16/2013 Tuscaloosa AL Stillman College College/University
18. 4/29/2013 Cincinnati OH La Salle High School K-12
19. 6/7/2013 Santa Monica CA Santa Monica College College/University
20. 6/19/2013 W. Palm Beach FL Alexander W. Dreyfoos School of the Arts K-12
21. 8/15/2013 Clarksville TN Northwest High School K-12
22. 8/20/2013 Decatur GA Ronald E. McNair Discovery Learning Academy K-12
23. 8/22/2013 Memphis TN Westside Elementary School K-12
24. 8/23/2013 Sardis MS North Panola High School K-12
25. 8/30/2013 Winston-Salem NC Carver High School K-12
26. 9/21/2013 Savannah GA Savannah State University College/University
27. 9/28/2013 Gray ME New Gloucester High School K-12
28. 10/4/2013 Pine Hills FL Agape Christian Academy K-12
29. 10/15/2013 Austin TX Lanier High School K-12
30. 10/21/2013 Sparks NV Sparks Middle School K-12
31. 11/1/2013 Algona IA Algona High/Middle School K-12
32. 11/2/2013 Greensboro NC North Carolina A&T State University College/University
33. 11/3/2013 Stone Mountain GA Stephenson High School K-12
34. 11/21/2013 Rapid City SD South Dakota School of Mines & Technology College/University
35. 12/4/2013 Winter Garden FL West Orange High School K-12
36. 12/13/2013 Arapahoe County CO Arapahoe High School K-12
37. 12/19/2013 Fresno CA Edison High School K-12
38. 1/9/2014 Jackson TN Liberty Technology Magnet HS K-12
39. 1/14/2014 Roswell NM Berrendo Middle School K-12
40. 1/15/2014 Lancaster PA Martin Luther King Jr. ES K-12
41. 1/17/2014 Philadelphia PA Delaware Valley Charter HS K-12
42. 1/20/2014 Chester PA Widener University College/University
43. 1/21/2014 West Lafayette IN Purdue University College/University
44. 1/24/2014 Orangeburg SC South Carolina State University College/University
45. 1/28/2014 Nashville TN Tennessee State University College/University
46. 1/28/2014 Grambling LA Grambling State University College/University
47. 1/30/2014 Palm Bay FL Eastern Florida State College College/University
48. 1/31/2014 Phoenix AZ Cesar Chavez High School K-12
49. 1/31/2014 Des Moines IA North High School K-12
50. 2/7/2014 Bend OR Bend High School K-12
51. 2/10/2014 Salisbury NC Salisbury High School K-12
52. 2/11/2014 Lyndhurst OH Brush High School K-12
53. 2/12/2014 Jackson TN Union University College/University
54. 2/20/2014 Raytown MO Raytown Success Academy K-12
55. 3/2/2014 Westminster MD McDaniel College College/University
56. 3/7/2014 Tallulah LA Madison High School K-12
57. 3/8/2014 Oshkosh WI University of Wisconsin – Oshkosh College/University
58. 3/21/2014 Newark DE University of Delaware College/University
59. 3/30/2014 Savannah GA Savannah State University College/University
60. 4/3/2014 Kent OH Kent State University College/University
61. 4/7/2014 Roswell NM Eastern New Mexico University-Roswell College/University
62. 4/11/2014 Detroit MI East English Village Preparatory Academy K-12
63. 4/21/2014 Griffith IN St. Mary Catholic School K-12
64. 4/21/2014 Provo UT Provo High School K-12
65. 4/26/2014 Council Bluffs IA Iowa Western Community College College/University
66. 5/2/2014 Milwaukee WI Marquette University College/University
67. 5/3/2014 Everett WA Horizon Elementary School K-12
68. 5/4/2014 Augusta GA Paine College College/University
69. 5/5/2014 Augusta GA Paine College College/University
70. 5/8/2014 Georgetown KY Georgetown College College/University
71. 5/8/2014 Lawrenceville GA Georgia Gwinnett College College/University
72. 5/21/2014 Milwaukee WI Clark Street School K-12
73. 6/5/2014 Seattle WA Seattle Pacific University College/University
74. 6/10/2014 Troutdale OR Reynolds High School K-12

Here is a map for those who are visual.

[Map: the 74 “school shootings” plotted across the USA]

Investigation

I am not one to take something at face value simply because someone states it, so I Googled each of the “school shootings” above.

Before starting with my findings, you have to cross off either #68 or #69 on the list, as there were not two separate shootings on consecutive days at Paine College; there was only one incident, so it should only be counted once. That leaves us with 73 incidents to investigate.

Of the shootings mentioned, at least 5 did not occur on a school campus at all.

There is also a shooting that took place in a mall that houses a community college in addition to other tenants. Not technically a school shooting. This takes us down to a maximum of 67 “incidents” that took place on a school campus.

The next thing we have to do is define what a “school shooting” means. Does it mean a madman hunting down students only? Do we include gang-related offenses or disputes, in which a specific person was targeted and just happened to be shot on campus? Do we include incidents in school parking lots that are not related to the school at all (including a mother gunned down by her estranged husband after dropping off her kids, and a gunman escaping police who had a shootout at a community college)? Do we include self-defense shootings, like the teacher shooting non-student assailants at Martin Luther King Jr. Elementary School? And what about things happening on a school yard after hours? Are these all “school shootings”? I would venture a no on most, if not all, of these. Your mileage may vary.

Here is a listing of the shootings that are probably not what you would normally call a “school shooting”, by category:

Description | Incidents | Dead | Gunmen dead | Wounded
Off campus (non-school shooting) | 6 | 6 | 1 | 3
Parking Lot | 26 | 11 | 0 | 15
Suicides | 8 | 8 | 0 | 8
Drug Related Shootings | 2 | 2 | 0 | 1
Gang Shootings | 2 | 0 | 0 | 2
Robberies | 5 | 1 | 0 | 6
Self Defense Shootings | 2 | 1 | 1 | 2
Accidental discharge | 4 | 0 | 0 | 2
Fights/Disputes | 34 | 16 | 2 | 22
Student with Gun, no shots fired | 1 | 0 | 0 | 0
Shot by rifle from long distance | 1 | 0 | 0 | 1

Now, let’s look at incidents that might actually be called a school shooting. First, the targeted shootings.

And, finally, the mass murder type of shootings like Sandy Hook:

  • Shooting #19: June 7, 2013. The shooter killed his brother and father and then went to Santa Monica College, where he had a shootout with police in the library. Six dead (including the gunman), four injured; three of the dead (including the gunman) died on the campus.
  • Shooting #73: One person killed and two wounded. The shooter was stopped by pepper spray while reloading his handgun.

NOTE: I am not going to glorify the shooters by calling them anything other than shooters.

Conclusion

Here is how things stack up:

  • Incidents: 73
  • Incidents with injuries or death: 65
  • Incidents that should be excluded
    • Incidents completely off campus: 6
    • Incidents in parking lots: 26
    • Robbery incidents: 5
    • Fight/Dispute incidents: 34
    • Suicides: 8
    • Accidental discharge: 4
    • Gun on campus, no shooting: 1
    • Self-defense shootings: 2
    • Incidents not classified above: 6 (including Reynolds High School yesterday)
  • Incidents with mass shooters and random targets: 2 (only 1 targeting a school at start)

  • Looking at the list, we had 6 incidents in the last year and a half that we might classify as a “school shooting” (a person comes to school with intent to harm, especially if shooting random victims). This is close to the Examiner’s numbers. Of those, there are only 2 that could have been like a Sandy Hook (although both were at universities and the shooters were stopped or killed rather quickly).

Are school shootings on the rise? If you look at Wikipedia, you see the following on incidents:

Year | Incidents | Deaths | Injuries
1999 | 5 | 16 | 33
2000 | 4 | 4 | 2
2001 | 4 | 3 | 19
2002 | 3 | 4 | 3
2003 | 3 | 5 | 2
2004 | 4 | 1 | 5
2005 | 3 | 11 | 10
2006 | 6 | 10 | 8
2007 | 4 | 36 | 29
2008 | 9 | 15 | 27
2009 | 7 | 2 | 13
2010 | 10 | 14 | 21
2011 | 14 | 50 | 31
2012 | 31 | 25 | 33
2013 | 31 | 19 | 38

The problem is the Wikipedia table suffers from the same two problems as the earlier arguments:

  1. Some incidents that are not “school shootings”, including some that happen at night, incidents in parking lots, fights and some incidents not even on school property, show up in the list.
  2. The data set is incomplete, as you can only find what is searchable on the Internet. You will naturally find more “incidents” this year, as the media is quick to call something a school shooting and is more apt to report on every “incident”.

But it does provide a completely different chart than the Examiner:

[Chart: incidents per year, from the Wikipedia data above]

What I like here is that the chart being all over the board shows how sporadic the data is, and it strongly suggests the data set is more and more incomplete as we move back in time. It also shows spikes when mass murder school incidents have happened and illustrates how rare they really are.

Summary

Here is how I look at this.

  1. It is tragic when people are killed. Especially tragic when it is children, and even more so when it is elementary school children.
  2. The number of “incidents” may be on the rise, but, if so, it is only slightly. The majority of press on “rising incidents” uses “incidents” we would not normally classify as a “school shooting” (like a man killed in a school parking lot at 2 AM after an altercation). NOTE: This does not mean we should not do anything about it, but we should be sensible and not panic and knee-jerk into another stupid direction.
  3. The number of mass murder type incidents is not on the rise. These, like Columbine and Sandy Hook, are the ones that should really scare us, as they are individuals intent on causing a huge amount of harm to innocent victims.

If we objectively look at the “problem”, we should notice that our children are not in real danger. School shootings are extremely rare incidents. When there is a school shooting, it is normally individuals targeting people they dislike for some reason, including bullying, gangs, bad grades, etc. In these instances, like any other assault or murder, it is an issue between two people and not some mass murdering clown.

As for the list that started the topic, I find it to be an unscientific bit of tripe. At best, it is an emotional argument created by someone trying to show his emotions are justified. At worst, it is a bald-faced lie. You decide.

Peace and Grace,
Greg

Twitter: @gbworld

Why Develop Using Gherkin?


I was having a conversation with David Lazar in the services wing of UST Global (my current company). During the conversation, we started talking about using SpecFlow and I detailed how I used Gherkin with SpecFlow to create a “paint by numbers” kit to drive offshore development.

As I sat and thought about it, I was reminded of a question someone once asked me about breaking down requirements, which led to the question “why Gherkin?” At the time, I am sure I came up with a clever tactical answer to the question, but now that I think about it more, the question is both tactical and strategic in nature.

To understand this, let’s look at specifics on how I break down requirements with my teams, when I am given some leeway on how the requirements get broken down.

Setting Up the Problem

I am going to start with the assumption that the requirements documents suck. I know this is not always the case, but I find it more likely than not that the requirements are insufficient to get the job done. This has led many company managers to the belief that there is something inherently wrong with offshoring, but the real problem is not so much where the work is being done as how the work is defined. Let me rathole for a second to explain this.

Company A sends work offshore and it comes back with less than stellar results. When the same work is sent inside the building, the results are much better. So, there is an assumption that it works onshore but not offshore.

But I will contend it IS NOT working onshore either. Things are still getting broken, but the feedback loop is generally much shorter as the business owner can walk over to the development pit and say “what were you thinking?” All of these trips are forgotten when comparing offshore to onshore. In addition, the employees have greater domain knowledge than the offshore team, which reduces the problem domain.

Let’s take this a step further and compare onshore contracting to offshore. We now have less domain knowledge than employees, unless we are paying top dollar. We still have a short feedback loop, however, so this seems superior.

ASIDE: I have built and led teams in various countries and each has its challenges. As India oft ends up the whipping boy, let’s look at India. In Indian culture, there is a push to get to a higher level title. For this reason, you rarely see very senior resources. The bulk of any team will be somewhere between Freshers (fresh out of college, less than 1 year of experience) and Senior Developer (approximately 5 years, maybe 7), with much of the team in the 1-3 years of experience range. This is part of why the rates are so low, but it is a trade-off. With lower levels of experience, you need firmer requirements.

The point here is the problem is not necessarily offshore; it is just exacerbated there. Let’s look at an example:

Requirement: At Elite Last Minute travel, a successful flight out is described as:

  1. Client is picked up at home in a limo
  2. Client is delivered to airport and given all pertinent documents by the travel consultant
  3. Client’s luggage is checked into his flight
  4. Client is escorted to the plane
  5. Client is flown to destination
  6. Client is met at destination by a limo
  7. Client is driven to hotel and checked in

Pretty straightforward, right? But what if the client wants to see the Lincoln Memorial (Washington DC) and is flown to Miami, Florida and checked into a hotel there? By the requirements, this would constitute a successful flight out.

This example is a bit on the absurd side, as it seems any idiot should know that the destination is part of the equation for success. But consider this: Once we gain tribal knowledge in a domain, we start to assume it is self-evident, as well. Unfortunately, it is not. Add culture changes into the mix and you might find the assumption leads to disaster.

Breaking down Requirements – Defining Done

The first step we have to go through is breaking down requirements to make sure done is properly defined. Let’s start with a simple requirement:

3.1 Multiplication
All multiplication in the system must return the correct answer

In an Agile environment, this is generally started by stating each requirement in terms of the user:

As a math idiot, I would like to have multiplication done for me, so I can look like a genius

Neither of these defines done, however, without assuming the coder fully understands multiplication. To fully define done, we need to look at what some of the rules of multiplication are. Let’s say we start with the following (a code sketch follows the list):

  1. Multiplying two numbers is the equivalent of the sum of the first number added to itself the number of times represented by the second number (okay, my English sucks here, as this is on the fly)
    Example: 5 * 5 = 5 + 5 + 5 + 5 + 5 (there are five 5s on the addition side of the equation)
    Example 2: 2 * 2 = 2 + 2 (there are two 2s on the addition side of the equation)
    Example 3: 5 * 2 = 5 + 5 (there are two 5s on the addition side of the equation)
    Example 4: 2 * 5 = 2 + 2 + 2 + 2 + 2 (there are five 2s on the addition side of the equation)
  2. Multiplying any number times 0 results in zero (makes sense as the addition side would have zero {number value}s)
  3. Multiplying any number times 1 results in the number (also makes sense, as there is only one of the number in the “loop”)
  4. Multiplying any number other than 1 or 0 times the maximum value of an integer results in an error
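To make rule 1 and rule 4 concrete, here is a minimal C# sketch (my own illustration, not part of any requirement) of multiplication as repeated addition, using checked arithmetic so that exceeding the maximum integer surfaces as the error case:

public static class MathRules
{
    // Rule 1: a * b is a added to itself b times.
    // Rule 4: overflowing Int32.MaxValue throws (the "error" output).
    // Negative inputs are outside the rules above, so they are not handled here.
    public static int Multiply(int a, int b)
    {
        int result = 0;
        for (int i = 0; i < b; i++)   // b == 0 never loops, so result stays 0 (rule 2)
        {
            checked { result += a; }  // throws OverflowException past Int32.MaxValue
        }
        return result;                // b == 1 adds a exactly once (rule 3)
    }
}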

This is not the complete rule set, but we can now break the problem down by these rules. I find the easiest way is to set up an Excel spreadsheet with inputs and outputs. For the above, I would use something like this (a test sketch follows the table):

Input A | Input B | Output
5 | 5 | 25
2 | 2 | 4
5 | 2 | 10
2 | 5 | 10
5 | 1 | 5
2 | 1 | 2
1 | 5 | 5
1 | 2 | 2
5 | 0 | 0
2 | 0 | 0
0 | 5 | 0
0 | 2 | 0
0 | MAX | 0
MAX | 0 | 0
1 | MAX | MAX
MAX | 1 | MAX
2 | MAX | ERROR
5 | MAX | ERROR
MAX | 2 | ERROR
MAX | 5 | ERROR
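To show how the matrix becomes executable, here is a sketch (class and method names are mine, and NUnit is just one option) of a few of the rows above as test cases driving the Multiply sketch from earlier:

using System;
using NUnit.Framework;

[TestFixture]
public class MultiplicationAcceptanceTests
{
    // One attribute per acceptance matrix row.
    [TestCase(5, 5, 25)]
    [TestCase(2, 2, 4)]
    [TestCase(5, 2, 10)]
    [TestCase(5, 0, 0)]
    [TestCase(int.MaxValue, 1, int.MaxValue)]
    public void Multiply_ReturnsExpectedOutput(int a, int b, int expected)
    {
        Assert.AreEqual(expected, MathRules.Multiply(a, b));
    }

    // The MAX * 2 and MAX * 5 rows: the expected output is the error case.
    [TestCase(int.MaxValue, 2)]
    [TestCase(int.MaxValue, 5)]
    public void Multiply_PastMaxValue_Errors(int a, int b)
    {
        Assert.Throws<OverflowException>(() => MathRules.Multiply(a, b));
    }
}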

Done is now much better defined. If we take our previous travel example, we can break down the additional clarification too. NOTE: The actual inputs and outputs are more complex and then get separated out based on the unit being tested.

Input (Desired Attraction) | Output (Destination City)
Lincoln Memorial | Washington DC
Disney World | Orlando
Grand Canyon | Flagstaff

If you have not caught this, we are creating acceptance criteria. It is one way of “defining done”.

But What About Gherkin?

Gherkin is a language that helps us use the acceptance criteria above. If each line represents a single criterion in the acceptance matrix (the tables above), we might end up with something like:

Given that I chose the Grand Canyon as my desired location
When I fly to a destination
Then I will arrive in Flagstaff

So why is this important? For the same reason that user stories are important. It is a form of ubiquitous language that can be shared between business and IT to ensure everyone is on the same page. Provided we either make each of the lines into user stories and Gherkin statements (or code the acceptance table into Gherkin), we now have a definition of done.
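As an aside, coding the acceptance table into Gherkin can be done with a Scenario Outline (the scenario name here is my own), where the Examples table is essentially the acceptance matrix verbatim:

Scenario Outline: Successful flight out
  Given that I chose the <attraction> as my desired location
  When I fly to a destination
  Then I will arrive in <city>

  Examples:
    | attraction       | city          |
    | Lincoln Memorial | Washington DC |
    | Disney World     | Orlando       |
    | Grand Canyon     | Flagstaff     |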

Gherkin adds another value for me when I am using SpecFlow. I can use the Gherkin statements to produce test stubs that I can send offshore. I call this a paint by numbers kit, as I can open them up in the morning and make sure the right colors were painted in the right spots (i.e., they filled the assumptions in the given method, the action in the when method and the test result in the then method(s)).
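Here is roughly what those stubs look like in C# (the class and step names are my own illustration); the developer fills in the three Pending() bodies, and I check the colors in the morning:

using TechTalk.SpecFlow;

[Binding]
public class FlightOutSteps
{
    [Given(@"that I chose the (.*) as my desired location")]
    public void GivenIChoseMyDesiredLocation(string attraction)
    {
        ScenarioContext.Current.Pending(); // paint here: set up the chosen attraction
    }

    [When(@"I fly to a destination")]
    public void WhenIFlyToADestination()
    {
        ScenarioContext.Current.Pending(); // paint here: perform the flight action
    }

    [Then(@"I will arrive in (.*)")]
    public void ThenIWillArriveIn(string city)
    {
        ScenarioContext.Current.Pending(); // paint here: assert the arrival city
    }
}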

Summary

This is just a brief intro into quality, a subject I am going to explore in detail as the year goes on. And while this may not express it clearly, as it started with an ADD (or ADHD) moment, the important takeaways are these:

  • Business and IT need to be in alignment with language. Here I am using user stories and Gherkin as the ubiquitous (shared) language, but you can have others. Domain Driven Design, which I will focus on later this year, also deals with the ubiquitous language concept, although it is more concerned with modeling the domain than defining done.
  • Most offshoring problems are a combination of expectations (a different understanding of what junior and senior developer means) and incomplete requirements. Fortunately, when we are in the same office, we can walk over and talk and give immediate feedback (not true in offshore engagements)
  • User stories and Gherkin can be used to bridge the gap from improperly defined requirements to a proper understanding of what done looks like (not true in ALL cases, but it is a good start)

Peace and Grace,
Greg

Twitter: @gbworld

Big Thrill Rides


I saw a picture posted on Facebook about the X Scream on the Stratosphere in Las Vegas. Here is the picture:

This is a ride called Insanity on top of the Stratosphere in Vegas. The tower is 1,149 feet tall, with the deck up around 850 to 900 feet (heights of the thrill rides are from Wikipedia).

Stratosphere Thrill Rides (Strip, Vegas)

There are 4 rides on the Stratosphere: Big Shot, Insanity, Sky Jump and X Scream.

Big Shot

The Big Shot fires you up at high speed from the top of the Stratosphere tower (at 1,081 feet, the highest thrill ride in the world). You can see this in the POV video below:

Insanity

Insanity hangs you over the edge of the tower. This is the one in the original picture. At 900 feet, it is the second highest thrill ride in the world. Here is a video:

Here is a POV shot of the ride:

Sky Jump

The Sky Jump mimics skydiving from the tower. It rolls you out and then feels like it is dropping you. At 855 feet, it is lower than the rest, and Wikipedia does not say where it sits on the list of highest thrill rides in the world. Here is a video from a wrist cam.

And here is one at night.

X Scream

X Scream drops you off the side of the tower. It is the third highest thrill ride in the world. Here is a video that shows the ride from the side:

Six Japanese tourists got stuck on the ride during a power failure in 2005.

Old Ride: High Roller (GONE)

This ride no longer exists, but it was the first thrill ride the Stratosphere had. It sat at 909 feet and was the highest roller coaster in the world.

Other Thrill Rides

While these are not necessarily sitting on top of some tower somewhere, they are considered the best thrill rides in the world.

X2: Six Flags, Magic Mountain, Valencia, California

This is the first “4D” roller coaster. The ride has spinning seats to change the angle of the ride, so you can be moving forward but facing backwards, and vice versa.

SkyScreamer: Six Flags Over Texas, Arlington, Texas

A 400 foot tower swing.

Eejanaika, Fuji-Q Highland, Fujiyoshida, Japan

Another “4D” roller coaster, with a longer track. You can see the spinning seats in the off ride part of the video. (Turn down the volume on POV if hearing the videographer screaming annoys you):

 

Kingda Ka, Six Flags Great Adventure, Jackson, New Jersey

Tallest Ground Based Roller Coaster at 456 feet. Shoots you up at 128 MPH and then back down.

Formula Rossa, Ferrari World, United Arab Emirates

This Ferrari styled coaster is the fastest in the world at 149 MPH. Here is a POV, but it does not seem all that fast on the video, as the ride does not have as many heavy drops. You see the speed a bit better on the later non-POV section.

The Joker, Six Flags México, Ciudad de México, D.F., Mexico

The park, in the southwest part of Mexico City, has a variety of rides. But the spinning Joker coaster is one of the favorites.

Hope you enjoyed this.

Peace and Grace,
Greg

Twitter: @gbworld

DRM in Consumer Products? Bad Idea for Consumers


I just saw today that Green Mountain has decided to include DRM in its next line of Keurig single cup brewers. I am sure they are going to sell it as a protection for consumers against knock-off cups, but the reality is this is a move to protect the company from losing part of its licensing money stream, not a protective measure for consumers. And they are not the only ones.

Keurig


Image from SlashGear, where I saw the announcement.

Here is how I envision this working. Each K-cup will have a cheap chip, like an RFID, with an encrypted code on it. More than likely, to make sure future licensees’ cups work in all single cup makers, the code will decrypt to a known numeric value when the cup is legitimate, and the brewer will refuse to brew any cup that fits either category below (a sketch of the check follows the list):

  1. No chip
  2. Chip that does not decrypt to correct types of values
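Purely as a thought experiment (this is my speculation, not Keurig’s actual design), the brewer-side check might look something like this:

// Speculative sketch only; a real version would live in the brewer's firmware.
public static class BrewerGate
{
    public static bool MayBrew(byte[] chipPayload)
    {
        if (chipPayload == null)
            return false;                 // category 1: no chip at all

        long code = Decrypt(chipPayload); // hypothetical vendor-secret scheme
        return IsLicensedCode(code);      // category 2: wrong value after decryption
    }

    // Stubs standing in for whatever scheme the vendor would actually use.
    static long Decrypt(byte[] payload) { return 0; }
    static bool IsLicensedCode(long code) { return false; }
}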

The brilliance of this, from a business standpoint, is that anyone who breaks the encryption to use their machine with non-licensed cups is guilty of committing a crime under various DRM laws (brewing a cup of unlicensed coffee may even mean you are guilty of a felony under some versions of DRM laws). And anyone who breaks the encryption to make unlicensed cups work is guilty of a DRM violation, allowing the government to shut them down.

In short: You will only drink more expensive licensed coffee.

With the prevalence of other single cup brewers on the market, I hope this one causes enough consumer backlash to get them to turn around on this bad idea. It is not in the best interest of consumers to have a machine that only allows coffee cups that benefit Green Mountain in some way. You, as a consumer, should have the choice to brew what you want. And if you pick “inferior” coffee, so be it.

I own a Keurig. I only buy licensed cups, or I use the licensed basket ($19.99 ouch!) to brew the ground coffee I want to brew. But if my brewer breaks down and I have a choice of a DRMed Keurig 2.0 or a competitor, I am going to go for the competitor. End of story.

Renault

Renault has taken this even further with their electric car, the Zoe.

From the Boing Boing article “Renault creates a brickable car”.

If you buy a new Zoe, you can only rent the battery. If you miss a payment, they can make your vehicle a brick until you pay. The problem is hackers could potentially get into this system and cause serious problems as well. I am not sure if the battery is DRMed to the point you cannot buy an unlicensed competitor’s battery (most likely there is a patent on the system now that protects them, so it is unnecessary), but Renault, like Keurig, has created a product that protects their revenue stream.

DRM: How it works

How do these types of products protect the company? Certainly the consumer can do what he wants with the merchandise he pays for, right?

Yes … and no. Under DRM laws, if protections are added to a system, like encryption, breaking the encryption scheme is against the law. As an example, copyright fair use laws allow you to make personal copies of copyrighted material you purchase. For example, you can photocopy a book you own, or make copies of your CD collection.

But, while you can legally copy things like DVDs under copyright law, DRM law makes it illegal to break copy protection schemes to make copies. Any software created in the US that breaks copy protection schemes is illegal. In the late 90s, entertainment companies came up with a scheme called CSS to protect DVDs. In 1999, however, a program called DeCSS was created to decrypt the copy protection on DVDs. The stated purpose of DeCSS was to get DVDs to play on Linux machines, but it was also used by pirates to decrypt and make pirated copies.

Under DRM laws in various countries, coding a program that breaks encryption on materials like DVDs is illegal, so one of the programmers, Jon Lech Johansen, was prosecuted in Norway for helping create the program. The scary thing here is this type of law has been used to stop a great many software advances by making a practice that can be used for very beneficial purposes illegal.

If this were the extent of where DRM has gone, it would be scary enough. But the laws go farther. If you use a product to copy a DVD, even for your own collection (copying a kid’s DVD, for example, so they do not destroy the original), you have broken the law. And, under some DRM laws, you can even be arrested and charged with a felony. Imagine that: it is just as serious to make a copy of a kid’s movie as shooting someone and killing them.

Unintended Consequences of Government?

The stated purpose of DRM laws was to protect artists and other copyright holders from pirates. But they ended up protecting media giants more than the actual artists, who continue to get a much smaller portion of the profits than the giants.

Despite the intent, if you place the proper code in any device, you can protect it from being reverse engineered or “decrypted” (actual decryption or otherwise) under a great many DRM laws.

What this means is you now may buy a product but leave control over how it is used in the hands of the manufacturer.

There Should Be a Law, Right?

I am not sure one bad law to combat a worse law is the proper reaction. Instead, I think people should inform their friends, blog readers, etc., about the potential dangers of DRM in products and get more people to vote with their wallets.

In the case of the Zoe, sales of the car were about 10,000 in 2013, around 1/5th of their target of 50,000. I am not sure if the DRM battery caused this, was a contributing factor, or is merely something people like me are concerned about. If DRM contributed to the slow sales, then I think the market has spoken, saying “you don’t own the car, I do”.

In the case of Green Mountain, I am not sure what will happen. The market may be wowed by the newer features and accept the extra payment to Green Mountain for every cup of coffee, simply for brewing it in a Green Mountain Keurig machine. In other words, the inconvenience of only being able to brew more expensive, licensed coffee may be secondary to brewing a larger cup (or other planned enhancements). If I were a competitor, I would use this as an opportunity to gain market share, as I am sure some people will be appalled by the anti-competitive measure and spank Green Mountain for taking control of what they can brew.

Summary

I think DRM is necessary in some instances, like protecting internal documents that are private property of a company. When it moves out into the public, and restricts consumer choice to create additional profits for corporations, I am not in support of the idea. I do, however, think the market should decide if the intrusion is warranted. If any law is passed it should be one to inform the consumer the DRM exists in the product.

Peace and Grace,
Greg

Twitter: @gbworld

Troubleshooting: The Lost Skill?


This blog entry comes from an email I received this morning asking me to “check in files”, as my failure to do so was causing a work stoppage. After a very short examination, I found the following:

  1. I had no files checked out (absolutely none)
  2. The problem was a permissions problem, not a check out problem; the person who could not check in their files was not being stopped by my failure to act, but by someone else’s incorrect granting of permissions
  3. I had no permissions to solve the problem (i.e. grant the correct permissions)

Further investigation of the problem would have revealed it was a permissions issue. In this case, the only consequence is another day of lost productivity, and a wonderful opportunity to learn. In some cases, the consequences are more dire. Consider, for example, Jack Welch, the CEO of GE.

Jack made an assumption and ended up destroying a manufacturing plant. In one telling of Jack Welch’s story, the dialog goes something like this:

Jack: Aren’t you going to fire me now?
Manager: Fire you? I just spent 2 million dollars training you.

Considering Jack Welch is now one of the most successful executives of all time, it is good his manager was able to troubleshoot the aftermath of a problem Jack had worked through on assumption. The point is plain: when we don’t troubleshoot a problem, we go on assumptions. In the email I received this morning, there was an assumption I had files checked out. Rather than test the assumption, work stopped.

Tony Robbins tells a story in his Personal Power program about a suit of armor. As he is walking on stage, every time he moves close to a suit of armor there is feedback. The audience eventually starts screaming at him, “it’s the armor”. But he continues to stand near the armor and the feedback eventually goes away. He then moves away and the feedback comes back. It turns out there was a fire being fought on a very close radio frequency, and the emergency messages were interfering with the microphone.

Personally, I think the above story is a myth, as I know the FCC is very careful in doling out bands and it is unlikely a microphone shares a band with emergency services. But this is also an assumption, and proper troubleshooting would have me examining the issue.

The path of least resistance

On Facebook … nay, on the Internet as a whole, a large majority of items are written out of assumptions or biases, and not an examination of the facts. For most people, whether you agree with Obamacare or not is not an exercise in examining the facts completely and then drawing conclusions. Instead, a quick sniff test is done to determine if you feel something smells, and then action is taken.

Let’s take an example. In 2006, the news media reported that members of the Duke Lacrosse team had raped an African American stripper. The case seemed open and shut as the evidence piled up. Duke University suspended the team, and when the lacrosse coach refused to resign, Duke’s president cancelled the rest of the season. The case seemed so open and shut that Nancy Grace (CNN) was suggesting the team should be castrated.

When the assumptions were removed, a completely different story was told. Not only was the evidence thin, much of it was manufactured. The District Attorney, Mike Nifong, was disbarred and thrown in jail for contempt of court.

We can also look at the George Zimmerman case, where the initial wave of “evidence” painted another “open and shut” case. But the “open and shut” case, based on assumptions, began to crumble when it was discovered NBC had edited the 911 tape to paint Zimmerman as a racist, and that the video and picture evidence had been carefully chosen to paint a picture of a man who had no wounds and was the obvious aggressor.

The point here is not to rehash these cases, but to point out that assumptions can lead to incorrect conclusions. Some of these assumptions may lead to dire consequences, while most just result in a less than optimal solution.

Now to the title of the section: The path of least resistance.

When we look at the natural world, things take the path of least resistance. Water tends to travel downhill, eroding the softest soil. Plants find the most optimal path to the sunlight, even if it makes them crooked. Buffalos would rather roar at each other to establish dominance than fight, as the fighting takes precious energy. And humans look for the least amount of effort to produce a result.

Let’s pop back to Obamacare, or the PPACA (Patient Protection and Affordable Care Act), as it illustrates this point. Pretty much everyone I encounter has an opinion on the subject. In fact, you probably have an opinion. But is the opinion based on assumption? You might be inclined to say no, but have you actually read the bill? If not, then you are working on distillations of the bill, most likely filtered through the sites you like to visit on a regular basis. And, more than likely, you have chosen these sites because they tend to fit your own biases.

I am not deriding you for this choice. I only want you to realize this choice is based more on assumptions than troubleshooting. Troubleshooting takes some effort. In most cases, not as much as reading a 900+ page bill (boring) or many more thousands of pages of HHS rules (even more boring). But, by not doing this, your opinion is likely based on incomplete, and perhaps improper, facts.

Answering Questions

I see questions all the time. Inside our organization, I see questions for the Microsoft Center of Excellence (or MSCOE). I have also spent years answering online questions in forums. The general path is:

  1. Person encounters problem
  2. Person assumes solution
  3. Person asks, on the MSCOE list, for help with the assumed solution – In general, the question is a “How do I wash a walrus?” type of question rather than one with proper background on the actual business problem and any steps (including code) taken to attempt to solve it
  4. Respondent answers how to solve the problem, based on their own assumptions, rather than using troubleshooting skills and asking questions to ensure they understand the problem
  5. Assumed: Person implements solution – While the solution may be inferior, this is also “path of least resistance” and safe. If the solution fails, they have the “expert” to blame for the problem (job security?). If it succeeds, they appear to have the proper troubleshooting skills. And very little effort expended.

What is interesting is how many times I have found the answer to be wrong when the actual business problem is examined. Here are some observations.

  • The original poster, not taking time to troubleshoot, makes an assumption on the solution (path of least resistance)
  • Respondent, taking the path of least resistance, answers the question as posted, with links to people solving that problem
  • If the original poster had used troubleshooting skills, rather than assumptions, he would have thrown out other possibilities, included all relevant information to help others help him troubleshoot, and would have expressed the actual business problem
  • If the respondent had used troubleshooting skills, rather than assumptions (primarily the assumption the poster had used troubleshooting skills), he would have asked questions before giving answers.

To illustrate this, I once saw a post similar to the following on a Microsoft forum (meaning this is paraphrased from memory).

Can anybody help me. We have a site that has been working for years in IIS 4. We recently upgraded to Windows Server 2008 and the site is no longer serving up asp.net files located at C:\MyFiles. I just hate Microsoft right now, as I am sure it is something they changed in windows. I need to get this site up today, and f-ing Microsoft wants to charge for technical support.

The first answers dealt with how to solve the problem by turning off the feature in IIS that stops the web server from serving files outside of the web directory structure. While this was a solution, troubleshooting the problem would have shown it was a bad solution.

Imagine the user had written this instead.

We recently upgraded to Windows Server 2008 and the site is no longer serving up asp.net files located at C:\Windows\System32.

Turning off the feature in IIS would still have solved the problem, but there is now an open path directly to the system for hackers. And, if this is the way the person implements the solution, there are likely other problems in the code base that will allow the exploit.

The proper troubleshooting would have been to first determine why ASP.NET files were being served from C:\MyFiles instead of IIS directories. Long story, but the reason had to do with an assumption that developing on a developer box generally, perhaps always, led to sites that did not work in production. So every developer was working on a production server directly. The C:\MyFiles folder was created from an improper assumption about security: that it was more secure to have developers working from a share than an IIS directory. This led to kludges to make the files work, which failed once the site was moved to a server with a version of IIS that stopped file and folder traversing. That restriction was added as a security provision, as hackers had learned to put in a URL like:

http://mysite.com/../../../Windows/System32/cmd.exe%20;

Or similar. I don’t have the actual syntax above, but it was close, and it worked. So IIS stopped you from using files outside of IIS folders. Problem solved.

Now, there are multiple “solutions” to the poster’s problem:

  • Turn off the IIS feature and allow traversing of directories. This makes the site work again, but also leaves a security hole.
  • Go into IIS and add C:\MyFiles as a virtual directory (see the command after this list). This is a better short term solution than the one above. I say short term, as there is some administrative overhead to this solution that is not needed in this particular case.
  • Educate the organization on the proper way to set up development. This is not the path of least resistance, but a necessary step to get the organization on the right path. This will more than likely involve solving the original problem that created the string of kludges that ended with a post blaming Microsoft for bringing a site down.
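For reference, the virtual directory option is a one-liner on IIS 7 and later (the site name here is assumed):

%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/MyFiles /physicalPath:"C:\MyFiles"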

Troubleshooting The Original Problem

I am going to use the original “Check in your files” problem to illustrate troubleshooting. The formula is general enough you can tailor it to your use, but I am using specifics.

First, create a hypothesis.

I cannot check in the files, so I hypothesize Greg has them checked out.

Next, try to disprove the hypothesis. This is done by attempting to find checked out files. In this case, the hypothesis would have easily been destroyed by examining the files and finding out none were checked out.

[Screenshot: the file listing, with nothing checked out]

So the next step would be to set up another hypothesis. But let’s assume we found this file as “checked out”. The next step is to look at the person who has the file checked out to ensure the problem is “Greg has the file checked out” and not “someone has the file checked out”.

[Screenshot: the checked-out-by name on the file]

Since the name Greg Beamer is not here, even if the file were checked out, he cannot solve the problem.

Next, even if you have a possible solution, make sure you eliminate other potential issues. In this case, let’s assume only some of the files were checked out when examined, but the user was still having problems uploading. What else can cause this issue?

Here is what I did.

  1. Assume I do have things checked out first, as it is a possible reason for the problem. When that failed, I looked at the user’s permissions on the files in question. I found this:
    [Screenshot: the user’s permissions on the files]
  2. Hypothesis: User does not have proper permissions. Attempted solution: Grant permissions
  3. Found out permissions were inherited, so it was not a good idea to grant at the page level. Moving up to the site level required opening the site in SharePoint Online, where I found the same permissions.
    [Screenshot: the same permissions at the site level]
  4. Now, my inclination is to grant permissions myself, but I noticed something else.
    [Screenshot: a permissions setting that stood out]

    which leads to this
    [Screenshot: the detail behind that setting]

    which further led to this (looking at Site Collection Images):
    [Screenshot: the permissions on Site Collection Images]

The permissions here are completely different. The user is not restricted, so he can access these.

I did try to give permissions to solve the issue:
[Screenshot: granting the permissions]

But I ended up with incomplete results:

[Screenshot: the incomplete permission results]

I further got rid of the special permissions on some folders, as they were not needed. More than likely, they were added to give the developer rights to those folders. I still have the issue above, however, which means someone more skilled needs to solve the problem.

The point here is numerous issues were found, none of which were the original hypothesis, which was reached via an assumption. The assumption was

I cannot check in, therefore I assume someone has the files checked out. Since Greg is the only other person I know working on the files, I assume he has them checked out.

Both assumptions were incorrect. But that is not the main point. The main point is that even if they were correct, you still have to ask whether there are any other issues. As illustrated, there were numerous issues that needed to be solved.

Summary

Troubleshooting is a scientific endeavor. Like any experiment, you have to state the problem first. If you don’t understand a problem, you can’t solve it.

You then have to form a hypothesis. If it fails, you have to start over, perhaps even redefining the problem. You do this until you find a hypothesis that works.

After you solve the problem, you should look at other causes. Why? Because a) you may not have the best solution and b) you may still have other issues. This is a step that is missed more often than not, especially by junior IT staff.

Let me end with a story on the importance of troubleshooting:

Almost everyone I know that has the CCIE certification took two tries to get it. If you don’t know what CCIE is, it is the Cisco Certified Internetwork Expert certification. It is considered one of the most coveted certifications and one of the hardest to attain. The reason is you have to troubleshoot rather than simply solve the problem.

The certification is in two parts. A written exam, which most people pass the first time, and a practical exercise, which most fail. The practical exercise takes place over a day and has two parts:

  1. You have to set up a network in a lab according to specifications given at the beginning of the day.
  2. After lunch, you come back to find something not working and have to troubleshoot the problem

Many of the people I know that failed the first time solved the problem and got the network working. So why did they fail? They went on assumptions based on problems they had solved in the past rather than working through a checklist of troubleshooting steps. Talking to one of my CCIE friends, he explained it this way (paraphrased, of course):

When you simply solve the problem, you may get things working, but you may also end up taking shortcuts that cause other problems down the line. Sometimes these further problems are more expensive than the original problem, especially if they deal with security.

Sound similar to a problem described in this blog entry? Just turn off IIS directory traversing and the site works. Both the poster and the hacker say thanks.

Please note there are times when the short solution is required, even if it is not the best. There are time constraints that dictate a less than optimal approach. Realize, however, that this is technical debt, which will eventually have to be paid. When you do not take the time to troubleshoot, and run on assumption, build in time to go back through the problem later, when the crisis is over. That, of course, is a topic for another day.

Peace and Grace,
Greg

Twitter: @gbworld

Obamacare: Rising Prices for Subpar Insurance?


I have avoided writing a blog entry on this subject lately, despite all of the news, both positive and negative. I have avoided it because of the mass ignorance on both sides. Talking to my Conservative friends, I hear horror stories of premiums rising 300%. Talking to my Liberal friends, I hear how the rising prices are due to getting off subpar insurance and how much better off people are.

My opting in on writing something came from an article posted by James Shore (@jamesshore on Twitter) that essentially blames the hysteria over rising prices on the Tea Party. There is no more balance in the newer article (blaming the Tea Party) than in the original, which only stated what the letter said without examining other options. The question is which is closer to the truth:

  1. People are paying more for insurance
  2. People are getting better insurance

In reality, it is a bit of both.

Equivalent Insurance for the same/higher/lower price?

The first thing you have to understand about the ACA is there are no equivalent plans. When I see someone stating “people are paying less for equivalent insurance” or “people are paying more for equivalent insurance” I say “bullshit”. There are no equivalent insurance plans.

This is due to the new standards for policies. Example? Find a plan in 2009-2013 that had a deductible or out of pocket max of $6350. You can’t. Why? Because the insurance companies used rounded numbers. You can find a plan with a $1000 out of pocket max. You can also find $2500, $5000, $7500 or even $10,000 or more. But you will not find one ending with $350. There was no good reason to do so.

I am personally of the view that the government purposefully chose odd numbers for minimum standards so you could not compare. It is not in any party’s best interest for plans to be comparable, and politicians are human. Human beings like to work the rules to benefit themselves. So this is not a big conspiracy, just an “it is what it is”.

The end result is it is hard to compare like plans. Now, part of this is the insurance companies. They may have had a $6000 out of pocket max plan that is now $6350. So the plan is actually worse than before and if the price went up, it is true you are getting less for more.

That leads us to the topic of subpar insurance.

Subpar Insurance?

Here is a nice little clip of Nancy Pelosi stating essentially “the prices are not going up; people are, instead, getting proper insurance”. In other words, the government came in and saved us from “subpar”. You can see the mantra in the video below.

And, Pelosi is correct … according to the government. According to the government, having a policy with higher than a $6350 out of pocket max or deductible is bad. This was not the ACA directly, but the HHS, which got to determine the minimum standards of care. Do you agree with the minimum standards? Just understand, it is no longer an individual choice what minimum coverage looks like. And, from that perspective (government should decide what is par), Nancy Pelosi is right that everyone whose coverage was cancelled had “subpar” insurance. If anything above a $6350 out of pocket max or deductible is subpar, then high deductible plans are all subpar.

If we wanted to lower healthcare costs, a stated goal of the ACA, we should have more people on high deductible plans, paying their day to day care. This would foster more competition. The government’s role should have been to ensure pricing was transparent to level the field, not ensure everyone an insurance card. Take care of those who cannot afford care? Fine. But translate that to insure everyone? This goes contrary to the stated objectives of the law that was passed.

So, high deductible plans are bad insurance for individuals buying their own insurance, according to the government. But who is the government to decide what is acceptable for individuals? What if the government told you the minimum acceptable size of a television was 44”? You could no longer buy a $100 7” television, as it is subpar. You would now have to spend $400 to get the acceptable minimum television. But the law also states you have to have a minimum of a 90 Hz refresh rate and 1000p. Now it is $500.

And since there are currently no 44” televisions, you have to get a 45”. Since there is no 90 Hz refresh rate, you have to get 120 Hz, and there is no 1000p, so you need to get 1080p. Now, the television manufacturers turn around and create 44”, 1000p, 90 Hz televisions and call them bronze televisions.

The Conservatives bitch that the price went up for equivalent televisions and the Liberals bitch the televisions were subpar. But if people were happy with a 7” television, who is the government to state they can’t have it?

I realize healthcare and televisions are completely different in execution, but the concept is similar. And it is more the individual that determines what is good for them. If the ACA had created bronze through platinum plans, but also allowed individuals to determine if some plan fit them, even if it did not fit the minimums, then the bitching would be less justified, as there would be a choice for those who felt they should be able to decide what works for them. But that would not have been financially viable.

Some Americans truly did have insurance that was subpar. Ignoring this fact does not allow for an honest debate. They were given few choices due to their health or wealth, or lack thereof, and the new law has given them better choices. But the choices are more expensive and some Americans did not need the new “par” standards, or want them.

On the other side, others had good or even great insurance, but it missed one or more items of what is now considered par. When I examined 2013 plans, I found some insurance plans, now deemed subpar, that were actually better for some segments of society. For example, under the ACA exchange plans, you can get a plan that has $65 copays for the first three visits, with no copay once you meet the deductible. But if you are relatively healthy, the old plan that had 20% coinsurance and no copay might have been less expensive. In fact, with standard doctor’s visit costs, it would have been cheaper than the new ACA plan for most Americans. Not allowed anymore.

But weren’t some plans grandfathered in? Sure, but very, very few. Why? The HHS set rules under which substantive changes voided the grandfathered plans, and then set the threshold for calling a premium rise substantive very low. Since the cost of healthcare went up 15% in 2010, largely due to the new fees charged to medical manufacturers, health insurers and big pharma, most plans blew past that threshold. Back to this shortly.

The points that are important are

  1. The HHS rules were adopted after the law passed, and voided insurance plans that might have been fine if the standards were decided differently. These are the plans we can firmly blame on the ACA.
  2. The minimum standards included standards in pricing and healthcare, so we are not just talking minimums for healthcare.
  3. The minimum standards are an “all or nothing” proposition. It does not matter if you have better insurance on all points but 1, you still have a cancelled policy.
  4. The government is deciding for you what minimum coverage looks like. This may be fine with you, or it may not.

Rising Prices

The prices are rising, at least in most states. Since you cannot compare ACA plans to 2013 plans directly, it is difficult, if not nearly impossible, to determine exactly how much. Are they rising 2-3 times the amount? Yes, in some cases keeping the same insurance, with the changed provisions, costs 2-3 times the amount. But, to be fair, people can find less expensive options on the exchanges.

But the promise of Obamacare does not really kick in unless you make somewhere between 138% and 400% of the poverty rate, as you get all, or a portion, of your premiums paid for you. And if you are at the lower end of the range, you might get your deductible and out of pocket max lowered for you as well. In fact, if you are low enough, you can get plans with no deductibles and an extremely low out of pocket max.

The sad part here is 400% of the poverty level is well into the middle class, making the middle class a new entitlement class. I don’t think most of the middle class wants to be on programs like welfare, but they are now on the dole if they get the subsidies. But the exchanges are set up to make this less obvious, as you never see the full price of the insurance and may not even see the subsidy amount.

NOTE: Connecticut’s site shows “you are eligible for a subsidy up to $X” but does not show the subsidy amount on the page where you pick policies. I use this site, as you could shop, from the beginning, without putting in any of your information until you checked out.

The media currently heralds 2013, through October, as a banner year for healthcare, with the lowest rise in the cost of healthcare in decades. They also state 2012 was on track with other years, down from 2010 and 2011. This is stated to be proof the ACA works. But does it? In 2010 premium prices went up in the double digits, and in 2011 they went up almost 10%, the highest rises in decades. Is 2013 proof the ACA works, or an overly pessimistic market responding? When the new ACA fees went into effect, perhaps the insurance industry panicked?

I predicted the fees would drive up the cost of healthcare when I read the bill (unlike some of our Congress Critters?). The fees, in the billions, had to have an impact. I now see a big rise in 2014 as well. Why? Because we have pushed so much of the law into 2015.

Here is something to consider. Insurers set 2014 prices in 2013, with the idea that 7 million would sign up on the rolls by the end of March. But the provision that individuals sign up or be fined has been pushed to 2015. And the provision that the insurance companies must fit the framework is also pushed to 2015. The insurance industry has a lot of leeway in 2014, and you may even see mid-year hikes in premiums, à la this year.

Better Insurance?

We then come to the question of whether moving up to par gives you better insurance. If you truly had subpar insurance, as determined by you, not the government, then you may have a better written policy. But it may not truly be a better policy.

One of the unintended consequences of this law is the lengths both medical providers and insurance companies will go to in order to make money. Many medical providers have opted out of ACA exchange plans, deciding to take only employer plans (and other group plans) at this time. They are willing to forgo less than 5% of the populace, as exchange plans pay less for medical services.

What this means to exchange plan holders is they may not be able to see their doctor, as he is not taking the plans. This does not sound too bad until you look at places where the best hospital for a given type of care has opted out of exchange plans. Or worse: in the case of Seattle Children’s Hospital, which was dropped from exchange plans, families have to travel hundreds of miles to get cancer care for their children, or choose care in an adult hospital that is not skilled in pediatric cancer, thus lessening care. This is not critical in all areas, of course.

Since I mentioned Seattle Children’s Hospital, I should note that not all limitations in choice come from providers opting out of exchange plans. In the case of Seattle Children’s Hospital, it was the insurance company that dropped the hospital. This means some of us may see our care lessened even if we are still covered by group plans, as a reaction to the exchange plans.

Either way, the policy is better on paper, but not necessarily better in the real world. And tightening networks is not the only way the insurance might have gotten worse.

Under the ACA the playing field is leveled quite a bit. On the positive side, it means those previously unable to get insurance now can. But the price has to go up, sometimes significantly, for the healthy to pay for the sick being added to the rolls. There is a push to get young people on the rolls, as they are healthy and would pay a large portion of the bill. In fact, a 40-year-old will get better rates than a 26-year-old. Don’t believe me? Run the same plan for two males, one 26 and one 40, both with zero dependents. Nice, huh?

Affordable Insurance?

The ACA has made insurance more affordable for some. Those, for example, who have their entire premiums paid and their deductibles and out-of-pocket maxes lowered have more affordable insurance. If they also have a pre-existing condition, they have a major win.

But since the average American, even with our horrible diet, does not fit these cases, are we not setting the rules for the exception rather than the rule?

I know a great many cancer moms (see note) who saw the ACA as a godsend. Many are now seeing it will push them closer to bankruptcy unless they lower their income significantly. Under the ACA, unless you have a good portion of your insurance paid for you, you are going to pay more per year for a serious illness in 2014 than you did in 2013. I am not stating this is true for all, just for a large number of people. Given the minimum standards, it is easy to see why.

NOTE: My youngest daughter is a 5-year (almost 6-year) cancer survivor.

The Bottom Line

Some of the people who are getting notices truly had what all of us would agree was subpar insurance. But some did not. Some just had insurance that missed one of the provisions the government says makes insurance par. They actually had better, more affordable insurance.

The picture is not all doom and gloom, but there were other ways to solve this. And here are a few facts that may shock some of you, as you may be hearing things to the contrary. First, the Republicans did have alternatives to parts of the plan. You may wish to debate whether or not they were good alternatives, but to say there were none is patently false. It takes a bit of a Google search to find other voices, but they are there if you search hard enough. Second, the Republicans are not balking at the ACA only now that it is being put into action: not one of them voted for it. You can look at the roll call and find this out. (And please don’t comment back with the action in the House to suspend debate, which some Republicans voted for; that was not a vote for the bill. In fact, the original House bill had NOTHING to do with healthcare. The Senate stripped 100% of the wording from the bill and started over, something that violates the spirit of the law, though apparently not the letter.)

The ACA was voted in as soon as the Democrats, with the Independents, had a filibuster-proof majority. This would not matter today, as Reid changed the rules of the Senate so a filibuster can be broken by a simple majority. It was not a compromise bill; it was one that fit one party’s view of what is good and bad, without debate from the contrary side. This does not mean the law is wrong, only that there is a greater chance at least some of the provisions are flawed, if not downright bad, as no ideology is right 100% of the time. There is both good and bad in the law, and, from my reading of the law and the HHS provisions, a lot of it is bad, as it does not meet its objectives and serves to drive up prices.

The author of the article, Maggie Mahar, is almost assuredly right that the woman could have gotten cheaper insurance than the $1000 option(s) quoted on the cancellation notice. But the case is anecdotal. What is factual, from a scientific standpoint, is that insurance rates are going up significantly in most states for individual policy holders. Not 300%, but at a rate much higher than rates went up prior to the ACA passing.

But Greg, you might say, I see plenty of examples where the rates are much lower. Do you? Or do you see a lower total amount? In the cases I have seen, the lowered total is the amount after subsidies. If the American taxpayer is paying a large portion of your premiums, of course the total cost, for you, is lower. But the actual cost, on which the rate is based, is higher. There are only a couple of states where this is currently not true.

To put this in perspective, let me consider my daughters. Suppose one wanted a new toy that costs $200 and I decided to buy it for her, so her out-of-pocket cost was $0. Did the price go down from $200 to $0? No. The cost remained the same. Let’s say the toy went up in price to $250 and I still decided to pay all of it. In that case, the price did not even remain the same, but from her perspective, the cost was $0, despite a $250 price. Much of the talk I see on the Internet about lower premiums is not about lower premiums at all. It is about lower out-of-pocket costs for premiums, due to the American taxpayer footing part of the bill. What gets me is that some people are paying more even with subsidies.

My point here is this:

  1. Naming the Tea Party boogie man is a red herring.
  2. While some stories out there may be exaggerated (a 300% increase in premiums), that does not mean higher-than-average increases are not happening. And, by looking at the facts, you can easily find that rates in many states are going up far more than the average.
  3. Insurance under the ACA is not necessarily cheaper. Even where we have cheaper premiums and better policies, the cost of deductibles and out-of-pocket maxes may bankrupt families. Add on shrinking networks, and it could be worse.
  4. Anything is less costly to you when someone else is footing the bill.

Peace and Grace,
Greg

Twitter: @gbworld

Using Git with Visual Studio ALM: Why Visual Studio ALM?


In this blog entry, I want to give the reasoning for using Visual Studio ALM 2013. The “case study” here is based on a client that uses Visual Studio and Git but does not utilize any unified system for Application Lifecycle Management (ALM). This is the intro for a series of video posts on how to do this.

This particular post uses the words “Visual Studio ALM” as a high-level concept. As I drill down in later entries, you will start to see different products, and parts of products, coming out of this larger concept. Using “Visual Studio ALM” is much like using an analogy: it is a good way to start the conversation and get the idea nailed down, but it breaks down eventually. As a takeaway, understand that much of what “Visual Studio ALM” will mean, in the context of this series, is “Team Foundation Server”.

ABC123 Company

The series here is going to focus on a fictional company, ABC123, which produces children’s content for websites. The company has a website and some web services, and it offers content services, as well as a child health advice line, for a wide variety of partners.

The development team uses Git for their source repository, a decision made both because one of the senior developers was familiar with it and because of its open source nature. They have adopted OnTime for time tracking, and a consulting company helped them utilize the tool for Agile project management. Some groups also use Trello to facilitate a more Kanban type of approach to Agile project management, but those teams still enter time into OnTime. JetBrains’ TeamCity has been picked up for build management, which is set up in a continuous integration model. There is no continuous delivery at this time, but the concept is being considered for adoption.

The Project Management Office (PMO) initially handles both portfolio management and tasks in Excel; the tasks are then placed into OnTime and/or Trello for the development team to work on. The PMO also uses Microsoft Project to track projects.

Due to the use of a variety of programs, most of which are not commonly used in the marketplace, the learning curve for new developers is rather high.

Recommendations

Here are some recommendations given to ABC123.

  1. Move from a server farm to a virtual environment utilizing multiple VMs.
  2. Implement Visual Studio ALM.

Some specific pieces of “Visual Studio ALM” I would like to implement are:

  • TFS integration with Excel and Project (longer term, Project Server is a possibility)
  • Lab Management
  • Work Item Tracking
  • Agile Reports
  • Git Integration

Recommendation 1: Use a Virtual Environment

This first recommendation is not part of the series, but I am including it because virtualization is mandatory on some level. Lab Management requires virtualization to work, so you have to work with virtual machines there at the very least. To create labs that more closely mimic production, virtualization is advised in the production environment as well.

I am of the belief that we will eventually move more and more of our stuff to the cloud, either private or public. Technically, a cloud is possible on physical machines, but it is more commonly done virtually, as virtualization allows you to more quickly expand your cloud, or clouds. This is a topic far beyond the scope of this entry, but the takeaway is you are likely to virtualize your environment(s) at some point, and biting it off prior to a cloud implementation allows smaller steps instead of some type of “big bang”.

Enough said on this one.

Recommendation 2: Adopt Visual Studio ALM

This recommendation is made for a variety of reasons, but most of them fall into three buckets.

  1. Consolidation of toolsets, to make it easier to accurately track projects.
  2. Integration with common Microsoft products, including Visual Studio, which is currently used for development in the company.
  3. The opening of new scenarios.

The first reason is focused both on a) flattening the learning curve for new developers, saving the company money, and b) allowing additional scenarios that are difficult under the current toolset and environment.

The second reason is to enable the team to use the tools they currently use and eliminate the need to enter the same information into multiple tools.

The third reason focuses on the fact that the current reporting story is rather thin. There is limited reporting on productivity and few, if any, reports on other parts of the development process. There are manual processes for other types of reporting (for example, determining what items were checked in for a particular user story).

Features to Use in Visual Studio 2013 ALM

In future posts, I am going to illustrate many, if not all, of the following features. The only one I am not sure I can illustrate, at least at this time, is Lab Management, but I will work to set up an environment once my powerhouse computer is returned from Dell support.

NOTE: These are the same features mentioned earlier.

  • TFS integration with Excel and Project (longer term, Project Server is a possibility)
  • Lab Management
  • Work Item Tracking
  • Agile Reports
  • Git Integration
     
Integration with Excel and Project

The PMO should not have to change tools or enter work into multiple tools. Conversely, the developers should be able to see the work items without leaving Visual Studio. Visual Studio ALM allows project managers to enter user stories and tasks into TFS repositories from tools like Excel and Project. The data can be pulled back out into these tools at any time, altered, added to, and checked back in so team members can start working on the items.

There are some scenarios that would benefit from the use of Project Server, but this is a longer term recommendation. Most of the functionality needed can be facilitated by using Project with Team Foundation Server 2013.

Lab Management

Lab Management is useful for setting up a test environment that can be reset easily when a new build is released. The ultimate goal is to facilitate continuous integration and continuous delivery scenarios. The idea here is to have the environment reset before a build is pushed and then run all tests in an environment that is similar to the production environment. This does not have to be the main testing environment; it can be a developer test environment separate from environments like SIT (Staging Integration Testing).

Work Item Tracking

Under the current setup, ABC123 has a hard time tracking what has been changed in their code or relating code to particular user stories and tasks. While this may not seem very important, understanding the motivation for a change helps you avoid breaking new features when fixing bugs in old features, and vice versa. The benefit of using Visual Studio ALM here is that source is related to tasks and user stories, so you have the ability to run a variety of reports to see how change occurs.
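As a quick illustration of how this can work (the work item ID below is made up, and the exact behavior depends on your TFS version), a Git commit can reference a work item right in its message, and TFS can link the commit to that work item; the same association can also be made from the Related Work Items section of the commit pane in Team Explorer:

  git commit -m "Limit advice-line search to active partners (fixes #1234)"

Once the commit is pushed to the TFS-hosted repository, the linked commit can be seen from the work item, which is exactly the kind of “what changed for this user story” trail ABC123 is missing today.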

Agile: Reports and Features

Currently ABC123 has very few reports. The tool they are using has a basic burndown chart with an estimated completion date and whether or not the team is on track to hit its expected end date. It also contains a user productivity report that shows all of the time logged in the tool. Gaining further metrics is a manual process.

Now, the word “reports” here is rather generic. The types of information I am talking about are things like the following list. The items that can easily be determined today are shown with an asterisk. Items with a plus can be determined today, but take a bit more work. The remaining items require manual reporting.

  • Burndown (Scrum)
    • Release burndown*
    • Sprint burndown*
    • Individual burndown+
  • Cumulative flow (Kanban)
  • Blocked work items and tasks
  • User work log*
  • User productivity in team
  • Velocity*
  • Code coverage+
  • Code quality metrics

This is not an all-inclusive list, but it covers a good portion of the important features needed for the ABC123 IT staff to complete projects.

NOTE: All of the items are technically possible today, if enough effort is put in.
Git Integration

The company has no desire to move away from Git for source control, and there is no reason they should. But it would be nice to have the majority of the Git functions completely integrated into Visual Studio, and to have the ability to report on code in the same reporting system in which the project is tracked. In addition, it would be nice to be able to track what pieces of code were checked in with different tasks, work items and/or user stories. Visual Studio ALM is useful in all of these scenarios.
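To give a feel for the integration from the command line, here is a minimal sketch of pointing an existing local repository at a Git repository hosted in TFS 2013; the server, collection, and project names are hypothetical placeholders:

  git remote add tfs http://tfsserver:8080/tfs/DefaultCollection/ABC123/_git/ABC123
  git push tfs master

Because the remote is an ordinary Git endpoint, the team keeps its existing Git workflow; TFS simply sits behind it, adding the work item linkage and reporting discussed above.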

Summary

This blog entry sets up the scenario for the rest of the series. From this point on, I am going to cover a variety of topics, including:

  • Setting up Git on user machines
  • Creating/adapting projects to use TFS with Git
  • Agile project management with Visual Studio ALM

Next in series: Setting up Git on Windows
Peace and Grace,
Greg

Twitter: @gbworld
YouTube: http://www.youtube.com/gabworld
