Automated Testing is NOT optional


Once upon a time there was a young apprentice who did not understand why automated testing was so critical. As he was under pressure to churn out magic for an evil wizard, he felt he did not have time to test his spells. In addition, he did not know how to tell if the spells worked correctly. Thusly, he randomly Harry Pottered the incantations and hoped for the best. As he had learned not to fry off his own appendages, he felt this fire-and-observe method was adequate and saw no reason to change.

A medieval tale? Nay, I say. This is today. And I am finding, despite years of talking about testing and its variants at conferences, on blogs and even in video courses, that so many people still don’t get it. I will distill why automated testing is important and then see if I can drill into some details and get you back on the right course. And, if you are already doing everything I state in this post, then find a place with two-for-one spells and that second beer is on me. 😀

A cautionary tale

In this story, our hero has inherited an app. Well, not exactly, as the app has to be stabilized and released prior to takeover. The only thing standing in the way is testing. And, since precious little is automated, the testing is manual. And slow. And error-ridden. And … okay, I will stop there, as I think you get the picture.

In each testing cycle, as things are fixed, the testing gets deeper and deeper, and finds more bugs. These bugs were not caught in the original runs, as the manual tests were running into surface bugs. As the surface bugs disappear, the testers have time to start hitting fringe cases. And, as these are wiped out, the testers start testing at the same time and run into cross-account bugs, meaning one person sees someone else’s data. Along the way, you see circular bugs, where fixing one brings up the other and vice versa.

Sound familiar? Every single one of these problems can be solved by automated testing. Every single one.

But, Greg, automated tests do not solve this problem. You still have to go after surface bugs (those on the happy path) and drill deeper.

True, but once I write an automated test, it is part of my library of tests which I can run over and over again. Yes, I can do this by creating test plans and checklists and manually testing, but – and here is the key – I have to rely on discipline to ensure all tests are completed each time. Plus, the automated test session is a few minutes in length (seconds, or less, in the case of unit testing) whereas my manual testing session may take hours. I have been playing this game for the past week and I have spent more than 8 hours, plus 8 hours of another employee’s time, to manually test a solution lacking automated tests.

Yes, there is the other side of the coin: the time necessary to write the tests. Initially, you take a productivity hit writing tests, as it takes time. They pay dividends when you get to maintaining your solution, however.

Here is what I have found in the past week (all of which could have been found through automated tests):

  • Buttons not wired up – UI testing
  • Items that do not delete – Unit testing
  • Items that appear to delete, but stay – UI testing
  • Validation failures – Unit tests (backend) and UI testing (front end)
  • Items that cross accounts – Unit testing and/or UI testing
  • Slow reports – performance testing
  • Autosave issues – JavaScript unit testing

Every single test above could have been automated. Let’s start with the tooling, and then I will talk about maintainability and acceptability.

Lions and Tigers and Test Frameworks … Oh My!

One of the first questions I get asked is “which test framework is best?” My answer: YES.

Honestly, I don’t care what test framework you use. I most often use MSTest, which is included with Visual Studio. I sometimes augment with a BDD framework like SpecFlow. But, I have worked with NUnit, XUnit, JUnit and LMNOP (which stands for “insert your favorite here”). I can tell you strengths and weaknesses of some of them, but I have yet to find a case where one of the ones I have been using for some time just sucked. Start somewhere.

In your toolbelt, you should consider, at minimum:

  • A unit test framework – in .NET, this will likely be MSTest, NUnit or xUnit – as mentioned, I generally use MSTest. Wikipedia has a nice listing of unit test frameworks which covers a plethora of languages.
  • A test runner – Visual Studio has one built in. ReSharper adds its own, which has some nice features (you use ReSharper, right?). NUnit comes with one, etc. In other words, don’t sweat the small stuff.
  • A mocking framework – I bounce between Rhino Mocks and Moq, but this has been a customer-driven decision.
  • A JavaScript unit test framework – unless you never do web or hybrid mobile work, I would definitely add this. I happen to gravitate to Jasmine, but have played a bit with JsUnit, Karma and Mocha.
  • A user interface focused test tool – Selenium is the main open source tool for this type of testing. I have picked up Katalon recently to evaluate, as we are on Mac and PC now. Katalon sits on top of Selenium and works in Chrome, which was a requirement for us.
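To make the last item concrete, here is a minimal sketch of a Selenium UI test in C#. It assumes the Selenium.WebDriver NuGet package and a local ChromeDriver install; the URL and element IDs are hypothetical placeholders, not from a real app. This is the kind of test that would have caught the “items that appear to delete, but stay” bug above.

```csharp
// Sketch: Selenium UI test for a delete button, MSTest style.
// Assumes Selenium.WebDriver NuGet package + ChromeDriver on PATH.
// URL and element IDs below are hypothetical.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class WhenDeletingAnItem {
    [TestMethod]
    public void ShouldRemoveTheRowFromTheGrid() {
        using (IWebDriver driver = new ChromeDriver()) {
            driver.Navigate().GoToUrl("https://localhost:5001/items");
            int rowsBefore = driver.FindElements(By.CssSelector("#itemGrid tr")).Count;

            // Click delete and confirm, then verify the row is actually gone
            driver.FindElement(By.Id("deleteItem42")).Click();
            driver.FindElement(By.Id("confirmDelete")).Click();

            int rowsAfter = driver.FindElements(By.CssSelector("#itemGrid tr")).Count;
            Assert.AreEqual(rowsBefore - 1, rowsAfter, "Row was not removed after delete.");
        }
    }
}
```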

A few other things you might be interested in

  • A BDD framework – I use SpecFlow in .NET, but there is a bit of a learning curve. Plus, you can write behavior focused tests without making the BDD paradigm shift (more on this below).
  • An acceptance test tool – I happen to be fond of FitNesse, as you can set it up with either Excel or a wiki and let the business add test conditions. It gets a bit strange when your tests magically fail because someone has edited the wiki, but it sure shortens the feedback loop. Please note you can test for acceptance without a separate tool by making sure you identify all of the conditions.
  • Some type of static code analysis – Visual Studio contains some tooling, which you can augment with extensions like Microsoft Code Analysis. I also recommend having static code analysis as part of your delivery pipeline if you use Continuous Delivery disciplines.

It is also nice to have a means of getting to fringe conditions in your tests – Intellitest in Visual Studio Enterprise (the evolution of Pex) is a great tool for this (Pex and Moles were tools from Microsoft research that were included in earlier versions of Visual Studio).

Well, I don’t accept that

What are you trying to get from automated testing? When I ask this question, the general answer focuses on quality code. But what is quality code? Certainly bug free, or at least critical bug free, is a must. But is that enough? I say no.

When we look at development and maintenance, maintenance is the long pole. The better your tests are, the easier your solution is to maintain. This should be obvious, as the tests provide guides to ensure any maintenance fixes avoid creating additional bugs you now have to conquer (think about those circular, fixing A causes B and fixing B causes A, types of bugs).

But I think we have to go beyond quality alone and think in terms of acceptability. This means you have to ask questions to determine what behaviors are acceptable. More important, you have to ask the RIGHT questions. Below is a sample Q&A with a business user.

User: A person can enter a number between one and one hundred in this box.

Dev: What if they put in zero?

User: It should warn them.

Dev: Same with negative numbers?

User: Yes, and everything over one hundred as well.

Dev: Can we make it a drop down so they can only pick one to one hundred?

User: We find drop downs get too unwieldy for our users when there are 100 numbers.

Dev: And no letters, right?

User: Exactly.

Dev: Can we put in something that ignores it when someone types letters in that field, plus hint text, like on the user name and password fields, saying 1 to 100?

User: Oh, both would be great.

From this conversation, we understand that acceptance includes a user interface that stops a user from entering anything other than the integers 1 to 100. For many developers, adding constraints to the user interface would be enough to ensure this. But one should understand the business code must also contain checks, because code should never trust user input.
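That backend check can be boiled down to a few lines. Here is a minimal sketch of the business-layer rule from the conversation; the class and method names are hypothetical, not from a real codebase.

```csharp
// Sketch: the business-layer check behind the 1-to-100 field.
// Runs regardless of what the UI allowed through, because code
// should never trust user input. Names are hypothetical.
using System;

public static class QuantityValidator {
    // True only for whole numbers from 1 to 100.
    public static bool IsValid(string input) {
        int value;
        if (!int.TryParse(input, out value)) return false; // letters, blanks, "1.5"
        return value >= 1 && value <= 100;                 // rejects 0, negatives, 101+
    }
}
```

Unit tests then pin down every condition the user named: zero, negatives, values over one hundred, and letters all fail; 1 and 100 pass.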

As you go through this exercise with your end users, you will find acceptability goes beyond quality alone. It deals with perception, as well. And users will help you create a more comprehensive test solution. In summary, think about both maintainability and acceptability as you build your test suites.

Software is a Bunch of BS

Some days I think of male bovine output when I see code, as some of it really smells. But when I talk about software being BS in conference talks, I am thinking about the dual aspects of code, which are behavior and state.

  • Your users are most concerned with state. When they pull out a record and change it, they want to be confident the new data is placed in the database, so they can find it in that state during their next session.
  • Your dev team is most concerned with behavior, or what the application does to state (the domain objects) as a user interacts with it. The behavior should result in the correct state changes.

This is why I recommend testing that focuses on behavior. I have, in the past, used a tool called SpecFlow when coding in .NET languages (SpecFlow can be installed directly into Visual Studio through extensions; note there are different extensions for different versions of Visual Studio, including 2017). SpecFlow requires a unit test framework underneath and works with many (MSTest, NUnit and xUnit are all supported, although xUnit requires a separate NuGet package).

The upside of SpecFlow, for me, was when I pushed code over to teams in India. I could write the cucumber, generate the scaffolding, and have them use BDD to create the code. For the first week or two, I would correct their tests, but once they got it down, I knew I could create the test scaffolds and get the desired results. The downside was the learning curve, as writing tests in cucumber takes some practice. There is also a paradigm shift in thinking from standard unit tests.

If SpecFlow is a bit too much of a curve right now, you can write behavior tests in a unit test framework by moving away from a single “UserTests” class with multiple tests to a single test per behavior. I will write a complete entry on this later, but the following pattern works well for behavior (I have the template outline in bullets and then a very simple example).

  • Class is named for the behavior being tested. In this case the class is named “WhenMultiplyingFiveTimesFive”
  • The code that runs the behavior is in the Class Initialization method
  • The variables accessed by the tests are kept in private static variables
  • The tests contain expected values and assertions

And here is the class.

[TestClass]
public class WhenMultiplyingFiveTimesFive {
    private static int _inputA = 5;
    private static int _inputB = 5;
    private static int _actual;

    [ClassInitialize]
    public static void ClassInitialize(TestContext context) {
        _actual = MathLib.Multiply(_inputA, _inputB);
    }

    [TestMethod]
    public void ShouldReturn25() //This can also be ThenIGet25 or similar (cucumber)
    {
        int expected = 25;
        Assert.AreEqual(expected, _actual, string.Format("Returned {0} instead of {1}.", _actual, expected));
    }
}

The downside here is you end up with a lot of individual test classes, many with a single test. As you start dealing with more complex objects, your _actual will be the object itself and you can test multiple values. On the positive side, I can generate a good portion of my tests by creating a simple program to read my domain models and auto-generate stub test classes. It can’t easily figure out the When condition, but I can stub in all of the Should results (i.e., the tests). If you opt for the SpecFlow direction, you will generate your scaffold test classes from cucumber statements, like:

    Given I am Logged In
    When I Call the Multiply method with 5 as A and 5 as B
    Then it returns 25

Or similar. SpecFlow will write out annotated stub methods for each of the steps above, which is much like a coding paint-by-numbers set.
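For a sense of what that scaffold looks like, here is a sketch of a SpecFlow step-definition class for the Gherkin above. The generated stubs actually throw a pending-step exception until you fill them in; the bodies below are mine, wired to the MathLib example from earlier (itself a hypothetical library).

```csharp
// Sketch: SpecFlow step definitions for the Gherkin scenario above.
// Assumes the SpecFlow NuGet package plus an MSTest runner; the
// method bodies are illustrative, not SpecFlow's generated output.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TechTalk.SpecFlow;

[Binding]
public class MultiplySteps {
    private int _actual;

    [Given(@"I am Logged In")]
    public void GivenIAmLoggedIn() {
        // Arrange any session state the scenario needs.
    }

    [When(@"I Call the Multiply method with (\d+) as A and (\d+) as B")]
    public void WhenICallTheMultiplyMethod(int a, int b) {
        _actual = MathLib.Multiply(a, b);
    }

    [Then(@"it returns (\d+)")]
    public void ThenItReturns(int expected) {
        Assert.AreEqual(expected, _actual);
    }
}
```

The regular-expression capture groups in the attributes bind the numbers in the Gherkin to the method parameters, so one step definition serves many scenarios.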

Process This

I will have to draw this subject out in another blog entry, as testing discipline is very important. For the time being, I will cover the basics: the DOs and DO NOTs of a good testing strategy.

  • DO write tests for all behavior in your system
  • DO NOT write tests for domain objects. If the language vendor has screwed up getters and setters, you have a bigger problem than testing. One exception would be domain objects with behavior (although in my shop you would have to have a really good argument for including behavior in state objects)
  • DO ensure you have proper test coverage to match acceptability
  • DO NOT use code coverage metrics as a metric of success. 100% coverage with crap tests still yields crap software.
  • DO ensure every bug has a test. DO write the test BEFORE you fix the bug. If you cannot write a test that goes red for the condition, you do not understand the bug. If you do not understand the bug, you should not be attempting to fix it.
  • DO ask questions of your users to better understand the user interactions. This will help you create tests focused on acceptability.
  • DO look at fringe conditions and conditions outside of the box.
  • DO add tests prior to refactoring (assuming there are not adequate tests on the code at the time).
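The “test BEFORE you fix” rule can be this small. Here is a sketch of a red-first regression test, again against the hypothetical MathLib from earlier, for an imagined bug where multiplying by zero returned the wrong result; the bug number is a placeholder.

```csharp
// Sketch: a regression test written to go RED before the fix goes in.
// MathLib and the bug number are hypothetical. Once the fix lands,
// this goes green and stays in the library, guarding against the
// circular fixing-A-breaks-B scenario.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class WhenMultiplyingFiveTimesZero {
    [TestMethod]
    public void ShouldReturnZero() {
        int actual = MathLib.Multiply(5, 0);
        Assert.AreEqual(0, actual, "Bug #123 regression: 5 * 0 must be 0.");
    }
}
```

If you cannot make a test like this fail against the unfixed code, you have not reproduced the bug yet.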

None of this is automagic. You will have to instill this discipline in yourself and your team. There are no free lunches.

With legacy code, prioritize what needs to be tested. If you are replacing certain sections of the application completely, writing tests for them is probably a waste of time. Focus first on the things that need to be updated, as a priority, so the tests are in place before you change the code. I should note you may have to reorganize a bit prior to adding tests. This is dangerous. But, using refactoring tools (Visual Studio or ReSharper), you can lessen the risk. You need only pull things out far enough that you can add unit tests, so extract method is your best friend here.
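As an illustration of extract method making legacy code testable, here is a minimal sketch. All names and the discount rule are hypothetical; the point is only that the rule moves from an untestable UI event handler into a pure method a unit test can reach.

```csharp
// Sketch: extract method to make a legacy rule testable.
// Before (buried in a WinForms handler, untestable without the form):
//
//   private void SaveButton_Click(object sender, EventArgs e) {
//       decimal total = _subtotal > 100m ? _subtotal * 0.9m : _subtotal;
//       SaveOrder(total);
//   }
//
// After extract method: the rule is a pure, unit-testable function.
public static class PricingRules {
    // 10% discount on subtotals over 100; names and rule are hypothetical.
    public static decimal ApplyDiscount(decimal subtotal) {
        return subtotal > 100m ? subtotal * 0.9m : subtotal;
    }
}
```

The handler then calls `PricingRules.ApplyDiscount(_subtotal)`, and tests can hit the boundary values (100, 100.01) without ever spinning up the UI.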

Summary

If you have never read the book The Checklist Manifesto, I would advise considering it for your bookshelf. The book focuses on how checklists have been used to save lives. Literally! Once you see the importance of a checklist, you can start thinking about automating the checks. Then, you will understand why automated testing is so important.

In this post, I focused on the very high level, mostly aiming at what the developer can do up front to greatly increase quality AND acceptability of the solutions coded. I strongly adhere to the idea of testing as much as is possible. Once I see a pattern in the tests, I generate them (often with Excel, which is a whole story in and of itself). But I focus on ensuring every condition I can think of is put in a test. I will use code coverage as a means of discovering areas I don’t have tests in. This does not mean I will create a test for every piece of code (domain objects being a prime example), but that I use the tools to help me help myself … and my team.

Peace and Grace,
Greg

Twitter: @gbworld
