Grand Illusions in Software Testing
Entering high school as a freshman in 1977 came along with new friends and new music, including one incredible rock-n-roll band named Styx. They had just released a new album with the fantastic title track, The Grand Illusion.
In the world of software testing, a grand illusion also exists. From a distance it appears to be a chart-topping hit, yet in reality it’s a disorder that can infect an entire testing organization. The condition can even receive accolades because of the outward appearance of considerable activity, yet in the end it reduces little risk and can even introduce new risks into the product lifecycle.
Consider for a moment that good and lousy testing can look strikingly similar from a distance. Both are full of busyness: launching apps and browsers, engaging all sorts of tools. Yet amid all this activity, it’s reasonable to ask: is anything actually getting tested?
The following are useful considerations for the Test Manager gauging whether their team is applying sound testing principles or is merely a local cover band mistaken for the real thing.
1) Plan your test, then test your plan
If you take only one point away from this article, make it this one: teach your team how to plan their testing. Testing without a plan is like going into battle without one. Sure, you might get a few shots off, but this Rambo-like moment will likely end swiftly and tragically.
There are many tools and approaches to test planning. Find one that fits within your budget, culture, and organization, then commit to it. The team I work on uses METS, the Minimal Essential Testing Strategy (METSTesting.com), but many options exist. Be determined to create a plan and then follow it in your testing efforts. Remember to allocate time for exploratory testing, but don’t make it your entire approach.
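Whatever planning approach you commit to, the core idea is the same: name your functional areas, rank them, and budget time against that ranking so a short test window still covers what matters most. The sketch below is a minimal, hypothetical illustration of that idea in Python; the area names, priority labels, and time budgets are invented examples, not part of METS or any other specific methodology.

```python
# Illustrative sketch: representing a prioritized test plan as data.
# All area names, priorities, and time budgets below are hypothetical.
from dataclasses import dataclass

@dataclass
class PlanItem:
    area: str             # functional area of the application
    priority: str         # "critical", "high", or "medium"
    time_budget_min: int  # planned testing time in minutes

plan = [
    PlanItem("login", "critical", 60),
    PlanItem("checkout", "critical", 90),
    PlanItem("order history", "high", 45),
    PlanItem("profile settings", "medium", 30),
]

def items_by_priority(plan, priority):
    """Return the plan items at a given priority, so a short test
    window can still cover the most critical areas first."""
    return [item for item in plan if item.priority == priority]

critical = items_by_priority(plan, "critical")
print([item.area for item in critical])          # areas to test first
print(sum(i.time_budget_min for i in critical))  # minutes needed for them
```

Even a plan this simple makes the conversation concrete: when time is cut, everyone can see which areas get tested and which get deferred.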
2) Commit to keeping Test Cases up to date
Along with your Test Plan should be the supporting Test Cases, which provide the details for testing the functional areas of your application. Build into your team’s weekly routine time to keep test assets such as Test Cases and Test Plans up to date. Updates need to be something the entire team participates in, not just the new guy who gets “stuck” with the paperwork. Identify those who will lead this work by example and mentor less experienced engineers.
3) Write reusable Test Cases
Somewhere along my testing journey, I encountered the oddest thing: Test Engineers rewriting test cases for every new release. While this might be a consultant’s dream come true, this unnecessary busywork can seem legitimate if you don’t ask questions. Focus on writing test cases so that they are accessible and reusable for your team.
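One common way to make a test case reusable is to separate the steps from the data: the steps are written once, and each release only updates the data table. The sketch below illustrates this data-driven pattern in Python; `validate_login`, the field names, and the credentials are all hypothetical stand-ins, not a real system.

```python
# Illustrative sketch of a reusable, data-driven test case.
# Instead of rewriting the "login" test for every release, the steps
# are written once and the release-specific values live in data.

def validate_login(username, password):
    """Hypothetical stand-in for the system under test:
    accepts exactly one known user."""
    return username == "qa_user" and password == "s3cret"

# Release-specific data lives here; the test logic below never changes.
login_cases = [
    {"name": "valid user",     "user": "qa_user", "pw": "s3cret", "expect": True},
    {"name": "wrong password", "user": "qa_user", "pw": "oops",   "expect": False},
    {"name": "unknown user",   "user": "nobody",  "pw": "s3cret", "expect": False},
]

def run_login_cases(cases):
    """Run every case through the same steps; return the names
    of any cases whose outcome did not match expectations."""
    return [c["name"] for c in cases
            if validate_login(c["user"], c["pw"]) != c["expect"]]

failures = run_login_cases(login_cases)
print(failures)  # an empty list means every case passed
```

When the next release adds a new login rule, the team adds a row of data rather than rewriting the test, which is exactly the reuse this section argues for.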
4) Organization and Accessibility
Name and organize Test Cases so they can easily be accessed, updated, and utilized for current and future work. Ask team members about the work they’re doing around managing test cases. Be wary of overly complicated approaches, or of responses that the test cases are memorized, so documentation isn’t necessary.
5) Don’t be fooled by tools
The Quality world has an abundance of tools. From freebies to costly suites, if you’re leading a quality team, you’ll be inundated with tool possibilities.
Whether it’s a well-meaning team member or a “Slick Willy” salesperson, tools can often sound like the solution that will save the day. Have a clear understanding of what you’re trying to accomplish with tools. Some have plenty of whiz-bang bells and whistles yet don’t give testers any significant advantage and, worse, can distract from meaningful testing.
Tool usage can appear legitimate from a distance, with sparkling reports and volumes of data. Yet all that pizazz may not be telling you anything meaningful at all. If you’re going to buy a tool, work with the vendor to get a working trial to kick the tires and ensure it’s the right tool for you. Allocate a member of your team to honestly review the tool against needs you’re trying to solve.
If you’re considering free or Open Source tools, evaluate whether they are well supported. When was the tool last updated? How large is the community that supports and keeps it going? It’s awful to become dependent on a tool that hasn’t been updated in years, where the organization that built and maintained it has moved on to greener pastures.
6) Automation without Evaluation
Automation engineers are a unique bunch. I’ve been working in some form of automation for over 35 years and have seen just about everything you can imagine automated.
An essential fundamental of building any automation is ensuring the thing you want to automate is a good candidate. Automation is the area where I see the grand illusion surface more than anywhere else. The reason is simple: automation is typically fun to develop and is often measured by how fast it gets a task completed.
So what if the automated task is not reliable, a condition we call “flaky”? What if very little is getting tested in all that clicking and changing of screens? Automated execution is mesmerizing to watch, like a magician performing sleight of hand. The “test” completed in 8.5 seconds, so it must be good, right?
Use this simple rule when considering what to automate: focus your development efforts on the critical tests. Review each one before a line of automation code is written to determine whether it can reliably be automated and maintained over the long haul.
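One way to put that review into practice is to vet a candidate step for reliability before investing in it: run it repeatedly and only promote it into the suite if its pass rate clears a bar. The sketch below is a hypothetical illustration; the simulated flaky step, the 100-run sample, and the 95% threshold are all assumptions you would tune to your own context.

```python
# Illustrative sketch: vetting an automation candidate for reliability
# before promoting it into the regression suite.
import random

def candidate_step(rng):
    """Hypothetical stand-in for an automated step that intermittently
    fails, e.g. due to timing issues -- the classic 'flaky' behavior."""
    return rng.random() > 0.3  # passes roughly 70% of the time

def is_reliable(step, runs=100, threshold=0.95, seed=42):
    """Run the candidate step many times and accept it only if its
    pass rate meets the threshold. Seeded so the vetting repeats."""
    rng = random.Random(seed)
    passes = sum(1 for _ in range(runs) if step(rng))
    return passes / runs >= threshold

print(is_reliable(candidate_step))  # a flaky step should not be promoted
```

A step that fails this kind of check isn’t ready to automate; it needs stabilizing first, or it belongs with the manual tests until it does.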
7) Rogue Test Automation
Rogue test automation is automation built without much oversight or long-term consideration. One day it just shows up when someone’s on vacation and an alternate Test Engineer or Developer is trying to find the “automation” and get it running.
Rogue test automation carries multiple unanticipated risks. The first is that it’s called “Test” automation: its very name sets an expectation that this thing living on Bob’s computer must test something. Rogue test automation is typically created by well-meaning people. As with all automation, though, how sustainable it is, and who will maintain it once Bob moves on to greener pastures, gets overlooked.
Talk with your team about automation. Don’t forget to include the reality that automation can be a lot of fun to create but comes at a cost to the team. Once you write automation, you will (I guarantee it) have to maintain that automation in the future.
8) Exploratory and Ad-hoc testing
I’ve yet to meet a Tester who doesn’t love exploratory or ad-hoc testing. This type of testing encourages the Test Engineer to go wherever their curiosity leads. It’s fun and alluring, bringing a high all its own.
This type of testing is also typically done without any plan. To the untrained eye, it looks precisely like planned testing, and this is where the grand illusion surfaces.
Have an intentional talk with your Test Engineers about how much time goes toward exploratory testing versus planned testing. A good starting ratio might be 80% planned and 20% exploratory. Adjust these numbers to fit your objectives, then discuss them with your team.
Look beyond the busy activities of your Testers. Determine whether the right testing is getting done, testing that genuinely reduces risk to your products. Be aware that a grand illusion exists; it can go unnoticed by an untrained eye, yet it may cost you an encore performance.
Pursuing the Craft of Testing, join me in this journey.