A Side Note: "Faith in the Machinery" by Ed Cook
After the explosion of the Space Shuttle Challenger, Congress formed the Rogers Commission to investigate the technical and systemic causes of the accident. One of the members of that commission was Nobel Prize-winning physicist Richard Feynman. In his report, he wrote:
“It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask “What is the cause of management’s fantastic faith in the machinery?””
As software testers, we are generally hired to find problems in software and report them. From a day-to-day perspective, this is true. From a larger perspective, however, our job is to dispel the "faith in the machinery".
Applications go into production with high- and critical-severity defects. The industry accepts this as an inherent risk of building complex software customized to the user's needs. From an IT perspective, that acceptance usually rests on the assumption that those defects happen to other people. The parallel to the Challenger disaster is that the managers at Thiokol (the contractor supplying the faulty part) and at NASA did not fly on the space shuttle.
In a perfect world, testers would feel no fear standing up and saying, "Look, this software isn't ready. I wouldn't trust putting my important data in it, and neither should you." In the real world, that tester will probably soon be collecting an unemployment check.
Instead, our only chance as testers to bring faith in line with reality is to gather hard data. While gathering metrics is generally not the most fun part of testing, it is often the only way to clearly show that fundamental issues remain unresolved.
Don't just show that 50 defects were opened last week, 20% fewer than three months ago. Show that after three months, you're still averaging 10 defects a week where users can enter data that causes data corruption or breaks data manipulation processes (such as background reporting jobs).
Don’t just show that the average severity of defects has gone from 1.83 to 2.04 over five months. Show that the same test run 10 times over 5 months failed in 10 different places, because fixing one thing keeps breaking something somewhere else.
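The trend metrics described above can be pulled from any defect log. A minimal sketch in Python, using an entirely hypothetical log (the record format and the data are illustrative, not from any real tracker):

```python
from collections import defaultdict

# Hypothetical defect log: (week_opened, severity), where severity 1 = critical.
defects = [
    (1, 1), (1, 2), (1, 3),
    (2, 1), (2, 2),
    (3, 2), (3, 3), (3, 3),
]

def weekly_counts(log):
    """Count how many defects were opened in each week."""
    counts = defaultdict(int)
    for week, _severity in log:
        counts[week] += 1
    return dict(counts)

def average_severity(log):
    """Mean severity across all logged defects."""
    return sum(severity for _week, severity in log) / len(log)

print(weekly_counts(defects))      # defects opened per week
print(average_severity(defects))   # overall average severity
```

A raw count or a single average, as the text warns, hides where the failures cluster; grouping by severity or by failing component (an extra key in each record) is the step that turns the numbers into an argument.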
And most importantly, when you find evidence that the application isn't ready, take it to the development lead first and talk it over with them. Even if they don't agree with your metrics, dropping a bomb on them in a project meeting is counterproductive. Delivering the truth in the wrong way can be just as destructive as burying the truth.
Gathering these metrics is where root cause analysis, consistent tracking of severities and resolutions, and an honest appraisal of our testing results come in. It does require more work on the front end (always err on the side of gathering more information, since you never know what you'll need to report to back up a conclusion), and you can't put "kept the customer from horribly botching their implementation" on your resume, but your career as a software tester tends to last longer if the company you're working for stays in business.
And if that doesn't work, consider suggesting that management read Feynman's final report on the disaster (found here at http://www.fotuva.org/feynman/challenger-appendix.html). It is an excellent discourse, in layman's terms, on how evidence was systematically ignored so as not to disrupt a rosy picture of safety and success.
“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”