Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


A Side Note: The failure to test consistently is inexcusable by Howard Clark

October 23rd, 2007 · No Comments · Uncategorized

Over time everything becomes clear, and if there is one thing I am grateful for, it is the inclusion of science as part of my academic background.  That classical training and rigor seem so second nature to me that I normally just chalk them up to common sense, something sadly missing from a lot of the testing done these days.  Much like developers who on their best day are just a bunch of hacks, there is an equivalent in the testing arena.  But in performance testing there can be no leeway for haphazard, undocumented, ad-hoc, off-the-cuff, press-start-without-knowing-where-you-finish testing.  I say this to point out a clear difference between something that is functionally deficient and something that does not perform well.  Both are equally weighted; anyone who argues that correctness matters more in one than the other is dead wrong, as they are complementary after all.  The difference lies in how people respond to a functional failure versus a failure in a system's performance, due to the way these errors manifest themselves.  It's a play on the psychology of the user: an error message associated with a functional defect gives the user something to go on; at some level the user says, OK, I can send this to the developers and they can fix it.  A user can become accustomed to a certain error rate and develop workarounds until the issue is mitigated.  But when an application fails due to a lack of capacity, the user tends to get a very foreign response or nothing at all, resulting in the under-the-breath cursing that greets anything that appears not to work at all.

Due to that response, or lack thereof, the urgency to correct such an issue takes on a certain fervor, so let's not compound the problem by failing to test efficiently.  I understand that pressure can mount when mission-critical systems and over-promised timelines collide, but adherence to a strict and unbending testing methodology has to come first; in fact it must come first in order to achieve usable and viable results.  This is especially true when troubleshooting and performing comparative testing in an effort to reproduce errant conditions.  How can it not be?  So let's put on our white lab coats, turn to the professionals, and see how they do it.  In our case we are talking about a controlled experiment of sorts, one that supports a high degree of replication between test instances.  To achieve these conditions, the network, database, application ready state (caching of objects, initialization routines complete, ramp-up into the application, etc.), and hardware need to be normalized to a known state.  Once this is in place, only the variables that need to change to verify a hypothesis, our best guess at the expected outcome, should be changed.  When I say variables, I am accounting for multiple changes that bring a single larger change into effect.  For example, if you change SAN providers, multiple hardware components may need to change, but only those required by the new hardware platform should be part of that change.  Or say you introduce more physical memory; the front-side-bus speed may need to change to take full advantage of that memory.  The controlling variable is still the SAN or the memory addition in both cases.  This elimination of extraneous influences, leaving only the item we're interested in testing, brings more control into our testing.
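One way to make "normalized to a known state" concrete is to capture a fingerprint of the environment before each run and refuse to start if it has drifted from the baseline.  A minimal sketch, assuming the relevant conditions can be collected into a dictionary (the field names and values below are hypothetical):

```python
import hashlib
import json

def environment_fingerprint(state: dict) -> str:
    """Hash a snapshot of the test environment so two runs can be compared."""
    canonical = json.dumps(state, sort_keys=True)  # stable ordering, stable hash
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical snapshot of the controlled conditions before a run.
baseline = {
    "app_build": "2.3.1-b47",
    "db_snapshot": "empty_db_2007-10-23",
    "memory_gb": 16,
    "fsb_mhz": 800,        # changes with the memory, as part of one controlled variable
    "san_vendor": "vendorA",
}

before_run = dict(baseline)  # re-captured before the next iteration

if environment_fingerprint(before_run) != environment_fingerprint(baseline):
    raise RuntimeError("Environment drifted from baseline; do not start the test")
```

Any difference at all, a stray software upgrade or a changed kernel setting, produces a different hash, which forces the drift to be explained before testing continues.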
To replicate the testing over a minimum of three samples per test, we simply need to return to the same state we were in before the prior test ran.  This is most easily achieved by a full system restart and a reload of any data elements to a pre-established baseline.  So if you started with an empty database or a pre-loaded one, revert to that empty database or snapshot before you run the second test, or create data during your test in such a way that it is easily identifiable and can be deleted afterwards.
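The restore-then-run discipline can be scripted so no sample ever starts from a dirty state.  A sketch, where `restore_snapshot` and `run_test` are hypothetical stand-ins for whatever your environment actually provides:

```python
def run_samples(restore_snapshot, run_test, samples=3):
    """Run at least three samples, restoring the baseline before every one."""
    if samples < 3:
        raise ValueError("a minimum of three samples per test")
    results = []
    for _ in range(samples):
        restore_snapshot()          # revert DB / system to the known, pre-test state
        results.append(run_test())  # only then execute the test iteration
    return results
```

The point of the wrapper is ordering: the restore always precedes the run, so a forgotten reset can never silently contaminate the next sample.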

If you can’t answer the question “What changed?” in ten seconds or less, the test will begin to lose credibility.  Whatever you do, never change your test approach between testing iterations.  This should go without saying, but it happens; whatever your test assets were when you started testing should stay the same until your testing is completed, including the testing infrastructure itself (upgrades of testing tools included; as I’ve stated before, the testing infrastructure can impact the test positively and negatively).  So adding an additional script, in effect changing the usage model, or re-allocating functional pathways in the usage model to get a better read on the change, SHOULD NOT HAPPEN!  If this becomes necessary, then the prior test iterations should be discarded and a new baseline established.  That means reversing the change under test back to the baseline configuration, running a new battery of tests incorporating the changes to the test approach, then re-introducing the change under test and proceeding to test it.  Simply put, this means re-work, so try to avoid it at all costs.  Sometimes this happens because of a certain bias towards the expected result; that bias is cited as a prevailing mistake in the application of the scientific method and in good test design.  See: Common Mistakes in Applying the Scientific Method.
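Answering "What changed?" in ten seconds is easy if each iteration's configuration is recorded as data, because then the answer is just a diff.  A minimal sketch, with made-up keys for illustration:

```python
def what_changed(old: dict, new: dict) -> dict:
    """Return {key: (old_value, new_value)} for every setting that differs."""
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k)) for k in keys if old.get(k) != new.get(k)}

# Example: only the SAN vendor differs between two iterations.
prev = {"san": "vendorA", "ram_gb": 16, "app_build": "2.3.1-b47"}
curr = {"san": "vendorB", "ram_gb": 16, "app_build": "2.3.1-b47"}
print(what_changed(prev, curr))  # {'san': ('vendorA', 'vendorB')}
```

An empty diff between what you intended to change and what actually changed is also a quick way to catch the accidental mid-series tweaks the paragraph above warns against.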

Always document the following:

Version numbers of all the software involved, all the way down to the minor release.

The build number and expected version of the application under test.

Everything that has the potential to grow during the test.

A snapshot of the system before testing begins.

Any potential conflicts in batch or job processing (avoid these conflicts unless they are required).

System configuration (memory, CPU, disk, network, etc.).
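The checklist above can double as a record structure, so that nothing gets documented by memory alone.  A sketch of one possible shape (the field names are my own, not a standard):

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """One record per test iteration, mirroring the documentation checklist."""
    software_versions: dict   # component -> full version, down to the minor release
    app_build: str            # build number / expected version of the app under test
    growth_items: list        # logs, tables, queues -- anything that grows during the test
    system_snapshot: str      # identifier of the snapshot taken before testing begins
    batch_conflicts: list     # overlapping batch/job processing noted, avoided unless required
    hardware: dict            # memory, CPU, disk, network, etc.
```

Because every field is required, a record simply cannot be created with one of the checklist items missing.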

