Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


From the Field: Performance Testing Infrastructure Tip #1 by Howard Clark

November 30th, 2007

Back up EVERYTHING!

Well, maybe not everything, but at the very least back up the repository for your results. Think "archive" strategy, because results will pile up all over the place if you've been following the recommendation to take multiple samples.  In addition, the application and hardware that serve as the hub of your testing infrastructure need to be fully administered.  More often than not, our performance testing infrastructure ends up being a few machines that have gone unused and are probably on their way to retirement.  As tragic as that may be, what is even more problematic is that these machines often carry that stigma while we perform our testing: disks go without defragmenting, no backup policy is in place, there isn't any redundancy, and the host OSes are not kept robust.  Some of this is outside our control, but other parts only require a simple fix.  A process scheduler and a decent imaging or backup program, both of which can be found in the base install of an OS or as freeware, are simple remedies.  It's important that we care for these assets like any other piece of corporate computing hardware in the datacenter.  Losing a test iteration is one thing, but trying to push support tickets through when timing is critical and the schedule impact is measured in days can really hurt if you need a whole new install from scratch.
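
To make that concrete, here is a minimal sketch of the "scheduler plus archive" remedy in Python. The results and archive paths are hypothetical placeholders for wherever your tool writes results and wherever your off-box storage lives; point the OS scheduler (cron, Windows Task Scheduler) at it and you have a backup policy with one moving part.

```python
# archive_results.py - a minimal sketch; the paths below are assumptions,
# not prescriptions. Run it from the OS scheduler (cron, Task Scheduler)
# so the results repository gets copied off the test machine on a rotation.
import datetime
import pathlib
import shutil

RESULTS_DIR = pathlib.Path("C:/LoadTest/Results")      # hypothetical results repository
ARCHIVE_DIR = pathlib.Path("//fileserver/qa/archive")  # hypothetical off-box share

def archive_results():
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    # One dated zip per run keeps multiple samples distinct instead of
    # overwriting each other as they pile up.
    target = ARCHIVE_DIR / f"results_{stamp}"
    shutil.make_archive(str(target), "zip", root_dir=str(RESULTS_DIR))

if __name__ == "__main__":
    archive_results()
```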

What is commonly referred to as the "Controller" in the performance testing software world is actually a good candidate for installation on a laptop.  Maxing out the available memory will mean paying a premium, or at least it used to, but it's well worth it.  The CPU in this configuration won't matter much until it comes time to crunch the numbers after the test, something you can offload to another machine at another time if need be.  This works best when the software's operation is decentralized, with your Controller serving only as the orchestrator of the test rather than the place where all the stats are compiled, logs are stored, or monitoring takes place.  Weigh the needs of your testing software appropriately.  But what you gain in inherent battery backup and portability can make up for lost time in the event of a failure.  Plus, and I'm whispering here, it could open the door for you to run tests remotely, if remote control of a desktop behind the firewall is not available to you.
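
To illustrate the offloading point: if your tool can export raw results, the post-test number crunching can happen on any machine, at any time, with no help from the Controller. A sketch, assuming a hypothetical CSV export with a response_time column in seconds (adjust the column name to match whatever your tool actually emits):

```python
# crunch_results.py - a sketch of offline results crunching; the CSV layout
# (a "response_time" column in seconds) is an assumption about your tool's export.
import csv
import statistics

def summarize(csv_path):
    with open(csv_path, newline="") as f:
        times = sorted(float(row["response_time"]) for row in csv.DictReader(f))
    # Rank-based 90th percentile; plenty accurate for the large sample
    # sizes a load test produces.
    p90 = times[max(0, int(len(times) * 0.9) - 1)]
    return {
        "count": len(times),
        "mean": statistics.mean(times),
        "median": statistics.median(times),
        "90th": p90,
        "max": times[-1],
    }

if __name__ == "__main__":
    for metric, value in summarize("raw_results.csv").items():
        print(f"{metric}: {value}")
```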

With the exception of the vast amounts of monitoring and the infrastructure that supports it, whether provided by a third party or out of the box, the agents and processes responsible for transaction generation are relatively light installs.  A transaction or load generator install may find itself living on many different machines and OSes during its lifetime, so less emphasis is placed there.  Where monitoring is concerned, I would recommend that the agent or monitoring configurations be saved off and/or fully documented in case they need to be restored.  Even within the reams of vendor documentation there are enough gaps and quirks with each install and SUT (system under test) to warrant recording how things were actually configured as far as probes and options.
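
A lightweight way to do that is to snapshot the config files alongside a short manifest of what was monitored and why. A sketch, where the config paths, probe list, and quirk notes are stand-ins for whatever your monitoring product actually uses:

```python
# save_monitoring_config.py - sketch of a config snapshot; the file paths and
# probe descriptions below are placeholders for your actual monitoring setup.
import datetime
import pathlib
import shutil

CONFIG_FILES = [                        # hypothetical agent/monitor config files
    "C:/MonitoringAgent/agent.conf",
    "C:/MonitoringAgent/probes.xml",
]
NOTES = """SUT: order-entry cluster (hypothetical example)
Probes: CPU/disk on app tier, JDBC pool depth, GC pauses
Quirks: agent must run as local admin or the disk counters return zeros
"""

def snapshot(dest_root="config_snapshots"):
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = pathlib.Path(dest_root) / stamp
    dest.mkdir(parents=True, exist_ok=True)
    for cfg in CONFIG_FILES:
        shutil.copy2(cfg, dest)                   # preserve timestamps for auditing
    (dest / "README.txt").write_text(NOTES)       # how and why things were configured

if __name__ == "__main__":
    snapshot()
```

One dated snapshot per test cycle means a rebuilt agent can be restored to exactly the configuration the results were gathered under, quirks and all.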

 
