Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


Before You Walk in The Door to Performance Test (Knowing the Costs: Open-Source aka “Free” vs. Commercial): Part Seven of an Ongoing Series by Howard Clark

January 21st, 2008 · 2 Comments · Uncategorized

Software and hardware are expensive, all the more so when you take a long, hard, unbiased look at how management views performance testing and software testing and/or QA in general.  Unfortunately, save for the most software-intensive organizations, that view isn't a good one.  Unless time is money (banks, brokerages, exchanges), lives are at risk (hospitals, pharmaceuticals, the military), or appearances need to be upheld (government, software shops, retail, or other large-volume web-based transaction generators), performance testing is the last thing on a QA manager's mind.  To those of you who vehemently deny this, I say "Great, feel free to contact me for work."  The thing is, my viewpoint will probably be confirmed.
So where does that leave the performance tester?  "We've got a decommissioned server running Windows NT with a whopping 1 gig of memory, a single core, and a 10 Mbit/sec network connection.  Don't forget the open-source package we found on Google to do our testing with; no one in the organization knows how to use it, much less support it.  Good luck!"
Now I'm a snob, a self-admitted one, so it comes as no surprise that I say walk away when these types of contracts or assignments come your way.  If there's one thing I understand, it's this: if an application performs well, the likelihood of bugs is low.  The reasoning is that if you take the fastest medium on the planet for conveying thoughts and processes and it runs like a Model T in a world of Bugattis and Ferraris, you have written some poor code and made some poor architectural decisions.  Throw in some poor business processes and/or functional touch points and the picture is painted.  I find that code that truly performs well, sans the GUI, when you're looking at it under a profiler, usually has fewer bugs.  Maybe it's a matter of the varying degrees of complexity of all the different apps I've seen and worked with, but it is what it is.  I'd go so far as to say the development IQs of teams that produce well-behaved, well-performing apps are higher.  Going back to the point above, it's hard to ruin an application's performance; a lot of things, however simple, have to go wrong.
So with the postulates behind us, what does that leave us with, and why would I advise you to just walk away?  More poor decisions usually follow, and the poorest are the software and hardware selections for performance testing that leave me scratching my head.  As Grandma used to say, "You always get what you pay for and what you put in."  That last part is a very important modification to the adage; it means that effort is also a factor, not price alone.  If an organization wants to go open-source, by all means go for it!  Don't forget to build an organizational capability to support, develop, and enhance the product.  Don't forget to train resources, acquire resources, and maintain resources to support this endeavor.  Please factor the viability, or lack thereof, of the package selected into your stated project risks.  Allow for extra ramp-up time, and afford people the time to build stop-gaps for stringing together these products and their artifacts once the task is done.  Know that "something for nothing leaves nothing."  Yes, yes, truly commoditized software apps do lend themselves to open-source quite readily.  But we're not talking about a commodity; we're talking about competitive advantage, the public face of the company, or internal enterprise-wide efficiencies.  We're talking about reducing the TCO (total cost of ownership) by recognizing resource-hungry applications and where they can be improved.  We're talking about the validation of the great virtualization movement underway, and about gaining "on the ground intelligence" on the adoption of a new-but-old platform for computing.
The costs are high all across the board.  If you want to address cost strictly from a pricing perspective, that's one thing, but an overall understanding of platform adoption, training, hardware, support, and ongoing maintenance needs to be addressed.  There is a reason companies exist and have revenue streams; it's more than their marketing, it's the trust, relationship building, and demonstrated performance that put them in the positions they are in.  Understanding that you only get what you put in is ultimately the key.  The cost is going to be there regardless; it doesn't matter where you shift it.  I prefer a one-time upfront cost where it's defined and relatively finite.  I'll buy the best package I can get, knowing that on the back end I may be able to save because I don't need a super-human resource to get the work done.  I've seen example after example close to home where some of the dumbest resources I've ever worked with can still be as effective as I am in certain parts of the job, because the tool is the great equalizer.  That doesn't hurt my pride one bit; if anything, I'm glad, because it furthers the adoption rate.  It lessens the individual burden of extensive knowledge transfer, because it's all captured by the tool in one form or another and left for the next resource to assimilate.
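
To make the "cost shifts, it doesn't disappear" point concrete, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it is a hypothetical placeholder rather than data from any real engagement; the shape of the calculation, not the numbers, is what matters.

    # Back-of-the-envelope TCO comparison (hypothetical figures, for illustration only).
    # The point is structural: the cost shows up somewhere, whether as a license fee
    # up front or as ramp-up, support, and maintenance spread over the project.

    def total_cost(license_fee, ramp_up_weeks, support_per_year,
                   maintenance_per_year, loaded_weekly_rate, years):
        """Sum the visible and shifted costs of a tool choice over its lifetime."""
        ramp_up = ramp_up_weeks * loaded_weekly_rate
        ongoing = (support_per_year + maintenance_per_year) * years
        return license_fee + ramp_up + ongoing

    # Commercial package: large upfront license, short ramp-up, vendor support contract.
    commercial = total_cost(license_fee=60_000, ramp_up_weeks=2,
                            support_per_year=10_000, maintenance_per_year=2_000,
                            loaded_weekly_rate=3_000, years=3)

    # Open-source package: no license, but longer ramp-up and an internal
    # competency to build, staff, and maintain (the "shifted" cost).
    open_source = total_cost(license_fee=0, ramp_up_weeks=10,
                             support_per_year=20_000, maintenance_per_year=8_000,
                             loaded_weekly_rate=3_000, years=3)

    print(f"Commercial 3-year cost:  ${commercial:,}")
    print(f"Open-source 3-year cost: ${open_source:,}")

Plug in your own estimates; with any honest set of inputs, the license line item quickly stops looking like the whole story.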
If, however, your organization commits itself to building that competency internally per the items above, or secures it externally (see Red Hat), by all means proceed.  Just know that as the performance tester you need to have some situational awareness around the expenditures that were made and the resultant expectations.  Don't be surprised if either the senior guy walks out and you're expected to step up and perform at the same level, or the senior guy walks out and everything is in disarray because he or she was the only one who invested in learning the new package and no one provides support save for a few forums online.
Knowing what the cost is means appreciating the entirety of the situation you are in; let’s hope the cost doesn’t leave you seeing RED and feeling NEGATIVE about the space.


2 Comments so far ↓

  • Bobby Washington

    Part Seven in this series was not only exciting but incredibly beneficial for me to read. Until this article I supported open-source software without truly counting the costs. Don't get me wrong, I still support and enjoy discovering, researching, and utilizing open-source software testing tools. However, I now have a holistic view when assessing the cost of utilizing open-source software testing tools. Previously, part of my argument for supporting open-source tools, like most, was the fact that they were free. I mean, come on, who wouldn't adopt a tool that could do the job effectively and was free? Well, the statement below was the key to my enlightenment with respect to open-source and counting the cost:

    “The cost is going to be there regardless; it doesn’t matter where you shift it”.

    This statement alone helped me realize that my support and view of open-source was myopic. I never counted some of the costs associated with the adoption of an open-source tool, such as ramp-up time for competency, and support or the lack thereof. Counting the cost is a delicate balancing act that demands a thorough understanding of both the potential and direct impacts that open-source as well as commercial adoption has on a project overall. I realize we must ensure this understanding is not only clearly communicated but also reiterated with as little ambiguity as possible. The word free is a powerful word in our English language. It compels people to action almost as primally as the very air you are breathing. You don't think about it, you don't make a conscious decision to do it, you just do it. Going forward I will make the conscious decision to realize that while adoption of an open-source tool may minimize upfront costs, I now know to ask the question "to where did the cost shift?" Adopting this thinking will position me to provide better information to my clients, which will in turn enable them to make better business decisions, a win-win.

  • Howard Clark

    I'm glad you found the post useful. All too often we are pushed to provide solutions in the near term without any regard for the long-term implications. In the enterprise there are a whole host of issues to take into consideration. You never want to invest time and effort into something that won't be anything more than a throw-away project unless it is clear that it will be just that. Otherwise, always err on the side of long-lasting, scalable, and configurable solutions that have a strong following and strong vendor support.

    There is a reason why the SAPs and Oracles and HPs of the world exist.

    Caveat Emptor even when it’s free!
