Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


Stress Testing: A New Definition? by Howard Clark

July 31st, 2007 · Uncategorized

The first thing we need to do is clear the air on terminology. A stress test is a type of performance test that is often thought to be synonymous with a load test. But making the distinction, and understanding how this type of test should be conducted, can help mitigate one of the biggest limitations in testing pre-production systems with the goal of predicting performance. We already know that nine times out of ten we are not going to have a production system at our disposal to test against. Historically, this has been one of the best excuses for poor performance testing efforts or an unwillingness to stand behind a predictive performance number. But looking at this as an opportunity instead of an obstacle can help us make more effective use of limited resources and actually achieve better test results more quickly.
So what you're saying is we can test better without a production system? Well, I wouldn't go so far as to say better; that may be an exaggeration. After all, what trumps production? But the idea of a single node of production might help come closer to meeting that requirement: a node expressed either in terms of the number of servers involved or as a clearly linear share of the computing power, with a minimal amount of variance in the rest of the environment's specifications.
What does that mean? It's a change in the scope of the questions you will be able to answer. A dual-processor database server is going to give you a number-crunching headache if you try to extrapolate the results out to a 64-way quad-core box. Good luck with that; although BMC has some interesting products that propose to do just that, I'll leave that investigation up to you. But what you can do is push that dual-processor box to its operational limits and see what happens to your application servers or web servers as they wait for a response from the DB.

The idea would be to come as close as possible to a representation of the planned or existing production infrastructure, minus some bits and pieces. Ideally, in a multi-server environment you would have comparable web, app, load balancing, and network configurations. It's a lot cheaper to obtain two production web servers and an application server with a load balancer than it is to get the accompanying database server being planned for, or already in, production. This is a good thing, because testing against that full-size database, with the volume of data required and the testing time involved, would mean test windows as large as the operational production day or days. But when you have a single node of your infrastructure with one or more components sized smaller than what is expected or in place, you are truly putting the environment under stress. Stressing something means not only inundating it with more transactions or users than it was designed to bear, but also stripping away some of its capacity for dealing with that stress.
By taking away rather than adding, performance under load can be observed with a "less is more" take on things. While this requires a shift in what can be predicted, it still has a great deal of value and serves as a workaround for testing infrastructure limitations. Logistically this should be easier to do, as production environment hardware purchases tend to be incremental, or at least they should be. Executed early enough in the deployment, this lets you create a series of performance snapshots at each step along the way, helping to justify the cost of additional hardware added either vertically or horizontally.
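To make the "push the undersized box to its limits" approach concrete, here is a minimal sketch of a stepped-concurrency driver. It is not tied to any particular load tool, and the target URL, step sizes, and request counts are placeholder assumptions you would replace with values from your own test plan.

```python
# Minimal stepped-load sketch. The endpoint, step sizes, and request counts
# below are illustrative assumptions, not values from any real test plan.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://test-env.example.com/app/health"  # hypothetical endpoint

def timed_request(_):
    """Issue one request and return its response time in seconds (None on error)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=30) as resp:
            resp.read()
        return time.perf_counter() - start
    except Exception:
        return None

def run_step(concurrency, requests_per_step=200):
    """Run one load step at a fixed concurrency and collect response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_request, range(requests_per_step)))
    ok = [r for r in results if r is not None]
    return ok, len(results) - len(ok)

if __name__ == "__main__":
    # Step the offered load up until the undersized component saturates, and
    # watch how the web/app tiers behave while they wait on the database.
    for concurrency in (5, 10, 20, 40, 80):
        ok, errors = run_step(concurrency)
        if ok:
            ok.sort()
            p95 = ok[int(len(ok) * 0.95) - 1]
            print(f"threads={concurrency:3d}  median={statistics.median(ok):.3f}s  "
                  f"p95={p95:.3f}s  errors={errors}")
        else:
            print(f"threads={concurrency:3d}  all requests failed ({errors} errors)")
```

What matters in the output is not the absolute numbers but the step at which the undersized component flattens out or errors begin to climb; that knee is the operational limit described above.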


A Case for Evolution: The Performance Tester by Howard Clark

June 26th, 2007 · Uncategorized

When it comes to software testing I have some very strong opinions about the background one should bring to the table. Not necessarily the educational background, as I've seen firsthand how non-technical degree holders can hold their own, but more of an emphasis on the experience they bring. I was asked the other day what makes performance testing seemingly difficult, or separate from the more traditional functional-based testing. What skill set is needed? What allows some to do it well and others not so well? While this is not intended to be prejudicial or discouraging, I'll defer to my favorite quote, "It is what it is!", made popular by a contestant on a cooking reality TV show (Top Chef) that I hate to admit I was addicted to, even though I can hardly make toast. [Read more →]


Why You Shouldn’t Performance Test Without Doing Code Coverage Analysis by Howard Clark

June 2nd, 2007 · Uncategorized

The business processes that need to be tested may be well understood and already decided on. But how do you know if those processes really help mitigate the risk to the application? Much like the issues with building the usage model in the absence of historical data (Building Usage Profiles Without Historical Data), there is the potential for gaps to present themselves. Obviously the risk grows in a custom application environment versus a COTS implementation, but that shouldn't be taken to mean you don't have to worry about performance when you go COTS. Vendors ship poorly performing code and/or poorly conceived sizing requirements all the time, so performance testing should be planned for in some capacity one way or another. This is especially important when the implementation involves remote data sources or hosted applications with remote access methods.

When the client has ownership of the code and/or the development process, code coverage analysis can take on a much more detailed and granular form, with actual walkthroughs and re-compilation into instrumented code being possible. In the case of a COTS implementation, the various logging levels inherent to the app and SQL database tracing can help fill in some of the blanks. What's important is that there is some type of effort around understanding how much of the application is being exercised. The concern for functional coverage doesn't begin and end with the functional test effort; as a performance tester you are not completely absolved from it. It's important that the story you tell is a complete and accurate one when the results are presented, so that all involved have a greater sense of confidence.

Understand, though, that the point of this coverage analysis exercise is not to cover code blindly but to ensure that the pain points in the application are actually covered by the business processes that have been selected for scripting. It is an effort to verify that what you think is going to cause a problem is actually being exercised. One especially useful tool out in the open-source community is EMMA for Java-based apps. Feel free to comment on your favorites!
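As a rough illustration of that cross-check, here is a minimal sketch that compares a coverage export against the classes you suspect will be pain points. The two-column CSV format, the class names, and the 50% threshold are all assumptions made for the example; they are not the native output of EMMA or any other specific tool.

```python
# Hypothetical cross-check: flag suspected pain-point classes that the scripted
# business processes barely exercise. The CSV format (class_name, coverage_pct)
# is an assumed export, not EMMA's native report format.
import csv

# Classes/modules you suspect will hurt under load (illustrative names only).
PAIN_POINTS = {
    "com.example.orders.OrderSearchDao",
    "com.example.reports.MonthEndAggregator",
    "com.example.session.CartSerializer",
}

def load_coverage(path):
    """Read {class_name: coverage_pct} from the assumed two-column CSV export."""
    coverage = {}
    with open(path, newline="") as handle:
        for row in csv.reader(handle):
            if len(row) >= 2:
                coverage[row[0].strip()] = float(row[1])
    return coverage

def audit(coverage, threshold=50.0):
    """Report whether each suspected pain point is actually being exercised."""
    for cls in sorted(PAIN_POINTS):
        pct = coverage.get(cls, 0.0)
        status = "OK" if pct >= threshold else "NOT EXERCISED ENOUGH"
        print(f"{cls:45s} {pct:5.1f}%  {status}")

if __name__ == "__main__":
    audit(load_coverage("coverage_export.csv"))
```

The point is the cross-check itself rather than the tooling: however the coverage data is collected (instrumented classes, application logging levels, SQL tracing), the suspected pain points should show up as genuinely exercised before you stand behind the results.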


Building Usage Profiles Without Historical Data by Howard Clark

May 3rd, 2007 · Uncategorized

Before I begin, let me share with you a bit of wisdom from a great thinker.

“It is the mark of an instructed mind to rest satisfied with the degree of precision to which the nature of an object admits, and not to seek exactness when only an approximation of the truth is possible.” – Aristotle

I know this quote by heart because it acts as a sort of mantra, helping me to ease my controlling and exacting tendencies. It stops me from getting lost in the low-level details instead of being a big-picture thinker. At the onset of a performance testing engagement, the ability to see the whole system and the broad array of usage and user profiles is critical. It is important to understand the client's business and discover any possible impacts to the performance criteria before you begin the exploratory phase of testing for capacity via load and stress testing scenarios. [Read more →]


Before You Walk in The Door to Performance Test (Knowing the Costs): Part Six of an Ongoing Series by Howard Clark

May 2nd, 2007 · Uncategorized

If I had known how much would be required just to perform my duties as a performance tester, I might have gone the other way. Working on a triple-digit-million-dollar government project held by a company whose parent was a multi-billion-dollar revenue earner was probably the best place I could have started this career path. But in some respects it may have been the worst, as my expectations were heightened from that point on. We had every piece of testing software you could want from two of the biggest application vendors in the space, and I would say "my cup runneth over" with hardware capacity for testing. Fortunately for me, the costs of not doing performance testing had become well known, and the perception issue was of the utmost importance to my employer. This only served to set me up for the fall later, as I would discover that for every one client who cares about perception and correctness there are three who do not.

I was wrapped up in a sort of delusion that this was the state of the industry: performance testing and automation testing were must-haves, with big budgets and time to build frameworks and labs and the like. Those public failures among the e-commerce giants and other web heavyweights had done all the ROI calculations for me. No one wanted to deploy a poorly performing application, no one wanted an app that couldn't scale; it would be disastrous, think apocalypse. [Read more →]
