Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


Performance Testing Wilderness: Hold Hands and Make a Ring Around the Campfire by Howard Clark

July 20th, 2008 · Uncategorized

Table of contents for Performance Wilderness

  1. Performance Wilderness: Look up on A Summer Night and See The Stars by Howard Clark
  2. Performance Wilderness: Setting Up Your Performance Testing Camps by Howard Clark
  3. Performance Testing Wilderness: Hold Hands and Make a Ring Around the Campfire by Howard Clark

(2 Factor Orthogonal Array Example) The ties that bind us together, our networks, connect the disparate dots on our application landscape and allow us to process transactions from around the globe.  Imagine a circle of people holding hands, moving in unison around the campfire, a perfect symphony of movement as everyone dances faster and faster until the circle becomes a blur.  It’s not until someone’s shoestrings come untied, followed by a stumble and a loud thud, that we realize we might have been standing too close to the open flames.  This is the type of analogy that comes from a mind warped by video games that add flamethrowers to your arsenal, games which I only play to discover software bugs and not as mindless entertainment for hours on end, by the way.

Back on subject: our application network is only as robust as its weakest link, an adage that has held true for thousands of years.  Unless confronted with hard performance requirements, this is another one of those areas that is easy to take for granted when you have only tested intranet applications that reside behind the corporate firewall.  In reality, that situation is even more precarious than testing an external-facing application whose end user is on the other side of the world wide web.  I say that because if you want a simple worst-case scenario for an external user, all you need to do is govern the bandwidth at which you conduct your test.  You can take the broadband penetration numbers and scale accordingly, using the lesser of the average connection speeds.  Right now Nielsen/NetRatings pegs broadband penetration at about 33% of American households, an abysmal number compared to other countries, which leaves an overwhelming majority of households at less than broadband, or 56kbps worst case.  So if you wanted to cover your bets, all of the external users in your test would run at 56kbps.  Obviously, all of this could be trumped by a hard technical requirement that states the minimum connectivity speed.

But when it comes to internal apps behind the firewall, we tend not to gather network statistics at all because we are on the corporate LAN.  That false sense of security can be shattered by one misconfigured router, an incorrect duplex setting on a NIC, bad cables, or poor network connection handling in the application.  Complicating this is the infrastructure behind the corporate firewall; on the other side of the firewall you have the possibility of caching servers and services (Akamai), and possibly local caching on the client side of the application puzzle.
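
To make the bandwidth-scaling math concrete, here is a minimal sketch, assuming a hypothetical population of 1,000 virtual users and the 33% broadband figure cited above, of how that penetration number might be turned into a virtual-user split; the pool size and speed buckets are illustrative only:

#include <stdio.h>

//Illustrative split of a virtual-user population by connection speed.
//The total and the percentage are assumptions for this example; substitute
//your own penetration data and governed speeds.
int main(void)
{
    const int    total_vusers    = 1000;  //hypothetical scenario size
    const double broadband_share = 0.33;  //Nielsen/NetRatings figure cited above

    int broadband_vusers = (int)(total_vusers * broadband_share);
    int dialup_vusers    = total_vusers - broadband_vusers;  //governed at 56kbps worst case

    printf("Broadband connections: %d vusers\n", broadband_vusers);
    printf("56kbps (worst case):   %d vusers\n", dialup_vusers);
    return 0;
}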

In a vacuum, a component test of a piece of business logic may produce results that prove acceptable. But when the result of that business logic’s execution ends up in a datagrid in the UI, waiting to be rendered because the data hasn’t made it across the wire in a timely fashion, you end up with a performance problem.  This means you absolutely have to take render time into account from an end-user perspective and create a 360-degree view of your application’s performance, one which absolutely should include the performance of your network or the internet.

This adds an additional factor to your testing array, where you can represent various types of network speeds: less-than-broadband, broadband, and enterprise levels of connectivity.  That in turn opens the door to adding WAN topology and modes of transport such as wireless, as in the sketch below.
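
As a rough sketch of that expanded testing array, the lines below enumerate every combination of two assumed factors, connection speed and mode of transport, the way a full-factorial matrix would; the factor levels are placeholders, and a true orthogonal array would trim this to a balanced subset of pairings:

#include <stdio.h>

//Illustrative 2-factor test matrix: connection speed x mode of transport.
//Factor levels are assumptions for this example, not recommendations.
int main(void)
{
    const char *speeds[]     = { "56kbps", "Broadband", "Enterprise" };
    const char *transports[] = { "LAN", "WAN", "Wireless" };
    int i, j, scenario = 1;

    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            printf("Scenario %d: %s over %s\n", scenario++, speeds[i], transports[j]);

    return 0;
}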

Don’t get burned.


A Framework for a Performance Testing Engagement by Howard Clark

June 20th, 2008 · Uncategorized

In helping various colleagues and companies manage their performance testing efforts, I’ve come across a reusable set of common activities that have helped me explain and divide the work necessary for a successful engagement.

The following activities:

  1. Engaging in Risk Identification
  2. Developing a Performance-Testing Strategy
  3. Applying the Performance-Testing Framework, which produces a Performance-Testing Plan

The stages of that framework:

  1. Assessment
  2. Modeling
  3. Execution
  4. Analysis


Code Snippets(HP LoadRunner): Decrypting Passwords and Using those Values in a web_submit statement by Howard Clark

May 7th, 2008 · Uncategorized

Table of contents for Code Snippets(HP LoadRunner)

  1. Code Snippets(HP LoadRunner): Randomly Selecting an Array of Values and Using That Value As A Parameter by Howard Clark
  2. Code Snippets(HP LoadRunner): Decrypting Passwords and Using those Values in a web_submit statement by Howard Clark
  3. Code Snippet(HP LoadRunner) : FILE I/O and creating dummy XMLs

Declare the placeholder variable either as a global in globals.h (available in multi-protocol scripts) or outside of the vuser_init function:

//placeholder variable for the decrypted value
char *Temp;

//retrieves the parameter value and decrypts it
Temp = lr_eval_string("{UserPwd}");
Temp = lr_decrypt(Temp);

//optional output of the decrypted parameter value for debugging
//lr_output_message("%s", Temp);

//creates a new parameter that can be used in a text string with parameter delimiters
lr_save_string(Temp, "decrypted_password");

For example:

web_submit_data(
    ...
    "Value={decrypted_password}", ENDITEM,
    ...
    LAST);

A trivial but useful code snippet for meeting security requirements when login credentials are passed outside the company in a hosted Performance Center solution, and/or where electronic submission or network storage is used.


A Side Note: They might be right by Howard Clark

May 4th, 2008 · Uncategorized

So here is the article:

‘Testers Are Idiots’
By Edward J. Correia

http://www.sdtimes.com/content/article.aspx?ArticleID=31789&AspxAutoDetectCookieSupport=1

I’ll go on record once again as saying that hiring anyone other than a software engineer to do software testing is pure and utter folly. If I wanted an over-glorified UAT, I would have my SMEs take a few weeks off and devote themselves to being “software testers.” Otherwise, if I’m validating a technical interpretation of high-level requirements, I would scour the company for developers who can communicate effectively. The gap between the developer and the business has left a window open for a hybrid type of employee who can bridge that gap. Unfortunately, companies have failed to value that skill set appropriately, in turn drawing in folks who can tie their shoes and tell you about it but have no idea of the physics behind the knot. One could argue that we don’t need to understand the quasi-physical process that makes the knot work, just tie it and go, right? Can you argue the same point about software?

Once again, another oversimplified analogy: is the office worker who takes a detailed tour of a building qualified to inspect the structural engineering behind it?

I hope your answer is no.


What stage of your SDLC are you testing for? Development, Test, or Maintenance? by Howard Clark

April 19th, 2008 · Uncategorized

    The Application is in Development

The performance-testing framework can be utilized early in the Development Phase by leveraging specific test types to address any particular areas of concern discovered during the assessment stage of the framework. At this phase, testing speaks to the application’s scalability and capacity through a profiling exercise, an area of critical importance when discussing the recommended overall performance-testing strategy. A closer look at the type of testing that should be performed during the Development Phase yields two observations: first, developer unit testing should be structured, organized, and allocated time; second, performance testing should be performed in conjunction with that unit testing. It is the combination of these two forms of testing that yields the highest degree of insulation from deployment risk at the onset. Typically, failing to utilize the performance-testing framework early on results in performance testing taking place at the end of the Development Phase or later. The idea of only performance testing an application that has already passed its functional testing requirements usually leaves little time to complete the performance testing required; and should the post-test analysis identify a bottleneck, or worse yet a poorly performing architecture overall, there will be little time to fix it. The test types most applicable at this stage are baseline, single-node, and component-level testing. This testing will provide the subjective and objective inputs needed for the root-cause analysis effort later on.
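
A baseline at this stage can be as simple as timing one business function with a single virtual user before any load is applied. The sketch below is a minimal LoadRunner-style illustration of that idea; the transaction name and URL are hypothetical placeholders, not taken from any real application:

Action()
{
    //Baseline sketch: time a single business function with one vuser.
    //"Login_Baseline" and the URL are placeholders.
    lr_start_transaction("Login_Baseline");

    web_url("login_page",
        "URL=http://yourapp.example.com/login",
        LAST);

    lr_end_transaction("Login_Baseline", LR_AUTO);

    return 0;
}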

    The Application is in Test

Once the application has entered formal functional testing, it should be complete enough to run baseline, load, stress, or volume tests against it. The models that represent the estimated usage of the system should be complete enough to allow for the creation of a battery of comprehensive real-world scenarios. These tests should provide information to level-set expectations for end-user response time and provide data for the top-down analysis effort. This can be followed up with component-level testing and the other test types native to profiling in order to investigate findings. This activity should be scheduled for multiple iterations.

    The Application is in Maintenance

At this point in the application’s lifecycle, any performance-testing effort should be a tuning exercise that depends quite heavily on the tests used when adopting the profiling methodology. We should already have a general idea of which areas are performing poorly and can begin component-level testing to investigate.
