Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


From the Field: I Don’t Know Where I’m Going, Or How I Got Here, But I Know Where I Want To Be. by Howard Clark

April 19th, 2008 · Uncategorized

Table of contents for From the Field

  1. From the Field: I Don’t Know Where I’m Going, Or How I Got Here, But I Know Where I Want To Be. by Howard Clark
  2. From the Field: Performance Testing Infrastructure Tip #2 by Howard Clark
  3. From the Field: Performance Testing Infrastructure Tip #1 by Howard Clark
  4. From the Field: LoadRunner and Citrix

For the first time, I got performance criteria from a client that made sense in the absence of existing usage data. It amazes me that this is the first time it has happened in almost six years in the field. We all know by now that modeling projected usage is an exercise in arbitrary estimates and guesses. While I've given you a few ways around this in the past (see Building Usage Profiles Without Historical Data), the data point I like the most is the internal company performance objective. This kind of number resonates across the entire business and bridges the gap between the application under test and ROI. Unlike hardware-based statistics, or even response times outside of a true business context, these performance goals speak to where the company would like to see itself: a model of efficiency firing on all cylinders, devoid of waste or time spent doing anything other than working.

Those monthly, quarterly, and yearly goals set by the business and refined down to the departmental level are a key source of application performance criteria, if you can map them appropriately. This data may not even make the rounds in the normal channels, as your business analysts may be trying to drill down to a specific response time measured in seconds while they glean think times from empirical data. That more often than not leads to a lot of effort spent building what, at best, could be called educated guesses. Assuming a company doesn't set goals aimed at underachievement, we can also assume those goals provide a reasonable cushion between the current state of operations and some level above it. This helps rein in our testing so we don't over-stress the system or end up with an unrealistic test. Just to be clear, this is framed as the usage-model-gathering activity for a load or volume test, as opposed to a stress test or a component-level/single-node test.

“The claims department will process 10,000 claims per month with an expected staffing level consistent with the numbers coming out of fourth-quarter 2007.”

We have a five-day work week with two eight-hour shifts and 20 working days in a given month. At 500 potential claims a day over a 14-hour work day (the shifts overlap), that leaves us with roughly 36 claims per hour. Assuming the distribution is a bell-shaped curve, with the majority of claims processing, say 50%, occurring over two hours of that shift, we end up with 250 claims over a two-hour window. At a rate of 125 per hour over that window and a staff of five, we have 25 claims per user per hour, which is a processing rate of one claim every 2.4 minutes (144 seconds). From our standalone baseline we know that, sans think time, it takes 45 seconds of request/response time to walk through the most efficient happy path to get a claim processed. That leaves 99 seconds of think time to distribute throughout the script. I do not advocate applying think time at any level more granular than an even distribution between measured transactions across the entire business process. So, if it takes eleven steps to process a claim as defined by the business analysts and the user community, that makes for a 9-second think time per transaction within the business process.
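
To make the arithmetic explicit, here is the same calculation written out as a small, standalone C program. It is not part of any script from the post; the constants are simply the numbers used in the example above, so you can swap in your own business goal and baseline.

#include <stdio.h>

int main(void)
{
    double claims_per_month  = 10000.0;  /* the stated business goal                    */
    double working_days      = 20.0;     /* five-day weeks, roughly four weeks a month  */
    double peak_share        = 0.50;     /* half the daily volume lands in the peak...  */
    double peak_window_hours = 2.0;      /* ...which is a two-hour window               */
    double staff             = 5.0;      /* concurrent claims processors                */
    double baseline_seconds  = 45.0;     /* request/response time with no think time    */
    double steps_per_claim   = 11.0;     /* transactions in the business process        */

    double claims_per_day     = claims_per_month / working_days;                     /* 500 */
    double peak_claims_per_hr = (claims_per_day * peak_share) / peak_window_hours;   /* 125 */
    double claims_per_user_hr = peak_claims_per_hr / staff;                          /* 25  */
    double seconds_per_claim  = 3600.0 / claims_per_user_hr;                         /* 144 */
    double think_per_step     = (seconds_per_claim - baseline_seconds) / steps_per_claim; /* 9 */

    printf("pacing: one claim every %.1f seconds\n", seconds_per_claim);
    printf("think time per transaction: %.1f seconds\n", think_per_step);
    return 0;
}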

Simple math, when you have an intelligent data point to start with. Try going on a dig for this type of data when you don't have much to start with; if your client is concerned about performance testing, they more than likely have stated performance goals for the business. Even if it's only a revenue goal, decomposing how the client generates revenue can still lead you down to where you need to be. Top-down thinking is a must when working with abstractions.


From the Field: Performance Testing Infrastructure Tip #2 by Howard Clark

March 29th, 2008 · Uncategorized

How do we go from debugging a "parameter not found" error to updating our test software by applying a patch? Well, sometimes you have to go back to the basics. Sometimes you have to ask the "stupid" questions (of which there are none, in my opinion), put your ego back in its place, and look at things from the ground up. If you've ever worked the help desk handling day-to-day user issues and complaints, the script is a familiar one. "Hello, yes, my printer won't print." Are the drivers installed? "Yes, the drivers are installed; I used the CD that came with the printer." Good, is Windows showing any errors? "No, the job goes to the printer status box." As you start going through the script to resolve the problem, the first and seemingly insulting question is: is the printer powered on? See, this user appears to be fairly savvy; he or she has demonstrated some knowledge of how to troubleshoot. What this user has taken for granted is that the lightning strike last night blew out the outlet for the printer. What you've taken for granted is to start with the basics. The simplest, most fundamental questions need to be asked and answered as a matter of best practice. It can save you a lot of head pounding and teeth gnashing over issues that end with you saying, "I should have checked that first, how (beep beep beep) of me!"

The other benefit of not taking things for granted is that you will be forced to increase your understanding of the way things work, and to practice forming logical workflows in your mind. To get back to our issue, it's one that in the past wasn't likely to occur, because the typical engagement requires me to build and deploy the hardware assets, define the network topology, install and configure the software, and do a dry run to assess the capacity of the PATI (performance and automation testing infrastructure). But as companies outsource their internal IT operations and split infrastructure and deployment tasks between groups, more cooks end up in the kitchen. Now, if I'm in a kitchen full of chefs and I have to hand off a piece of the recipe for someone else to execute, I fully expect that person to execute it accurately. If the recipe calls for basil, I'm not going to check to see if it's parsley. That attitude will serve as the root cause of my problem later. Once again, take nothing for granted; as a matter of policy, things should be verified even when experienced and capable people or tools are involved.

So test execution fires off, and of the four load injectors in place, only one is throwing an error. It is only because of this single instance that I immediately move toward verifying what is different on this machine, and after checking a lot of test-specific items and finding no differences, I force myself to start examining everything about the machine from the hardware up. Maybe it's some bad blocks on the disk, maybe file corruption, maybe network performance, maybe it's the OS, maybe it's the software mix on that box, or maybe, EUREKA! The testing software itself: maybe my fellow chef failed to install the latest updates and service packs.

What does that have to do with a "parameter not found" issue? I have no idea and don't want to know; it seems like a silly error and shakes my faith in the product, so I haven't delved into it. What I did come away with was an appreciation for starting from the bottom up with the simple questions first. Go back to the basics of troubleshooting and work your way up to the specific circumstances. Take a methodical approach instead of so much free-form thinking, lean on the fundamentals, and ask the "stupid" questions. Sometimes they reveal the best answers.


Code Snippets (HP LoadRunner): Randomly Selecting an Array of Values and Using That Value As A Parameter by Howard Clark

February 6th, 2008 · Uncategorized

Table of contents for Code Snippets (HP LoadRunner)

  1. Code Snippets (HP LoadRunner): Randomly Selecting an Array of Values and Using That Value As A Parameter by Howard Clark
  2. Code Snippets (HP LoadRunner): Decrypting Passwords and Using those Values in a web_submit statement by Howard Clark
  3. Code Snippet (HP LoadRunner): FILE I/O and creating dummy XMLs

int TotalNumberOfBUIDs;            // number of business unit IDs returned by the search
char TotalNumberOfBUIDschar[12];   // working variable -- holds the random index as text
char *AvailableBUIDsparam;         // working variable -- holds the intermediate parameter string

// Capture every business unit ID in the search results into an ordinal parameter
// array: {AvailableBUIDs_1} .. {AvailableBUIDs_n}, plus {AvailableBUIDs_count}.
web_reg_save_param("AvailableBUIDs",
    "LB/IC=class='PSSRCHRESULTSODDROW' >",
    "RB/IC=",
    "Ord=All",
    "Search=Body",
    "RelFrameId=1",
    "Notfound=error",
    LAST);

// ... the search request that this capture applies to goes here ...

// Pick a random 1-based index into the captured array.
// (Assumes rand() was seeded earlier, e.g. srand(time(NULL)) in vuser_init.)
TotalNumberOfBUIDs = atoi(lr_eval_string("{AvailableBUIDs_count}"));
TotalNumberOfBUIDs = rand() % TotalNumberOfBUIDs + 1;   // +1 because ordinal parameters start at 1, not 0
lr_output_message("Selected BUID index: %d", TotalNumberOfBUIDs);
itoa(TotalNumberOfBUIDs, TotalNumberOfBUIDschar, 10);   // working variable conversion to text

// Build "{AvailableBUIDs_<index>}" and evaluate it again to get the actual captured value.
lr_save_string(TotalNumberOfBUIDschar, "BUIDindex");
AvailableBUIDsparam = lr_eval_string("{AvailableBUIDs_{BUIDindex}}");
lr_save_string(AvailableBUIDsparam, "BUIDs");
lr_save_string(lr_eval_string(lr_eval_string("{BUIDs}")), "BUID");

// Application of the value that was captured and then randomized -- this line belongs
// in the ITEMDATA section of the web_submit_data call that submits the form:
"Name=VCHR_ERRC_WRK_BUSINESS_UNIT", "Value={BUID}", ENDITEM,

The AUT is PeopleSoft Financials 9.0; the script uses the HTTP/HTML protocol and is implemented in C.
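
For context, here is a minimal sketch of how that final ITEMDATA line might sit inside a web_submit_data call. The step name, Action URL, and other attributes below are placeholders for illustration, not values from the actual script:

web_submit_data("ProcessVoucher",                        // placeholder step name
    "Action=https://finserver.example.com/psc/ps/EMPLOYEE/ERP/c/PROCESS_VOUCHER.GBL",  // placeholder URL
    "Method=POST",
    "Mode=HTML",
    ITEMDATA,
    "Name=VCHR_ERRC_WRK_BUSINESS_UNIT", "Value={BUID}", ENDITEM,   // the randomized business unit
    LAST);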

The idea is to take a list of values retrieved as a set of search results and randomly select one of them. Depending on your needs, there would also be a check as to whether that value is blank. I typically implement a "no results found" check in LoadRunner's content check rules to keep the scripts maintainable, versus a bunch of exception handling; that's a virtue of working with text streams and LR's built-in text parsing capabilities. A scripted version of that check might look like the sketch below.
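
If you would rather script the check than configure it in the content check rules, a minimal sketch follows. The exact "no results" message text is an assumption and varies by application, so substitute whatever your search page actually returns:

// Register before the search request; the step fails if the "no results" text appears.
// "No matching values were found" is a guess at the message text, not the confirmed
// PeopleSoft string -- adjust it to match your application.
web_reg_find("Text=No matching values were found",
    "Fail=Found",
    LAST);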


The Red Pill or The Blue Pill?: Is It Even A Valid Question, I’ll take the Red Pill Please! by Howard Clark

February 4th, 2008 · Uncategorized

Table of contents for The Red Pill or The Blue Pill?

  1. The Red Pill or The Blue Pill?: Is It Even A Valid Question, I’ll take the Red Pill Please! by Howard Clark

This is a reference to the very popular series of movies that went by the name of "The Matrix" and its various sequels. The answer to this question serves as an inflection point for the main character, who is given an opportunity to carry on in life as he had before, or to embrace an unknown reality. This same decision is on my mind as I examine performance and automation testing infrastructure (PATI), storage concerns, and infrastructure management. I'm wondering how I might consolidate my existing hardware resources while growing them going forward in a seamless and expeditious fashion. I'm examining how long it takes to provision a desktop or server, and asking why I'm honestly compelled to just go and buy everything myself and expense it later, assuming the risk of a capital expenditure. I'm assessing hardware platforms and OSes to arrive at a conclusion on which one yields the most bang for the buck. I'm walking down the laundry list of questions that go with the decision to virtualize: the decision to take the red pill and change the way I implement PATI from now on.

Given my bias towards the win32/win64 platform, I'm going to go with VMware, whose stock has served me well and laid the foundation for this adventure. I'll either be selling my shares or pumping the stock on a message board in a few weeks as this plays out. But the first order of business is to actually see virtualization in action by getting Ubuntu loaded up from a pre-built image via VMware Player and going from there. I've been intrigued by this OS despite not seeing any enterprise-level application support for it, so this is more of a personal experiment. To provide something applicable to our craft, I'll also go with a Red Hat image to begin comparing load generator performance between a Red Hat Linux Advanced Server 3.0 distro and a comparable Microsoft OS offering. The hope is that this comparison will be relatively simple to execute through the wonders of a virtualized desktop. But first we've got to get through the installations.

Here’s to the spirit of adventure, and hoping I don’t choke on a poison pill of my own creation.

Cheers!


Software Gems: Ethereal is anything but… by Howard Clark

January 30th, 2008 · Uncategorized

One definition of the word "ethereal" is "lacking material substance"; the Ethereal (www.ethereal.com) network protocol analyzer, however, is a true masterpiece of the open source, freeware movement. Why it isn't a commercial product is beyond me; maybe I'll never understand the whole "free" software movement. To be honest, I don't want to, and I refuse to change my overall stance on using it in the enterprise. But I will by all means use it in a non-critical capacity when I can. It has a clean UI, decent help, and gets the job done. While it's fairly rudimentary in function, it still has great form. Download it and have some fun; I've been running it at home and on the job for a few weeks and have found it to be stable and quite capable.

Couple it with your favorite diff tool (Altova's DiffDog) and you've got packet-level verification of your tests, which, if you've performance tested enough, will prove invaluable. Fail to do so from time to time and you may fall victim to the notorious "false positive."
