From the Field: I Don’t Know Where I’m Going, Or How I Got Here, But I Know Where I Want To Be. by Howard Clark
For the first time I got performance criteria from a client that made sense in the vacuum of existing usage data. It amazes me that this is the first time it has happened in almost six years in the field. We all know by now that modeling projected usage is an exercise in arbitrary estimates and guesses. While I’ve given you a few ways around this in the past (Building Usage Profiles Without Historical Data), the data point I like the most is the internal company performance objective. This kind of number resonates across the entire business and bridges the gap between the application under test and ROI. Unlike hardware-based statistics, or even response times outside of a true business context, these performance goals speak to where the company would like to see itself: a model of efficiency firing on all cylinders, devoid of waste or time spent doing anything other than working.

Those monthly, quarterly, and yearly goals set by the business, then refined down to the departmental level, are a key source of application performance criteria if you can map them appropriately. This data may not even make the rounds in the normal channels, since your business analysts may be attempting to drill down to a specific response time measured in seconds as they glean think times from empirical data. More often than not, that leads to a lot of effort spent building what could, at best, be called educated guesses. Assuming a company doesn’t set goals aimed at underachievement, we can also assume those goals provide a reasonable cushion between the current state of operations and some level above it. This helps rein in our testing so we don’t over-stress the system or end up with an unrealistic test. Just to be clear, I’m framing this as the usage-model-gathering activity for a load or volume test, versus a stress test or a component-level/single-node test.
“The claims department will process 10,000 claims per month with an expected staffing level consistent with the numbers coming out of fourth-quarter 2007.”
We have a five-day work week with two eight-hour shifts, for 20 working days in a given month. That’s 500 potential claims a day; spread over the 14-hour work day created by the overlapping shifts, that leaves us with roughly 36 claims per hour. Assuming the distribution is a bell-shaped curve, with the majority of claims processing (say 50%) occurring over two hours of that shift, we end up with 250 claims in a two-hour window. At a rate of 125 per hour over that window and a staff of 5, we have 25 claims per user per hour, a processing rate of one claim every 2.4 minutes, or 144 seconds. From our standalone baseline we know that, sans think time, it takes 45 seconds of request/response time to walk through the most efficient happy path to get a claim processed. That leaves a budget of 99 seconds of think time to distribute throughout the script. I do not advocate trying to apply think time at any level more granular than an even distribution between measured transactions across the entire business process. So, if it takes eleven steps to process a claim as defined by the business analysts and user community, that makes for a 9-second think time per transaction within the business process.
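The arithmetic above can be sketched as a short script; a minimal sketch, using the numbers from the worked example (the variable names are mine, not from any tool):

```python
# Derive per-step think time from a business-level throughput goal.
# All inputs come from the worked example above; nothing here is measured code.

claims_per_month = 10_000          # stated business goal
working_days = 20                  # five-day weeks, four weeks a month
claims_per_day = claims_per_month / working_days           # 500

workday_hours = 14                 # two overlapping eight-hour shifts
avg_claims_per_hour = claims_per_day / workday_hours       # ~36

# Peak window: 50% of the day's claims land in a two-hour stretch.
peak_claims = claims_per_day * 0.50                        # 250
peak_claims_per_hour = peak_claims / 2                     # 125

staff = 5
claims_per_user_hour = peak_claims_per_hour / staff        # 25
pacing_seconds = 3600 / claims_per_user_hour               # 144 s = 2.4 min/claim

baseline_seconds = 45              # measured request/response time, no think time
think_budget = pacing_seconds - baseline_seconds           # 99 s to distribute

steps = 11                         # business-process steps per claim
think_per_step = think_budget / steps

print(f"Think time per transaction: {think_per_step:.0f} s")  # 9 s
```

The point of scripting it is less the trivial math and more that every input is now a named, defensible business number you can revisit when the goal or staffing assumption changes.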
Simple math when you have an intelligent data point to start with. Try going on a dig for this type of data even when you don’t have much to start with; if your client is concerned about performance testing, they more than likely have stated performance goals for the business. Even if it’s only a revenue goal, decomposing how the client generates revenue can still lead you down to where you need to be. Top-down thinking is a must when working with abstractions.