Testing is Rocket Science Not Brain Surgery

Almost all of the World's Greatest Accomplishments Were The Result of Great Planning!


A Side Note: The failure to test consistently is inexcusable by Howard Clark

October 23rd, 2007 · Uncategorized

Over time everything becomes clear, and if there is one thing I am grateful for, it is the inclusion of science in my academic background.  That classical training and rigor seems so second nature to me that I normally just chalk it up to common sense, something sadly missing in a lot of the testing that is done these days.  Much like developers who on their best day are just a bunch of hacks, there is an equivalent in the testing arena.  But in performance testing there can be no leeway for haphazard, undocumented, ad-hoc, off-the-cuff, press-start-without-knowing-where-you-finish testing.

The reason I say this is to point out a clear difference between something that is functionally deficient and something that doesn't perform well.  Both are equally weighted; anyone who argues that correctness matters more in one than the other is dead wrong, since the two are complementary after all.  The difference lies in the way people respond to a functional failure versus a failure in a system's performance, because of the way these errors manifest themselves.  It's a play on the psychology of the user: an error message associated with a functional defect gives the user something to go on.  At some level the user says, okay, I can send this to the developers and they can fix it.  A user can become accustomed to a certain error rate and develop workarounds until the issue is mitigated.  But when an application fails due to a lack of capacity, the user tends to get a very foreign response, or nothing at all, resulting in the under-the-breath cursing that greets anything that appears to not work at all.
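To make the distinction concrete, here is a minimal, hypothetical sketch of how a test harness might triage failed requests along the lines the post describes: a concrete error the user can report versus the silence or strangeness of a system out of capacity.  The function name and thresholds are illustrative assumptions, not from any real tool.

```python
# Hypothetical triage helper: classify a failed request as a functional
# failure (a reportable error) or a capacity failure (timeout / nothing).

def classify_failure(status_code, body, timed_out):
    """Label a failed request so testers can triage it appropriately."""
    if timed_out or status_code is None:
        # No response at all: typical of a system that has run out of capacity.
        return "capacity"
    if status_code >= 500 and not body:
        # The server gave up without even an error page: also looks like capacity.
        return "capacity"
    if status_code >= 400:
        # A concrete error the user can forward to the developers.
        return "functional"
    return "ok"

print(classify_failure(None, "", True))           # capacity
print(classify_failure(404, "Not Found", False))  # functional
```

A real harness would of course look at timings and payloads in more detail; the point is only that the two failure modes deserve different handling.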


A Side Note: Why I dislike XY Scatter Charts so much! by Howard Clark

October 3rd, 2007 · Uncategorized

Honestly, there isn't anything wrong with the Scatter Chart in and of itself.  It just wants a little respect compared to its cousin the Line Chart.  I'm making the Line Chart a cousin, maybe even a distant one, to drive home the point that while they are both in the same family, they have completely different operating parameters.  What happens is that Scatter Charts are misunderstood and misapplied.  I was going to go on a long diatribe about what's what, and then I decided to lean on my friends at Microsoft, since for most of us Excel is the tool of choice for data visualization.  Hundreds of data visualization packages and we're doing imports into Excel; you've got to love the power of the swamis at Microsoft, oh what a spell they weave.  In any case, here is the link.

 Creating XY (Scatter) and Line charts

My final thought is this: if you are looking at data points over a very linear measure like, oh, TIME, use a Line Chart, unless you have a thing for lots of little dots.
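That rule of thumb can be captured in a few lines.  This is purely an illustrative helper (not from Excel or any charting library): if the x-axis values form an ordered measure such as elapsed time, a line chart fits; if the pairs have no inherent ordering, an XY scatter does.

```python
# Illustrative rule of thumb: ordered x-axis measures (like time) suit a
# line chart; unordered paired observations suit an XY scatter.

def recommend_chart(x_values):
    """Return 'line' for an ordered measure, 'scatter' otherwise."""
    ordered = all(a <= b for a, b in zip(x_values, x_values[1:]))
    return "line" if ordered else "scatter"

print(recommend_chart([0, 5, 10, 15]))   # timestamps -> line
print(recommend_chart([3.2, 1.1, 4.8]))  # unordered pairs -> scatter
```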


Performance Wilderness: Look up on A Summer Night and See The Stars by Howard Clark

September 10th, 2007 · Uncategorized

Table of contents for Performance Wilderness

  1. Performance Wilderness: Look up on A Summer Night and See The Stars by Howard Clark
  2. Performance Wilderness: Setting Up Your Performance Testing Camps by Howard Clark
  3. Performance Testing Wilderness: Hold Hands and Make a Ring Around the Campfire by Howard Clark

Away from the city lights and the smoggy haze of the freeways, there is a sky full of millions of stars visible to the naked eye.  The stars that fill the sky have the distinction of being brighter than the millions of other brother and sister stars they share it with.  Among these over-achievers are a few that shine even brighter, usually found in our constellations because of their brilliance.  This is analogous to all of the possibilities in a newly minted system you are being asked to test.  Like those stars in the sky, some of those possibilities will show themselves to be more brilliant, or more critical to the success of the application.  For me these mission-critical business functions just pop out: things that involve integration touch points to external systems, queries that work against large quantities of data (or what will become large data sets over time), queries involving extensive filtering, areas that require a lot of logical branching to create or process requests and responses, and so on.  Then compound this with the needs of the presentation layer: streaming, AJAX, dynamic content presentation where what you see is contingent on business rule processing, and data grids.


A Side Note: Get Ahead of the Testing Infrastructure Build Out by Howard Clark

August 13th, 2007 · Uncategorized

Know what it is you're going to test, and how you're going to test it, far in advance of building out your testing capacity and before you've exhausted it.  It shouldn't take a system collapse to justify more funding for your testing infrastructure.  Instead, we have to learn to build the cost justifications and speak more convincingly about the ramifications of failing to expend the effort in the first place.  We should all know the graph showing that costs increase as defects are discovered later and later in the deployment process.  So start wrapping that case up with past experiences and the numbers to support it.  Enable the organization with hard facts about what it means to deliver poor-performing products that don't meet the functional requirements.  Don't just pay lip service to the idea; make the case for it and we'll all be better off.
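A back-of-the-envelope version of that familiar defect-cost curve can help when building the case.  The phase multipliers below are illustrative assumptions for the sketch, not industry data; plug in numbers from your own past projects.

```python
# Toy defect-cost model: the later a defect is found, the more it costs
# to fix. Multipliers are illustrative placeholders, not measured data.

PHASE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost, phase):
    """Estimated cost to fix a defect discovered in the given phase."""
    return base_cost * PHASE_MULTIPLIER[phase]

print(fix_cost(100, "testing"))     # 2000
print(fix_cost(100, "production"))  # 10000
```

Even with rough multipliers, putting dollar figures next to the phases makes the funding argument far harder to wave away.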


Performance Wilderness: Setting Up Your Performance Testing Camps by Howard Clark

August 13th, 2007 · Uncategorized

Table of contents for Performance Wilderness

  1. Performance Wilderness: Look up on A Summer Night and See The Stars by Howard Clark
  2. Performance Wilderness: Setting Up Your Performance Testing Camps by Howard Clark
  3. Performance Testing Wilderness: Hold Hands and Make a Ring Around the Campfire by Howard Clark

In the first camp we have test efforts intended to be highly repeatable, obviously repetitive, and biased towards being regressive.  The performance testing infrastructure should be partitioned in such a way that these efforts are not compromised by the testing being conducted in the other camps.  These testing efforts should be highly predictable as far as scheduling goes, and each should be preceded by a thorough planning session in which the schedule takes testing concerns and code deployments into account from an enterprise point of view, over a long-term horizon.  This camp should be entirely devoid of the "mad scramble" for testing resources and free of the missteps that end up requiring follow-on or contingent test executions.  An effort to right-size this environment is especially important, as these resources may not be utilized regularly and may suffer from the inclination to re-allocate and re-deploy them.  The idea of this camp in the testing infrastructure is that it has a known capacity and remains allocated for this purpose, and only this purpose.
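The "known capacity, dedicated purpose" rule can be modeled as a tiny guard in an environment registry.  This is a hypothetical sketch; the class and names are invented for illustration, not drawn from any real provisioning tool.

```python
# Toy model of the first camp's rule: a test environment with a known,
# fixed capacity that refuses to be re-deployed for any other purpose.

class TestEnvironment:
    def __init__(self, name, capacity_vusers, purpose):
        self.name = name
        self.capacity_vusers = capacity_vusers  # known, fixed capacity
        self.purpose = purpose                  # e.g. "regression"

    def allocate(self, requested_purpose):
        """Grant capacity only for the environment's dedicated purpose."""
        if requested_purpose != self.purpose:
            raise ValueError(
                f"{self.name} is reserved for {self.purpose} testing")
        return self.capacity_vusers

env = TestEnvironment("perf-regression-01", 500, "regression")
print(env.allocate("regression"))  # 500
```

Encoding the reservation as a hard check, rather than a convention, is one way to resist the inclination to quietly re-allocate those resources.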
