When it comes to software testing I have some very strong opinions about the background one should bring to the table. Not necessarily the educational background, as I’ve seen firsthand how non-technical degree holders can hold their own, but more of an emphasis on the experience they bring. I was asked the other day what makes performance testing seemingly difficult, or separate from the more traditional functional testing: what skill set is needed, and what allows some to do it well and others not so well? While this is not intended to be prejudicial or discouraging, I’ll defer to my favorite quote, “It is what it is!”, made popular by a contestant on a cooking reality TV show (Top Chef) I hate to admit I was addicted to, even though I can hardly make toast.
The key skill, or one of them, is an inherent awareness of technology and the willingness to embrace it as both an obstacle and a tool. The very tools and platforms you trust can be your enemy, and yet without them the whole idea of performing a scalability test with thousands of users is unthinkable. Being able to navigate across multiple OS, application, storage, language, and networking stacks is essential. Moving from high-level, conceptual ideas about how things should look down to the low-level details is quite the exercise, and it is known for its difficulty. Seeing the tech in the equation without being blinded by it, or biased by it for that matter, is the ticket. You know, sometimes a stop-watch can trump a multi-hundred-thousand-dollar solution.
Another highly prized skill is situational awareness: being able to get a sense of where you fit into the mix on a project and how the activity is perceived. You’ve got to get out, touch base with your clients, and keep the channels of communication open so that everyone is comfortable. Not an easy task when your work artifacts have the potential to sink a technological direction or the vision of high-ranking techies.
Being able to spot trends and relationships between data sets is critical to delivering the expected analysis. Visual types will do well in this department. Aside from having an idea of which data sets should relate, sometimes you have to lay all the data out and just give it a good look (scatter plots help here, as they can visually depict where positive correlations may live, but apply them only when that is what you are after; otherwise stick with a line or pie chart for everything else). You’ll want to start with a wide swath of data points rather than the narrow few defined by best practices, for fear of omissions. Vendors like Microsoft have thrown in everything including the kitchen sink when it comes to possible measurements, so obviously you have to whittle that down, but do so with an open mind and do it slowly, since each application environment tends to have different concerns and hidden land mines. The same applies to the functionality you actually test; see Building Usage Profiles Without Historical Data.
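To make that concrete, here is a minimal sketch (my own illustration, not something from any particular tool) of laying two captured metric series side by side and checking whether they move together. The CSV file name and column names are hypothetical placeholders for whatever counters you actually collected.

```python
# Hypothetical example: eyeball a possible correlation between load and
# response time before trusting (or buying) anything fancier.
import pandas as pd
import matplotlib.pyplot as plt

# Start wide: pull in every counter you captured, then narrow down later.
metrics = pd.read_csv("test_run_metrics.csv")  # hypothetical export file

x = metrics["virtual_users"]      # hypothetical column names
y = metrics["avg_response_ms"]

# A scatter plot makes a positive correlation (if any) easy to spot.
plt.scatter(x, y)
plt.xlabel("Concurrent virtual users")
plt.ylabel("Average response time (ms)")
plt.title("Response time vs. load")
plt.show()

# Pearson's r as a quick numeric sanity check on what the plot suggests.
print("correlation:", x.corr(y))
```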
The evolution I spoke of is a by-product of your experience: true, some will start in the field, others will be cast into it by necessity, and others will just fall into it. But the best route is the one less traveled, and it seems somewhat contradictory in most testers’ minds: beginning your testing life on the side of the enemy, as a developer (I say this jokingly, of course, having been one), has obvious benefits. One of the things that has helped me the most is having a sense of where developers are most likely to make mistakes, and gathering intelligence on the ground during the development cycle that serves as a set of indicators later on when testing begins. Being able to sit down and follow an architecture document, or a database schema and the SQL that drives it, goes a long way. Being able to appreciate, from a practitioner’s point of view, the different features and nuances of the selected application platform also helps a great deal. If I were to hand a tester a white paper on the iSpec benchmarks, or the benchmarks for, say, a VMware implementation, it would prove grueling, tiresome, and quite possibly boring unless you really love technology. Whereas functional testing requires a modicum of business savvy and a pinch of geek, performance testing requires a sizable helping of both. If you think performance testing is a heads-down affair, then you might be a scripter or developer at heart, missing the big-picture view required to address the concerns of the project.
At the end of the day the role requires a strong and balanced approach to problem solving and a true appreciation for the domain, neither of which comes from simply reading a book or a blog, or from completing a certification course.
Cheers!
It is ‘best practice’ to develop performance tests with an automated tool, such as WinRunner, so that response times from a user perspective can be measured in a repeatable manner with a high degree of precision. The same test scripts can later be re-used in a load test and the results can be compared back to the original performance tests.
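As a rough illustration of that idea (not WinRunner itself, just a stand-in sketch): script the user-facing transaction once, time it precisely so the baseline is repeatable, then reuse the same script under concurrency as a load test and compare the numbers back. The URL, iteration counts, and thread count below are placeholders.

```python
# Hypothetical sketch: one scripted transaction, timed precisely,
# reused for both a single-user baseline and a concurrent load test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/login"  # placeholder transaction endpoint

def timed_transaction(url: str = URL) -> float:
    """Run one user transaction and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# Single-user performance test: a repeatable baseline measurement.
baseline = [timed_transaction() for _ in range(10)]
print("baseline median (s):", statistics.median(baseline))

# The same script reused as a load test: many concurrent virtual users.
with ThreadPoolExecutor(max_workers=50) as pool:
    under_load = list(pool.map(lambda _: timed_transaction(), range(500)))
print("under-load median (s):", statistics.median(under_load))
```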
On February 15, 2008, HP Software announced the end-of-support for HP WinRunner versions 7.5, 7.6, 8.0, 8.2, and 9.2 (all versions, all editions), suggesting migration to HP QuickTest Professional as a replacement.