In the Back to Basics post, we defined planning with five key functions and broadly described how each relates to Test Planning. Let’s dive deeper into the first point – Direct and coordinate action.
For Test Planning, we can refine this point to mean defining the normal process of testing for the project: what testers will be doing on a day-to-day basis. In some cases you will need to adjust your plan to fit your organization, but you should also use the planning process to drive organizational changes when they would improve your plan. Even if you don’t get the changes at first, bringing them up early lays the groundwork.
“Command and control can also be viewed as the process of adapting an organization to its surroundings”
—Marine Corps Doctrinal Publication (MCDP) 5, Planning
Define the tools that will be used (and how)
The first step in test planning is determining how tests will be executed. Take into account the following factors: what tools the organization already uses, what tools your team knows, the complexity of the application(s) under test, the timeline, the maturity of the requirements, and the nature of the testing project.
Whatever you do, do not leap to the conclusion that the test project will be manual or automated, and do not immediately assume you should use whatever tools are the organizational standard. Test tools are like any other tool – you wouldn’t grab a blowtorch to hammer in a nail, and you wouldn’t grab a hammer to weld pipe, so why would you pick a tool before you’ve looked at what you’re testing?
Once you choose a tool (or decide to use none), give yourself the flexibility to build new tools as cases warrant. Just require that a short design spec for any tool (or process) you devise be included in the Test Summary Report or the appropriate test case.
Example from the field:
A report generation process for a web application was originally designed so that reports were assigned round-robin to one of three reporting processes, in the order they were requested (report 1 goes to process 1, report 2 to process 2, report 3 to process 3, report 4 back to process 1, and so on). The design flaw was that a large report would hog its process, and reports would back up in that process’s queue behind it.
The new design used a single queue: each reporting process picked up the next report only when it finished its current one. Testing was asked to determine the performance of the new report generation process.
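To make the design change concrete, here is a minimal sketch of the two dispatch models in Python. The names and structure are invented for illustration – the real system was a web application, not a Python script – and a “report” here is just a callable that generates the report.

```python
import queue
import threading

# Old design: round-robin assignment. A slow report blocks every
# report already assigned to the same worker's private queue.
def assign_round_robin(reports, num_workers=3):
    private_queues = [queue.Queue() for _ in range(num_workers)]
    for i, report in enumerate(reports):
        private_queues[i % num_workers].put(report)  # report N -> process (N mod 3)
    return private_queues

# New design: one shared queue. A worker pulls the next report only
# when it finishes its current one, so a large report delays at most
# one worker instead of everything queued behind it.
def run_shared_queue(reports, num_workers=3):
    shared = queue.Queue()
    for report in reports:
        shared.put(report)

    def worker():
        while True:
            try:
                report = shared.get_nowait()
            except queue.Empty:
                return
            report()  # generate the report

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```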
“Since this is a performance test, we’re going to use LoadRunner.”
Our first problem was that the original tester could never get LoadRunner to pick up the correct options on the window where users defined reports. Hours of work went into trying to get LoadRunner to pick up the relevant data so we could generate reports, but no one who tried could figure it out.
Since LoadRunner wasn’t working, we went back to the drawing board. We collaborated with the dev lead and came up with an elegant solution: the developer added code to record the time the process picked up the report and the time it completed, and save the difference to the database. We then took advantage of the fact that the reporting queue did not delete the report generation information – it merely flipped a flag to show the report was complete. By flipping the flag back to incomplete, we could rerun the same report as many times as we wished. Our test effort was then simple: generate a suite of reports that met our defined criteria, then write SQL queries to flip the flag on whichever reports each test needed, sending them back through the report generation process and saving the times for us.
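The flag flip itself reduces to a trivial update. Here is a hedged sketch in Python against a hypothetical schema – the table and column names (report_queue, completed, picked_up_at, finished_at) are invented for illustration; the real system’s differed.

```python
import sqlite3

def requeue_reports(conn: sqlite3.Connection, report_ids: list) -> None:
    """Flip completed reports back to incomplete so the generation
    process picks them up (and times them) again."""
    conn.executemany(
        "UPDATE report_queue SET completed = 0 WHERE report_id = ?",
        [(rid,) for rid in report_ids],
    )
    conn.commit()

def fetch_timings(conn: sqlite3.Connection):
    """Read back the durations the developer's instrumentation saved."""
    return conn.execute(
        "SELECT report_id, finished_at - picked_up_at AS duration "
        "FROM report_queue WHERE completed = 1"
    ).fetchall()
```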
Once we were done, we wrote a short document explaining the code inserts and our test methodology, so that future tests could reuse our work.
Define the defect process
A proper defect process contains a defect life cycle, a triage process, and a determination of when root cause analysis is required.
The defect life cycle should determine both who will be responsible for defects throughout their life and at what points defects should be handed off to other parties. For example, map out when defects are assigned to training, or what will be done with Postponed defects.
A sample defect lifecycle can be seen mapped out here (http://www.buzzle.com/editorials/4-6-2005-68177.asp), but your plan should also include processes for requirements questions/updates, root cause analyses, and training updates. For projects with both a formal and an informal component (such as those within regulated environments), you should have separate processes mapped for informal defects (with less stringency) and formal defects.
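At its core, a defect lifecycle is just a state machine. Here is one plausible set of states and transitions, sketched in Python – illustrative only, not a prescription; your plan should enumerate its own.

```python
# One plausible defect lifecycle as a state machine. The states and
# transitions are examples, not a standard.
ALLOWED_TRANSITIONS = {
    "New":       {"Assigned", "Rejected"},
    "Assigned":  {"Fixed", "Postponed", "Rejected"},
    "Fixed":     {"Retest"},
    "Retest":    {"Closed", "Reopened"},
    "Reopened":  {"Assigned"},
    "Postponed": {"Assigned"},   # e.g. picked up again in a later release
    "Rejected":  {"Reopened", "Closed"},
    "Closed":    set(),
}

def transition(defect: dict, new_state: str) -> dict:
    """Move a defect to a new state, enforcing the lifecycle."""
    current = defect["state"]
    if new_state not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new_state}")
    defect["state"] = new_state
    return defect

bug = {"id": "DEF-101", "state": "New"}
transition(bug, "Assigned")   # fine
# transition(bug, "Closed")   # would raise: Assigned -> Closed not allowed
```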
Define the test design methodology (scripts, charters, exploratory, a mix)
Test design methodologies are as varied as testers themselves. When planning your test effort, consider the following:
- Application complexity
- Project complexity
- Tester skill level
- Reusability of test materials
- Expected requirement stability (aim low)
- Organizational standards
- Regulatory requirements and auditor expectations (for regulated environments)
Like the defect process, you may find yourself with both a formal and an informal test execution phase, in which case you should feel free to have a supplementary informal test suite that complements your formal suite. This gives you the freedom to do charter-based and exploratory testing, while still having the formal scripts expected in your formal validation package.
Just like picking tools, don’t plan your methodology until you’ve taken a good look at what your project entails. If you know that you’ll be heavily reliant on staff augmentation when it’s time to execute your tests, then you will need to spend more effort on scripting, since charter and exploratory tests are geared towards people who already have a base understanding of the system. Likewise, sticking to a test suite that has only test scripts in an environment with low requirement stability and high application complexity is a guaranteed way to waste a lot of time.
Define how test cycles will be scheduled, and how duration will be determined
(Definitions: a test phase, as used here, is the test effort for one type of testing – for example, you may have a system test phase for the system itself, followed by an integration test phase for interfaces, followed by a user acceptance test phase. A test cycle is one execution of the test suite within a test phase. A test phase need not have more than one cycle, and need not have defined cycles at all.)
At the test planning stage of a new test effort (as opposed to a maintenance effort), you will probably not be able to exactly define how long test cycles and phases will be, or when they will start and end. However, you will know what phases your test effort will have, and you can define entry/exit criteria for each cycle and phase. You should also define exactly what activities will be performed in each cycle or phase. Some test phases will overlap (for example, a performance test phase often can be run concurrently with other phases), while some phases should not run until another is complete.
Test cycles can be defined as a full execution of all test cases, or as a partial execution using defined methodology (ex: all previously failed cases plus cases that are touched by code that has recently changed).
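The partial-execution rule can be made mechanical. Here is a minimal sketch of the example rule just given – the test-case fields (last_result, modules) are invented for illustration:

```python
# Pick the cases for a partial regression cycle: everything that
# failed last cycle, plus anything touched by recently changed code.
def select_partial_cycle(test_cases, changed_modules):
    selected = []
    for case in test_cases:
        failed_last_cycle = case["last_result"] == "fail"
        touches_changed_code = bool(set(case["modules"]) & set(changed_modules))
        if failed_last_cycle or touches_changed_code:
            selected.append(case)
    return selected

suite = [
    {"id": "TC-1", "last_result": "pass", "modules": ["billing"]},
    {"id": "TC-2", "last_result": "fail", "modules": ["reports"]},
    {"id": "TC-3", "last_result": "pass", "modules": ["auth"]},
]
print(select_partial_cycle(suite, changed_modules=["auth"]))
# -> TC-2 (failed last cycle) and TC-3 (touched by changed code)
```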
Test cycle entry/exit criteria might include:
- All test cases that have been completely designed have been run at least once, or have been flagged as being blocked by a defect.
- All returned defects from the current and previous cycles have been retested and either closed, forwarded, or rejected.
To avoid confusion, you should not have multiple test cycles active at once. Testing is hard enough.
Test phase entry/exit criteria might include:
- 100% requirements depth (all requirements have been tested in some fashion – does not require passing!)
- x% pass rate or conditional pass rate – a conditional pass rate counts test cases that passed completely plus those whose only defects are low severity.
- All high severity defects resolved, or accepted into the next phase.
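To make the arithmetic behind these criteria concrete, here is a hedged sketch of an exit check. The record fields and the 95% example threshold are assumptions, not a standard – substitute whatever your plan actually defines.

```python
def requirements_depth(requirement_ids, tested_ids):
    """Fraction of requirements exercised by at least one test
    (passing not required)."""
    return sum(1 for r in requirement_ids if r in tested_ids) / len(requirement_ids)

def conditional_pass_rate(cases):
    """Cases that passed outright, plus cases whose only open
    defects are low severity."""
    ok = sum(
        1 for c in cases
        if c["result"] == "pass"
        or (c["open_defects"]
            and all(d["severity"] == "low" for d in c["open_defects"]))
    )
    return ok / len(cases)

def phase_exit_met(requirement_ids, tested_ids, cases, open_defects,
                   min_pass_rate=0.95):
    return (
        requirements_depth(requirement_ids, tested_ids) == 1.0
        and conditional_pass_rate(cases) >= min_pass_rate
        and not any(d["severity"] == "high" and not d.get("accepted")
                    for d in open_defects)
    )
```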
Don’t think that you have to have everything complete in one phase to move to the next – if your organization allows it, push some activities to be done concurrently with the next phase (documentation updates, final low-severity defect retests, re-running a failed script if appropriate). The point of test phases is to define your test effort, not to be a straitjacket.
Sample Test Plan (5 phases)
- Performance Test Phase
  - Entry criteria: System architecture design phase begun
  - Exit criteria: All tests executed, all high severity defects resolved or accepted.
- Interface Test Phase (tests interfaces alone)
  - Entry criteria: At least a draft requirements design exists
  - Exit criteria: All tests executed, 100% interface requirements depth, all high severity defects resolved or accepted, all requirements documents approved.
- System Test Phase (tests the system without interfaces)
  - Entry criteria: At least a draft requirements design exists
  - Exit criteria: All tests executed, 100% system requirements depth, all high severity defects resolved or accepted.
- Integration Test Phase (end-to-end test with interfaces)
  - Entry criteria: Interface Test Phase and System Test Phase exit criteria met, or an approved exception exists. All in-scope interfaces active. All requirements documents approved. Integration Test Suite complete and approved.
  - Exit criteria: All tests executed, 100% integration requirements depth, all high severity defects resolved or accepted.
- User Acceptance Test (UAT) Phase
  - Entry criteria: Interface Test Phase and System Test Phase exit criteria met, or an approved exception exists. All in-scope interfaces active. User Acceptance Test Suite complete and approved.
  - Exit criteria: All tests executed, all high severity defects resolved or accepted.
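If you track phases somewhere machine-readable, a plan like this collapses into a small data structure, so the criteria live in one checkable place instead of scattered prose. A sketch – the structure is illustrative, and the last two phases are abbreviated:

```python
TEST_PLAN = {
    "Performance": {
        "entry": ["System architecture design phase begun"],
        "exit": ["All tests executed",
                 "All high severity defects resolved or accepted"],
        "overlaps_with": [],  # not tied to any other phase
    },
    "Interface": {
        "entry": ["Draft requirements design exists"],
        "exit": ["All tests executed",
                 "100% interface requirements depth",
                 "All high severity defects resolved or accepted",
                 "All requirements documents approved"],
        "overlaps_with": ["System"],
    },
    "System": {
        "entry": ["Draft requirements design exists"],
        "exit": ["All tests executed",
                 "100% system requirements depth",
                 "All high severity defects resolved or accepted"],
        "overlaps_with": ["Interface", "UAT"],
    },
    # Integration and UAT follow the same shape, with entry criteria
    # referencing the Interface and System phases' exit criteria.
}
```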
A lot of the entry and exit criteria are similar here, but the important parts are that:
- We specify in which phase milestone documents must be approved.
- We require test suite completion and approval for the final two phases. The earlier phases, each being the first test of its respective piece, expect the suite to evolve.
- The Interface and System phases are allowed to overlap, as are System and UAT. Performance is not tied to any other phase. Not every organization will allow this, but allowing phases to overlap gives you flexibility. Large test efforts may require more rigidity, to avoid confusion or overload.