...
2.3 Risks & Contingencies
The following risks have been identified, which may impact the testing effort.
Risk | Contingency |
Production-like test environment not available | Use a development or production environment instead. Results may not be indicative of production and therefore cannot be used as a benchmark. Production performance issues may not be identified during testing. |
Production-like setup and settings not available. | Use the closest available setup and settings. Results may not be indicative of production and therefore cannot be used as a benchmark. Production performance issues may not be identified during testing. |
Fully operational test tools not available. | Wait until the test tools are available, or identify and use alternative test tools. This will extend the time required to perform testing. |
Test time increases due to changes in scope requiring additional test analysis and/or test case creation | If test time cannot be increased, reduce or cut performance testing scenarios: execute the highest-priority scenarios first, followed by lower-priority tests, until test time runs out. |
Involvement of subject matter experts (SMEs) for all stages of the testing effort |
...
not sufficient. | If test time cannot be increased, reduce or cut performance testing scenarios: execute the highest-priority scenarios first, followed by lower-priority tests, until test time runs out. |
Inadequate Non-functional Requirements | Missing pass/fail criteria invalidates benchmarking; missing load modeling invalidates all scenarios. Perform only a brute stress test to flush out major bottlenecks and functionality-under-load issues. Additionally, an endurance test can be run to attempt to identify memory leaks. All tests will be less indicative of real-world usage scenarios. |
Insufficient access to systems for monitoring (including any necessary server-side scripts that may need to be developed to capture desired metrics).
If any of the above items are not available, the testing effort will not reach all of its goals. If these items are delivered late, the testing effort will take longer than expected.
...
Root cause analysis will be difficult, if possible at all. Testing time will most likely need to be extended, and scenarios may be abbreviated due to time constraints. |
Substantial issue(s) requiring significant modifications to the application or re-configuration of the system
...
are encountered. | Some testing may need to be re-done, possibly including re-scripting. This would extend testing time. |
Excessive number of bottlenecks encountered and/or excessive issue-correction time. | Extend testing time. |
Test time increases due to changes in scope requiring additional test analysis and/or test script/scenario creation | If test time cannot be increased, reevaluate priorities and risks, and test according to the new priorities. |
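The "brute stress test" and endurance-test contingencies in the table above can be sketched as a minimal load harness. This is an illustrative sketch only: `transaction` is a hypothetical stand-in for the real system-under-test call (e.g., an HTTP request), and the worker and iteration counts are placeholders, not values from this plan.

```python
import concurrent.futures
import statistics
import time


def transaction():
    # Hypothetical stand-in for the system-under-test call, so the
    # sketch is self-contained; a real harness would issue a request here.
    time.sleep(0.001)


def stress_test(workers=8, iterations=50):
    """Run `iterations` transactions per worker and summarize latencies."""
    latencies = []

    def worker():
        results = []
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            results.append(time.perf_counter() - start)
        return results

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(lambda _: worker(), range(workers)):
            latencies.extend(result)

    return {
        "count": len(latencies),
        "mean_s": statistics.mean(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],  # ~95th percentile
    }


if __name__ == "__main__":
    print(stress_test())
```

An endurance run is the same loop left running for hours with memory usage sampled alongside it; a dedicated load tool (e.g., JMeter or LoadRunner) would normally replace a hand-rolled harness like this.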
3.0 Approach
3.1 Testing Strategy
...
- Stable, production-like system to test against.
- Stable hardware and software for generating load.
- Adequate rights and privileges to capture server-side metrics (monitoring), as well as to run any server-side scripts necessary to accomplish that monitoring.
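The monitoring access described above typically implies simple server-side collection scripts. Below is a minimal, Linux-oriented sketch that samples load average and available memory to a CSV file; the file paths, metric choices, and sampling cadence are assumptions for illustration, not requirements from this plan, and production monitoring would more likely use tools such as sar or vmstat.

```python
import csv
import os
import time


def _mem_available_kb():
    # Parse MemAvailable from /proc/meminfo (Linux-specific); return -1
    # if the file is unavailable on this platform.
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return -1


def sample_metrics(out_path, interval_s=5, samples=3):
    """Write `samples` rows of load-average and memory readings to CSV."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "load_1m", "mem_available_kb"])
        for _ in range(samples):
            load_1m = os.getloadavg()[0]  # 1-minute load average (Unix only)
            writer.writerow([round(time.time(), 1), load_1m,
                             _mem_available_kb()])
            time.sleep(interval_s)
```

Run during a test window (e.g., `sample_metrics("run1.csv", interval_s=5, samples=720)` for an hour of 5-second samples) and correlate the CSV timestamps with load-tool results afterward.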
4.0 Scripts
The following scripts will be used during the performance testing effort. When the design steps have been provided by MIT, all to-be-determined (TBD) values will be replaced with the actual values.
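One way to manage the TBD values mentioned above is to keep them as explicit sentinels so a script cannot run while a placeholder remains unfilled. The parameter names below are hypothetical, for illustration only; they are not taken from this plan's scripts.

```python
# Sentinel marking a value not yet provided by MIT.
TBD = object()

# Hypothetical script parameters; every TBD must be replaced before a run.
script_params = {
    "target_url": TBD,
    "virtual_users": TBD,
    "ramp_up_seconds": TBD,
}


def check_params(params):
    """Return the names of parameters still set to TBD."""
    return [name for name, value in params.items() if value is TBD]
```

A script would call `check_params` at startup and abort with the returned list if it is non-empty, making missing values obvious rather than silently defaulted.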
4.1 CAMS Account Creation
...
Touchstone Production Physical Architecture
8.2 IdPi Logical
Touchstone Production IdPi Logical Architecture