Software Testing


Cognizant Technology Solutions - Proprietary & Confidential

Table of Contents

1 Introduction to Software
  1.1 Evolution of the Software Testing Discipline
  1.2 The Testing Process and the Software Testing Life Cycle
  1.3 Broad Categories of Testing
  1.4 Widely Employed Types of Testing
  1.5 The Testing Techniques
  1.6 Chapter Summary
2 Black Box and White Box Testing
  2.1 Introduction
  2.2 Black Box Testing
  2.3 Testing Strategies/Techniques
  2.4 Black Box Testing Methods
  2.5 Black Box (vs) White Box
  2.6 White Box Testing
3 GUI Testing
  3.1 Section 1 - Windows Compliance Testing
  3.2 Section 2 - Screen Validation Checklist
  3.3 Specific Field Tests
  3.4 Validation Testing - Standard Actions
4 Regression Testing
  4.1 What is Regression Testing
  4.2 Test Execution
  4.3 Change Request
  4.4 Bug Tracking
  4.5 Traceability Matrix
5 Phases of Testing
  5.1 Introduction
  5.2 Types and Phases of Testing
  5.3 The "V" Model
6 Integration Testing
  6.1 Generalization of Module Testing Criteria
7 Acceptance Testing
  7.1 Introduction - Acceptance Testing
  7.2 Factors Influencing Acceptance Testing
  7.3 Conclusion
8 System Testing
  8.1 Introduction to System Testing
  8.2 Need for System Testing
  8.3 System Testing Techniques
  8.4 Functional Techniques
  8.5 Conclusion
9 Unit Testing
  9.1 Introduction to Unit Testing
  9.2 Unit Testing - Flow (Results; Black Box Approach; White Box Approach; Field Level Checks; Field Level Validations; User Interface Checks)
  9.3 Execution of Unit Tests (Unit Testing Flow; Disadvantage of Unit Testing; Method for Statement Coverage; Race Coverage)
  9.4 Conclusion
10 Test Strategy
  10.1 Introduction
  10.2 Key Elements of Test Management
  10.3 Test Strategy Flow
  10.4 General Testing Strategies
  10.5 Need for Test Strategy
  10.6 Developing a Test Strategy
  10.7 Conclusion
11 Test Plan
  11.1 What is a Test Plan? (Contents of a Test Plan)
  11.2 Contents (in Detail)
12 Test Data Preparation - Introduction
  12.1 Criteria for Test Data Collection
  12.2 Classification of Test Data Types
  12.3 Organizing the Data
  12.4 Data Load and Data Maintenance
  12.5 Testing the Data
  12.6 Conclusion
13 Test Logs - Introduction
  13.1 Factors Defining the Test Log Generation
  13.2 Collecting Status Data
14 Test Report
  14.1 Executive Summary
15 Defect Management
  15.1 Defect
  15.2 Defect Fundamentals
  15.3 Defect Tracking
  15.4 Defect Classification
  15.5 Defect Reporting Guidelines
16 Automation
  16.1 Why Automate the Testing Process?
  16.2 Automation Life Cycle
  16.3 Preparing the Test Environment
  16.4 Automation Methods
17 General Automation Tool Comparison
  17.1 Functional Test Tool Matrix
  17.2 Record and Playback
  17.3 Web Testing
  17.4 Database Tests
  17.5 Data Functions
  17.6 Object Mapping
  17.7 Image Testing
  17.8 Test/Error Recovery
  17.9 Object Name Map
  17.10 Object Identity Tool
  17.11 Extensible Language
  17.12 Environment Support
  17.13 Integration
  17.14 Cost
  17.15 Ease of Use
  17.16 Support
  17.17 Object Tests
  17.18 Matrix
  17.19 Matrix Score
18 Sample Test Automation Tool
  18.1 Rational Suite of Tools
  18.2 Rational Administrator
  18.3 Rational Robot
  18.4 Robot Login Window
  18.5 Rational Robot Main Window - GUI Script
  18.6 Record and Playback Options
  18.7 Verification Points
  18.8 About SQABasic Header Files
  18.9 Adding Declarations to the Global Header File
  18.10 Inserting a Comment into a GUI Script
  18.11 About Data Pools
  18.12 Debug Menu
  18.13 Compiling the Script
  18.14 Compilation Errors
19 Rational Test Manager
  19.1 Test Manager - Results Screen
20 Supported Environments
  20.1 Operating System
  20.2 Protocols
  20.3 Web Browsers
  20.4 Markup Languages
  20.5 Development Environments
21 Performance Testing
  21.1 What is Performance Testing?
  21.2 Why Performance Testing?
  21.3 Performance Testing Objectives
  21.4 Pre-Requisites for Performance Testing
  21.5 Performance Requirements
22 Performance Testing Process
  22.1 Phase 1 - Requirements Study
  22.2 Phase 2 - Test Plan
  22.3 Phase 3 - Test Design
  22.4 Phase 4 - Scripting
  22.5 Phase 5 - Test Execution
  22.6 Phase 6 - Test Analysis
  22.7 Phase 7 - Preparation of Reports
  22.8 Common Mistakes in Performance Testing
  22.9 Benchmarking Lessons
23 Tools
  23.1 LoadRunner 6.5
  23.2 WebLoad 4.5
  23.3 Architecture Benchmarking
  23.4 General Tests
24 Performance Metrics
  24.1 Client Side Statistics
  24.2 Server Side Statistics
  24.3 Network Statistics
  24.4 Conclusion
25 Load Testing
  25.1 Why is Load Testing Important?
  25.2 When Should Load Testing Be Done?
26 Load Testing Process
  26.1 System Analysis
  26.2 User Scripts
  26.3 Settings
  26.4 Performance Monitoring
  26.5 Analyzing Results
  26.6 Conclusion
27 Stress Testing
  27.1 Introduction to Stress Testing
  27.2 Background to Automated Stress Testing
  27.3 Automated Stress Testing Implementation
  27.4 Programmable Interfaces
  27.5 Graphical User Interfaces
  27.6 Data Flow Diagram
  27.7 Techniques Used to Isolate Defects
28 Test Case Coverage
  28.1 Test Coverage
  28.2 Test Coverage Measures
  28.3 Procedure-Level Test Coverage
  28.4 Line-Level Test Coverage
  28.5 Condition Coverage and Other Measures
  28.6 How Test Coverage Tools Work
  28.7 Test Coverage Tools at a Glance
29 Test Case Points - TCP
  29.1 What is a Test Case Point (TCP)
  29.2 Calculating the Test Case Points
  29.3 Chapter Summary
1 Introduction to Software

1.1 Evolution of the Software Testing Discipline
The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term "software engineering" was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus we see that the software crisis - of quality, reliability, high costs and so on - started way back, when most of today's software testers were not even born!

The attitude towards software testing has undergone a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. When compilers were developed in the 1960s, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and more cost-effective software. There has also been a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline - and also as a career choice!

Now, to answer "What is testing?" we can go by the famous definition of Myers, which says: "Testing is the process of executing a program with the intent of finding errors."

1.2 The Testing Process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process, given below:
1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

According to the respective project, the scope of testing can be tailored, but the process mentioned above is common to any testing activity.

Software testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till the implementation, there needs to be testing done on every phase. The V-Model of the Software Testing Life Cycle, shown alongside the Software Development Life Cycle below, indicates the various phases or levels of testing.

[V-Model diagram, not reproduced in this copy: Requirement Study, High Level Design and Low Level Design on the SDLC arm; User Acceptance Testing, System Testing and Integration Testing on the STLC arm; Unit Testing at the base and Production Verification Testing at the top.]

1.3 Broad Categories of Testing
Based on the V-Model mentioned above, we see that there are two categories of testing activities that can be done on software, namely:
- Static Testing
- Dynamic Testing

The kind of verification we do on the software work products before the process of compilation and creation of an executable - requirement review, design review, code review, walkthrough and audits - is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

1.4 Widely Employed Types of Testing
From the V-Model, we see that there are various levels or phases of testing, namely Unit Testing, Integration Testing, System Testing, User Acceptance Testing, etc. Let us see a brief definition of the widely employed types of testing.

Unit Testing: The testing done to a unit or to the smallest piece of software. Done to verify that it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software for the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Performance Testing: Testing conducted to evaluate the time taken, or response time, of the system to perform its required functions, in comparison with the requirements.

Stress Testing: Testing conducted to evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory or processor utilization) to ensure the system does not break unexpectedly.

Load Testing: A subset of stress testing; it verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.

1.5 The Testing Techniques
To perform these types of testing, there are two widely used testing techniques. The above-said testing types are performed based on the following techniques.

Black-Box testing technique: This technique is used for testing based solely on analysis of requirements (specification, user documentation, etc.). Also known as functional testing.

White-Box testing technique: This technique is used for testing based on analysis of internal logic (design, code, etc.), but the expected results still come from the requirements. Also known as structural testing. It is used to detect errors by means of execution-oriented test cases.

These topics will be elaborated in the coming chapters.

1.6 Chapter Summary
This chapter covered the introduction and basics of software testing, mentioning:
- Evolution of software testing
- The testing process and life cycle
- Broad categories of testing
- Widely employed types of testing
- The testing techniques
2 Black Box and White Box Testing

2.1 Introduction
Test design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are two primary methods by which tests can be designed:
- Black box
- White box

Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it is still discouraged. In practice, it hasn't proven useful to use a single test design method; one has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!

2.2 Black Box Testing
Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used. (A minimal sketch of this input/output-only style appears at the end of this section.)

Though centered around the knowledge of user requirements, black box tests do not necessarily involve the participation of users. Among the most important black box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black box tests will be described briefly.

2.2.1 Black box testing - without user involvement
The so-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. incorrect buffer sizes or a consumption of too much memory space, or it may only show that an error message would be needed telling the user that the system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data or perform many function calls within a short period of time. A typical example could be to perform the same function from all workstations connected in a LAN within a short period of time (e.g. to modify a term bank via different terminals simultaneously).

The aim of recovery testing is to make sure to what extent data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or part of it? How much can be recovered, and how? Is the recovered data still correct and consistent? Particularly for software that needs to meet high reliability standards, recovery testing is very important.

The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment, and therefore benchmark tests always consider the soft/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems as benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

2.2.2 Black box testing - with user involvement
For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.

In field tests, users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organisational integration of the software system into existing procedures. Particularly in the NLP environment this problem has frequently been underestimated. A typical example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored, and, last but not least, that individual translators are not too motivated to change their working habits.

Laboratory tests are mostly performed to assess the general usability of the system. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests. Due to the high laboratory equipment costs, laboratory tests are mostly only performed at big software houses such as IBM or Microsoft.

The term "scenario" entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software. It is an instance of black box testing where the major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system into its intended use by its envisaged type of user, performing a standardised task.
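To make the input/output-only discipline concrete, here is a minimal sketch in Java. The discount rule and the class are hypothetical, invented purely for illustration, and the stand-in implementation exists only so the sketch compiles; a real black-box tester would see the specification alone, never this code.

    // BlackBoxExample.java - a minimal black-box test sketch.
    // Hypothetical specification: orders under 100 get no discount,
    // 100-499 get 5%, 500 and above get 10%.
    public class BlackBoxExample {

        // Stand-in implementation so the sketch compiles; in a real
        // black-box test this code would be invisible to the tester.
        static double discount(double orderTotal) {
            if (orderTotal >= 500) return 0.10;
            if (orderTotal >= 100) return 0.05;
            return 0.0;
        }

        public static void main(String[] args) {
            // Each check uses only a "legal" input and the expected output
            // taken from the specification - never the code's internal logic.
            check(discount(50),  0.0,  "below first threshold");
            check(discount(100), 0.05, "on first boundary");
            check(discount(499), 0.05, "just below second boundary");
            check(discount(500), 0.10, "on second boundary");
        }

        static void check(double actual, double expected, String label) {
            System.out.println((actual == expected ? "PASS " : "FAIL ") + label);
        }
    }

Note how every test case is phrased purely in terms of the specification; the tester needs no access to the branching inside discount().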
2.3 Testing Strategies/Techniques
- Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester as to the methods of the function.
- Data outside of the specified input range should be tested, to check the robustness of the program.
- Boundary cases should be tested (top and bottom of the specified range), to make sure the highest and lowest allowable inputs produce proper output.
- The number zero should be tested when numerical data is to be input.
- Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real-time systems.
- Crash testing should be performed to see what it takes to bring the system down.
- Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in software maintenance.
- Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.
- Finite state machine models can be used as a guide to design functional tests.

According to Beizer, the following is a general order by which tests should be designed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage, as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate - syntax, loop, state, etc.
6. Any dirty tests not covered by the above.

2.4 Black Box Testing Methods

2.4.1 Graph-based Testing Methods
- Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph.
- Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated).
- Finite state modeling (nodes represent user-observable states of the software and links represent transitions between states).
- Data flow modeling (nodes are data objects and links are transformations from one data object to another).
- Timing modeling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times).

2.4.2 Equivalence Partitioning
A black-box technique that divides the input domain into classes of data from which test cases can be derived. An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before a general error is observed.

Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class is defined.

2.4.3 Boundary Value Analysis
A black-box technique that focuses on the boundaries of the input domain rather than its center.

BVA guidelines:
1. If an input condition specifies a range bounded by values a and b, test cases should include a and b, as well as values just above and just below a and b.
2. If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum.
3. Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the minimum and maximum output reports.
4. If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries.

(A worked sketch combining equivalence partitioning and boundary value analysis appears at the end of this section.)

2.4.4 Comparison Testing
Black-box testing for safety-critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications. Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

2.4.5 Orthogonal Array Testing
A black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on categories of faulty logic likely to be present in the software component (without examining the code).

Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single-mode faults.
2. Detect all double-mode faults.
3. Multimode faults.

2.4.6 Specialized Testing
- Graphical user interfaces
- Client/server architectures
- Documentation and help facilities
- Real-time systems:
  1. Task testing (test each time-dependent task independently)
  2. Behavioral testing (simulate system response to external events)
  3. Intertask testing (check communications errors among tasks)
  4. System testing (check interaction of integrated system software and hardware)

2.4.7 Advantages of Black Box Testing
- More effective on larger units of code than glass box testing.
- The tester needs no knowledge of implementation, including specific programming languages.
- Tester and programmer are independent of each other.
- Tests are done from a user's point of view.
- Will help to expose any ambiguities or inconsistencies in the specifications.
- Test cases can be designed as soon as the specifications are complete.

2.4.8 Disadvantages of Black Box Testing
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever.
- Without clear and concise specifications, test cases are hard to design.
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried.
- May leave many program paths untested.
- Cannot be directed toward specific segments of code, which may be very complex (and therefore more error-prone).
- Most testing-related research has been directed toward glass box testing.
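As referenced under 2.4.3 above, here is a worked sketch applying equivalence partitioning and boundary value analysis to a hypothetical input field that accepts whole numbers from 1 to 100 inclusive. The validator is a stand-in invented for the sketch; in practice the derived values would be fed to the application under test.

    // EpBvaExample.java - equivalence partitioning and boundary value
    // analysis for a hypothetical field accepting integers 1..100.
    public class EpBvaExample {

        // Stand-in validator so the sketch is runnable; a real black-box
        // test would exercise the application under test instead.
        static boolean isValid(int n) {
            return n >= 1 && n <= 100;
        }

        public static void main(String[] args) {
            // Equivalence partitioning: one valid class (1..100) and two
            // invalid classes (below 1, above 100) - one representative each.
            int[] representatives = {50, -7, 250};

            // Boundary value analysis: a and b, plus values just above and
            // just below each boundary (zero included, per the strategy list).
            int[] boundaries = {0, 1, 2, 99, 100, 101};

            for (int n : representatives)
                System.out.println("EP  input " + n + " -> valid=" + isValid(n));
            for (int n : boundaries)
                System.out.println("BVA input " + n + " -> valid=" + isValid(n));
        }
    }

Three representatives cover the whole partitioned domain, and six boundary probes target the places where off-by-one errors most often hide.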
2.5 Black Box (vs) White Box
An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you're testing an electronics system. It's housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can't see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software.

An opposite test approach would be to open up the electronics system, see how the circuits are wired, apply probes internally and maybe even disassemble parts of it. By analogy, this is called white box testing. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the "blackness" is blocking.

To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:
1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you've found a bug)

Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing?
Some people know how software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called "black box" testing. Developer testing is called "white box" testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested?
If we draw the box around the system as a whole, "black box" testing becomes another name for system testing. And testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria; both are supported by extensive literature and commercial tools. Requirements-based testing could be called "black box" because it makes sure that all the customer requirements have been verified. Code-based testing is often called "white box" because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as "white box". Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed "black box".

Activities: How do you test?
A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definition, it is often called "black box", while structural testing - based on the code internals - is called "white box". Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) with black box testing, and maps code inspection (static testing) with white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and to tools that facilitate applying inputs and capturing outputs - most notably GUI capture replay tools - as black box tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug?
There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. (A minimal sketch of the assertion idea appears at the end of this section.) All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious in the determination of suitable input data and the determination of whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive: a failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths.

To conclude, apart from the above-described analytical methods of both glass and black box testing, there are further constructive means to guarantee high-quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and last but not least the involvement of users in both software development and testing procedures.

Summary: Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).
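As referenced in the Evaluation discussion above, here is a minimal sketch of the assertion idea. The account class and its invariant are hypothetical, invented for illustration; the probe simply makes a fault that might otherwise be masked fail loudly at the point of corruption.

    // AssertionProbe.java - instrumenting code with assertions so a hidden
    // fault is reported where it happens instead of surfacing downstream.
    public class AssertionProbe {

        static class Account {
            private int balanceInCents;

            void deposit(int cents) {
                balanceInCents += cents;
                // Probe: the class invariant is checked at every exit point.
                assert balanceInCents >= 0 : "invariant violated: " + balanceInCents;
            }

            void withdraw(int cents) {
                balanceInCents -= cents;
                // Without this probe, an overdraw could be masked by later
                // deposits and never lead to an obvious failure.
                assert balanceInCents >= 0 : "invariant violated: " + balanceInCents;
            }
        }

        public static void main(String[] args) {
            Account a = new Account();
            a.deposit(100);
            a.withdraw(250); // fires an AssertionError when run with: java -ea AssertionProbe
        }
    }

Run with assertions disabled (the JVM default) the fault stays silent; run with -ea the probe pinpoints it, which is exactly the visibility argument made above.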
2.6 White Box Testing
White box testing covers software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, since white boxes are considered opaque and do not really permit visibility into the code.

Synonyms for white box testing:
- Glass box testing
- Structural testing
- Clear box testing
- Open box testing

Types of White Box Testing
A typical rollout of a product is shown in Figure 1 below. (Figure 1 is not reproduced in this copy.)

The purpose of white box testing:
- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
- Provide a complementary function to black box testing.
- Perform complete coverage at the component level.
- Improve quality by optimizing performance.

Practices
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
1. The allocation of resources to perform class and method analysis and to document and review the same.
2. Developing a test harness made up of stubs, drivers and test object libraries.
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.

1 Code Coverage Analysis

1.1 Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. These are test cases that exercise the basis set and will execute every statement at least once.

1.1.1 Flow Graph Notation
A notation for representing control flow, similar to flow charts and UML activity diagrams.

1.1.2 Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity thus provides an upper bound for the number of tests required to guarantee coverage of all program statements.

1.2 Control Structure Testing

1.2.1 Conditions Testing
Condition testing aims to exercise all logical conditions in a program module. These may be defined as:
- Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
- Simple condition: Boolean variable or relational expression, possibly preceded by a NOT operator.
- Compound condition: composed of two or more simple conditions, Boolean operators and parentheses.
- Boolean expression: a condition without relational expressions.

1.2.2 Data Flow Testing
Selects test paths according to the location of definitions and uses of variables.

1.2.3 Loop Testing
Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated, nested, or unstructured. Note that unstructured loops are not to be tested; rather, they are redesigned.

2 Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
- Conditions that the client must meet before a method is invoked.
- Conditions that a method must meet after it executes.
- Assertions that a method must satisfy at specific points of its execution.

Tools that check DbC contracts at runtime, such as JContract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function.
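A minimal sketch of the DbC style follows. The @pre/@post comment tags are an illustrative convention, not JContract's actual syntax, and the runtime checks use plain Java asserts rather than a contract-checking tool; the transfer method itself is hypothetical.

    // DbcSketch.java - contracts expressed in comments, checked with asserts
    // (enable the runtime checks with: java -ea DbcSketch).
    public class DbcSketch {

        /**
         * Transfers cents between two balances.
         *
         * @pre  amount > 0      (client must pass a positive amount)
         * @pre  from >= amount  (client must have sufficient funds)
         * @post result[0] + result[1] == from + to  (money is conserved)
         */
        static int[] transfer(int from, int to, int amount) {
            // Precondition checks: conditions the client must meet.
            assert amount > 0 : "precondition violated: amount must be positive";
            assert from >= amount : "precondition violated: insufficient funds";

            int[] result = {from - amount, to + amount};

            // Postcondition check: condition the method must meet after it executes.
            assert result[0] + result[1] == from + to : "postcondition violated";
            return result;
        }

        public static void main(String[] args) {
            int[] balances = transfer(500, 100, 200);
            System.out.println(balances[0] + " / " + balances[1]); // 300 / 300
            transfer(100, 0, 500); // fires the insufficient-funds precondition under -ea
        }
    }

The point of the convention is that the contract lives beside the code it governs, so a runtime checker (or a reviewer) can verify client and method obligations mechanically.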
3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance. Options include the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.

4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over, operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate program design. Each of the individual parameters is tested individually against a reference data set.

5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) are preserved. Transactions are checked thoroughly for partial/complete commits and rollbacks, encompassing databases and other XA-compliant transaction processors.

Advantages of White Box Testing
• Forces the test developer to reason carefully about the implementation
• Approximates the partitioning done by execution equivalence
• Reveals errors in "hidden" code
• Beneficent side effects

Disadvantages of White Box Testing
• Expensive
• Cases omitted from the code could be missed out

3 GUI TESTING

What is GUI Testing?
GUI is the abbreviation for Graphic User Interface. It is absolutely essential that any application be user-friendly: the end user should be comfortable while using all the components on screen, and the components should perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI Testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI testing; it can be used even as a checklist while testing a product / application.

3.1 Section 1 - Windows Compliance Testing

3.1.1 Application
• Start the application by double clicking on its icon. The loading message should show the application name, version number, and a bigger pictorial representation of the icon. No login is necessary.
• The main window of the application should have the same caption as the caption of the icon in Program Manager.
• Closing the application should result in an "Are you Sure?" message box.
• Attempt to start the application twice. This should not be allowed - you should be returned to the main window. Also try to start the application twice as it is loading.
• On each window, if the application is busy, then the hour glass should be displayed. If there is no hour glass, then some enquiry-in-progress message should be displayed.
• All screens should have a Help button, and F1 should work the same.
• If the screen has a Control menu, then use all un-grayed options.
• Check all text on the window for spelling/tense and grammar - especially the error messages.
• Check that the title of the window makes sense.
• If the window has a Minimize button, click it. The window should return to an icon on the bottom of the screen, and this icon should correspond to the original icon under Program Manager. Double click the icon to return the window to its original size.
3.1.2 For Every Window in the Application
• The window caption should have the name of the application and the window name - especially for error messages. These should be checked for spelling, English and clarity, especially on the top of the screen.
• Use TAB to move focus around the window, and SHIFT+TAB to move focus backwards. Tab order should be left to right, and up to down within a group box on the screen. All controls should get focus - indicated by a dotted box or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
• The text in the Micro Help line should change as focus moves - check it for spelling, clarity, and correct handling of non-updateable fields.
• If a field is disabled (grayed) then it should not get focus. It should not be possible to select it with either the mouse or TAB. Try this for every grayed control.
• Never-updateable fields should be displayed with black text on a gray background with a black label.
• All text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status.
• List boxes always have a white background with black text, whether they are disabled or not. All other disabled controls are gray.
• In general, double-clicking is not essential; everything should be possible using both the mouse and the keyboard.
• All tab buttons should have a distinct letter.

3.1.3 Text Boxes
• Move the mouse cursor over all enterable text boxes. The cursor should change from an arrow to an insert bar. If it doesn't, then the text in the box should be gray or non-updateable (see above).
• Enter text into the box. Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
• Enter invalid characters - letters in amount fields; try strange characters like + - * etc. in all fields.
• SHIFT and arrow keys should select characters. Selection should also be possible with the mouse. Double click should select all text in the box.

3.1.4 Option (Radio) Buttons
• Left and right arrows should move the 'ON' selection. So should up and down. Selection should also be possible with the mouse, by clicking.

3.1.5 Check Boxes
• Clicking with the mouse on the box, or on its text, should SET/UNSET the box. SPACE should do the same.

3.1.6 Command Buttons
• If a command button leads to another screen, and the user can enter or change details on that other screen, then the text on the button should be followed by three dots.
• All buttons except OK and Cancel should have a letter access to them, indicated by an underlined letter in the button text. Pressing ALT+letter should activate the button. Make sure there is no duplication.
• Click each button once with the mouse - this should activate it. Tab to each button and press SPACE - this should activate it. Tab to each button and press RETURN - this should activate it. The above are VERY IMPORTANT, and should be done for EVERY command button.
• One button on the screen should be the default, indicated by a thick black border. Pressing RETURN in any non-command-button control should activate the default button.
• If there is a Cancel button on the screen, then pressing <Esc> should activate it.
• If pressing a command button results in uncorrectable data, e.g. closing an action step, there should be a message phrased positively with Yes/No answers, where Yes results in the completion of the action.

3.1.7 Drop Down List Boxes
• Pressing the arrow should give the list of options. This list may be scrollable. You should not be able to type text in the box.
• Pressing a letter should take you to the first item in the list starting with that letter. Pressing 'Ctrl-F4' should open/drop down the list box.
• Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box.
• When dropped down with an item selected, the list should display with the selected item at the top. Make sure there is no duplication and that the list doesn't have a blank line at the bottom.

3.1.8 Combo Boxes
• Should allow text to be entered. Clicking the arrow should allow the user to choose from the list.

3.1.9 List Boxes
• Should allow a single selection to be chosen, by clicking with the mouse or using the up and down arrow keys.
• Pressing a letter should bring you to the first item in the list that starts with that letter.
• If there is a 'View' or 'Open' button beside the list box, then double clicking on a line in the list box should act in the same way as selecting an item in the list box and then clicking the command button.
• Force the scroll bar to appear, and make sure all the data can be seen in the box.

3.2 Section 2 - Screen Validation Checklist

3.2.1 Aesthetic Conditions
1. Is the general screen background the correct color?
2. Are the field prompts the correct color?
3. Are the field backgrounds the correct color?
4. In read-only mode, are the field prompts the correct color?
5. In read-only mode, are the field backgrounds the correct color?
6. Are all the screen prompts specified in the correct screen font?
7. Is the text in all fields specified in the correct screen font?
8. Are all the field prompts aligned perfectly on the screen?
9. Are all the field edit boxes aligned perfectly on the screen?
10. Are all group boxes aligned correctly on the screen?
11. Should the screen be resizable?
12. Should the screen be allowed to minimize?
13. Are all the field prompts spelt correctly?
14. Are all character or alphanumeric fields left justified? This is the default unless otherwise specified.
15. Are all numeric fields right justified? This is the default unless otherwise specified.
16. Is all the micro-help text spelt correctly on this screen?
17. Is all the error message text spelt correctly on this screen?
18. Is all user input captured in UPPER case or lower case consistently?
19. Where the database requires a value (other than null), is it defaulted into the field? The user must either enter an alternative valid value or leave the default value intact.
20. Assure that all windows have a consistent look and feel.
21. Assure that all dialog boxes have a consistent look and feel.

3.2.2 Validation Conditions
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules, and if so, are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly, with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields, check whether negative numbers can and should be able to be entered.
7. For all numeric fields, check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields, check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size.
9. Do all mandatory fields require user input?
10. If any field which initially was mandatory has become optional, check whether null values are allowed in this field. If any of the database columns don't allow null values, then the corresponding screen fields must be mandatory.

3.2.3 Navigation Conditions
1. Can the screen be accessed correctly from the menu?
2. Can the screen be accessed correctly from the toolbar?
3. Can the screen be accessed correctly by double clicking on a list control on the previous screen?
4. Can all screens accessible via buttons on this screen be accessed correctly?
5. Can all screens accessible by double clicking on a list control be accessed correctly?
6. Is the screen modal? (i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?)
7. Can a number of instances of this screen be opened at the same time, and is this correct?

3.2.4 Usability Conditions
1. Are all the dropdowns on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
2. Is all date entry required in the correct format?
3. Have all pushbuttons on the screen been given appropriate shortcut keys?
4. Do the shortcut keys work correctly?
5. Have the menu options that apply to your screen got fast keys associated, and should they have?
6. Does the Tab order specified on the screen go in sequence from top left to bottom right? This is the default unless otherwise specified.
7. Are all read-only fields avoided in the TAB sequence?
8. Are all disabled fields avoided in the TAB sequence?
9. Can the cursor be placed in the micro-help text box by clicking on the text box with the mouse?
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs, does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be a lot longer than a shorter field.
3.2.5 Data Integrity Conditions
1. Is the data saved when the window is closed by double clicking on the close box?
2. Check the maximum field lengths to ensure that there are no truncated characters.
3. Where the database requires a value (other than null), it should be defaulted into the field. The user must either enter an alternative valid value or leave the default value intact.
4. Check maximum and minimum field values for numeric fields.
5. If numeric fields accept negative values, can these be stored correctly on the database, and does it make sense for the field to accept negative numbers?
6. If a particular set of data is saved to the database, check that each value gets saved fully to the database - beware of truncation of strings and rounding of numeric values.
7. If a set of radio buttons represents a fixed set of values, such as A, B and C, then what happens if a blank value is retrieved from the database? (In some situations rows can be created on the database by other functions which are not screen based, and thus the required initial values can be incorrect.)

3.2.6 Modes (Editable / Read-only) Conditions
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.

3.2.7 General Conditions
1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all tool bars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence which will invoke it, where appropriate.
5. Ensure that duplicate hot keys do not exist on each screen.
6. Ensure the proper usage of the escape key (which is to undo any changes that have been made), and that it generates a caution message "Changes will be lost - Continue yes/no".
7. Assure that the cancel button functions the same as the escape key.
8. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
9. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present - i.e. make sure they don't act on the screen behind the current screen.
10. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
11. Assure that OK and Cancel buttons are grouped separately from other command buttons.
12. Assure that command button names are not abbreviations.
13. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
14. Assure that command buttons are all of similar size and shape, and use the same font and font size.
15. Assure that each command button can be accessed via a hot key combination.
16. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
17. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
18. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
19. Assure that option button (and radio button) names are not abbreviations.
20. Assure that option button names are not technical labels, but rather are names meaningful to system users.
21. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
22. Assure that option box names are not abbreviations; ensure that the names are not cut short.
23. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
24. Assure that the Tab key sequence which traverses the screens does so in a logical way.
25. Assure consistency of mouse actions across windows.
26. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
27. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
28. Assure that the screen/window does not have a cluttered appearance.
29. Ctrl + F6 opens the next tab within a tabbed window; Shift + Ctrl + F6 opens the previous tab within a tabbed window.
30. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
31. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
32. Tabbing will go onto the next editable field in the window.
33. Banner style, size and display should be exactly the same as existing windows.
34. If there are 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
35. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
36. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
37. On open of a tab, focus will be on the first editable field.
38. All fonts should be the same.
39. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
40. Microhelp text should exist for every enabled field and button.
41. Progress messages should appear on load of tabbed screens.
42. Return operates continue.
43. Ensure all fields are disabled in read-only mode.
44. If the retrieve on load of a tabbed window fails, the window should not open.

3.3 Specific Field Tests

3.3.1 Date Field Checks
1. Assure that leap years are validated correctly and do not cause errors/miscalculations.
2. Assure that month codes 00 and 13 are validated correctly and do not cause errors/miscalculations.
3. Assure that 00 and 13 are reported as errors.
4. Assure that day values 00 and 32 are validated correctly and do not cause errors/miscalculations.
5. Assure that Feb. 28, 29, 30 are validated correctly and do not cause errors/miscalculations.
6. Assure that Feb. 30 is reported as an error.
7. Assure that century change is validated correctly and does not cause errors/miscalculations.
8. Assure that out of cycle dates are validated correctly and do not cause errors/miscalculations.
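As a minimal sketch of how several of these date checks can be automated (our illustration; the checklist itself does not prescribe a tool or language), the following Java snippet uses the standard java.time API, which in strict mode rejects invalid combinations such as month 13 or Feb. 30:

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;
    import java.time.format.ResolverStyle;

    // Illustrative date-field checks based on the checklist above.
    public class DateFieldChecks {

        // Strict resolution rejects month 00/13, day 00/32,
        // and Feb. 29 in non-leap years.
        private static final DateTimeFormatter STRICT =
                DateTimeFormatter.ofPattern("uuuu-MM-dd")
                                 .withResolverStyle(ResolverStyle.STRICT);

        static boolean isValid(String input) {
            try {
                LocalDate.parse(input, STRICT);
                return true;
            } catch (Exception e) {
                return false;  // invalid dates are reported as errors
            }
        }

        public static void main(String[] args) {
            System.out.println(isValid("2000-02-29")); // true  - leap year
            System.out.println(isValid("1900-02-29")); // false - 1900 is not a leap year
            System.out.println(isValid("2018-02-30")); // false - Feb. 30 must be an error
            System.out.println(isValid("2018-13-01")); // false - month 13
            System.out.println(isValid("2018-00-10")); // false - month 00
            System.out.println(isValid("2018-01-32")); // false - day 32
        }
    }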
3.3.2 Numeric Field Checks
1. Assure that lowest and highest values are handled correctly.
2. Assure that invalid values are logged and reported.
3. Assure that valid values are handled by the correct procedure.
4. Assure that numeric fields with a blank in position 1 are processed or reported as an error.
5. Assure that fields with a blank in the last position are processed or reported as an error.
6. Assure that both + and - values are correctly processed.
7. Assure that division by zero does not occur.
8. Include the value zero in all calculations.
9. Include at least one in-range value.
10. Include maximum and minimum range values.
11. Include out of range values above the maximum and below the minimum.
12. Assure that upper and lower values in ranges are handled correctly.

3.3.3 Alpha Field Checks
1. Use blank and non-blank data.
2. Include lowest and highest values.
3. Include invalid characters and symbols.
4. Include valid characters.
5. Include data items with the first position blank.
6. Include data items with the last position blank.

3.4 Validation Testing - Standard Actions

3.4.1 Examples of Standard Actions (substitute your specific commands)
• Add
• View
• Change
• Delete
• Continue (i.e. continue saving changes or additions)
• Cancel (i.e. abandon changes or additions)
• Fill each field - valid data
• Fill each field - invalid data
• Different check box / radio box combinations
• Scroll lists / drop down list boxes
• Help
• Fill lists and scroll
• Tab
• Tab sequence
• Shift Tab

3.4.2 Shortcut Keys / Hot Keys
Note: The following keys are used in some Windows applications, and are included as a guide.

    Key      | No Modifier                      | Shift                              | CTRL                                        | ALT
    F1       | Help                             | Enter Help Mode                    | N/A                                         | N/A
    F2       | N/A                              | N/A                                | N/A                                         | N/A
    F3       | N/A                              | N/A                                | N/A                                         | N/A
    F4       | N/A                              | N/A                                | Close Document / Child window               | Close Application
    F5       | N/A                              | N/A                                | N/A                                         | N/A
    F6       | N/A                              | N/A                                | Move to next open Document or Child window (adding SHIFT reverses the order of movement) | N/A
    F7       | N/A                              | N/A                                | N/A                                         | N/A
    F8       | Toggle extend mode, if supported | Toggle Add mode, if supported      | N/A                                         | N/A
    F9       | N/A                              | N/A                                | N/A                                         | N/A
    F10      | Toggle menu bar activation       | N/A                                | N/A                                         | N/A
    F11, F12 | N/A                              | N/A                                | N/A                                         | N/A
    Tab      | Move to next active/editable field | Move to previous active/editable field | Move to next open Document or Child window | Switch to previously used application (holding down the ALT key displays all open applications)
    Alt      | Puts focus on first menu command (e.g. 'File') | N/A                  | N/A                                         | N/A

3.4.3 Control Shortcut Keys

    Key      | Function
    CTRL + Z | Undo
    CTRL + X | Cut
    CTRL + C | Copy
    CTRL + V | Paste
    CTRL + N | New
    CTRL + O | Open
    CTRL + P | Print
    CTRL + S | Save
    CTRL + B | Bold*
    CTRL + I | Italic*
    CTRL + U | Underline*

* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
4 REGRESSION TESTING

4.1 What is Regression Testing
− Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. It is also referred to as verification testing.
− Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors.
− It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
− It is the selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the reparations, and that newly added features have not created problems with previous versions of the software.
− Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.
− Regression testing is a normal part of the program development process. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written.
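As a minimal sketch of the idea (ours; the document does not mandate a particular framework), a regression suite is simply the accumulated set of old test cases, re-run unchanged against every new build. Using JUnit-style assertions, with a hypothetical Tax class standing in for previously working functionality:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical system under test (invented for illustration).
    class Tax {
        static int calculate(int income) { return income * 15 / 100; }
    }

    // Illustrative regression suite: these cases passed on earlier versions.
    // A failure signals that a change elsewhere has broken previously
    // working behaviour.
    public class TaxRegressionTest {

        @Test
        public void standardRateStillCorrect() {
            // Expected values come from the earlier, accepted release.
            assertEquals(150, Tax.calculate(1000));
        }

        @Test
        public void zeroIncomeStillYieldsZeroTax() {
            assertEquals(0, Tax.calculate(0));
        }
    }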
4.2 Test Execution
Test execution is the heart of the testing process. Each time your application changes, you will want to execute the relevant parts of your test plan in order to locate defects and assess quality.

4.2.1 Create Test Cycles
During this stage you decide the subset of tests from your test database that you want to execute. Usually you do not run all the tests at once; at different stages of the quality assurance process, you need to execute different tests in order to address specific goals. A related group of tests is called a test cycle.

Example: You can create a cycle containing basic tests that run on each build of the application throughout development. You can create another set of tests for a particular module in your application; this test cycle includes tests that check that module in depth.

To decide which test cycles to build, refer to the testing goals you defined at the beginning of the process. Also consider issues such as the current state of the application and whether new functions have been added or modified. Following are examples of some general categories of test cycles to consider:
• sanity cycle - checks the entire system at a basic level (breadth rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
• normal cycle - tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
• advanced cycle - tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth) and also test advanced options in the application (depth).
• regression cycle - tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.

4.2.2 Run Test Cycles (Automated & Manual Tests)
Once you have created cycles that cover your testing objectives, you begin executing the tests in the cycle. You perform manual tests using the test steps; testing tools execute automated tests for you. A test cycle is complete only when all tests - automated and manual - have been run.
− With manual test execution, you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.
− During automated test execution, you create a batch of tests and launch the entire batch at once. The testing tool runs the tests one at a time, then imports the results, providing outcome summaries for each test.

4.2.3 Analyze Test Results
After every test run, analyze and validate the test results, identify all the failed steps in the tests, and determine whether a bug has been detected or whether the expected result needs to be updated.

4.3 Change Request

4.3.1 Initiating a Change Request
A user or developer wants to suggest a modification that would improve an existing application, notices a problem with an application, or wants to recommend an enhancement. Any major or minor request is considered a problem with an application and will be entered as a change request.

4.3.2 Types of Change Request
• Bug - the application works incorrectly or provides incorrect information (for example, a letter is allowed to be entered in a number field).
• Change - a modification of the existing application (for example, sorting the files alphabetically by the second field rather than numerically by the first field makes them easier to find).
• Enhancement - new functionality or an item added to the application (for example, a new field, a new report, or a new button).

4.3.3 Priority for the Request
• Low - the application works, but this change would make the function easier or more user friendly.
• High - the application works, but this change is necessary to perform a job. This also applies to any Section 508 infraction.
• Critical - the application does not work; job functions are impaired and there is no work around.

4.4 Bug Tracking
− Locating and repairing software bugs is an essential part of software development.
− Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process.
− Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.

Bug tracking involves two main stages: reporting and tracking.

4.4.1 Report Bugs
Once you execute the manual and automated tests in a cycle, you report the bugs (or defects) that you detected. The bugs are stored in a database so that you can manage them and analyze the status of your application. When you report a bug, you record all the information necessary to reproduce and fix it. You also make sure that the QA and development personnel involved in fixing the bug are notified.

4.4.2 Track and Analyze Bugs
The lifecycle of a bug begins when it is reported and ends when it is fixed, verified, and closed.
− First you report New bugs to the database and provide all necessary information to reproduce, fix, and follow up the bug.
− The Quality Assurance manager or Project manager periodically reviews all New bugs and decides which should be fixed. These bugs are given the status Open and are assigned to a member of the development team.
− Software developers fix the Open bugs and assign them the status Fixed.
− QA personnel test a new build of the application. If a bug does not reoccur, it is Closed; if a bug is detected again, it is reopened.

Communication is an essential part of bug tracking: all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.

4.5 Traceability Matrix
A traceability matrix is created by associating requirements with the products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. Traceability requires unique identifiers for each requirement and product; numbers for products are established in a configuration management (CM) plan.

Traceability ensures completeness: that all lower level requirements derive from higher level requirements, and that all higher level requirements are allocated to lower level requirements. Traceability is also used in managing change and provides the basis for test planning.

SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S". Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected. There can be more things included in a traceability matrix than shown below; below is a simple traceability matrix structure.

In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving the report(s). Determine their information needs, and document the information that will be associated with the requirements when you set up your requirements database or repository.
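Returning to the simple matrix structure mentioned above, a minimal illustrative sketch (ours, with invented identifiers) shows the shape such a report takes: each system requirement traces back to the user requirement it derives from and forward to the tests that verify it.

    User Requirement | System Requirement | Test Case(s)
    U1               | S1, S2             | TC-01, TC-02
    U2               | S3                 | TC-03
    (none)           | S12                | (S12 traces to no user requirement: eliminate it, rewrite it, or correct the traceability)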
5 PHASES OF TESTING

5.1 Introduction
The primary objective of the testing effort is to determine conformance to the requirements specified in the contracted documents. The integration of new code with the internal code is an important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (System Testing, Acceptance Testing, Installation, etc.).

5.2 Types and Phases of Testing

    SDLC Document                                   | QA Document
    Software Requirement Specification              | Requirement Checklist
    Design Document                                 | Design Checklist
    Functional Specification                        | Functional Checklist
    Design Document & Functional Specs              | Unit Test Case Documents
    Design Document & Functional Specs              | Integration Test Case Documents
    Design Document & Functional Specs              | System Test Case Documents
    Unit / System / Integration Test Case Documents | Regression Test Case Documents
    Functional Specs, Performance Criteria          | Performance Test Case Documents
    Software Requirement Specification; Unit / System / Integration / Regression / Performance Test Case Documents | User Acceptance Test Case Documents

The same mapping can be read as a document flow across the life cycle:
• Requirement Study → Software Requirement Specification, reviewed against the Requirement Checklist
• Software Requirement Specification → Functional Specification Document, reviewed against the Functional Specification Checklist
• Functional Specification Document → Architecture Design → Detailed Design Document
• Design and Functional Specification Documents → Unit / Integration / System Test Case Documents
• Unit / Integration / System Test Case Documents → Regression Test Case Document
• Functional Specification Document and Performance Criteria → Performance Test Cases and Scenarios
• Software Requirement Specification, plus all of the above → User Acceptance Test Case Documents / Scenarios

5.3 The V Model

The V model pairs each specification phase on the left-hand side with a corresponding testing phase on the right-hand side:

    Requirements    ↔ Acceptance Testing
    Specification   ↔ System Testing
    Architecture    ↔ Integration Testing
    Detailed Design ↔ Unit Testing
                 Coding

An extended view of the same model adds a review to each left-hand phase and a regression round to each right-hand phase:

    Requirements (Requirements Review)   ↔ Performance Testing / Regression Round 3
    Specification (Specification Review) ↔ System Testing / Regression Round 2
    Architecture (Architecture Review)   ↔ Integration Testing / Regression Round 1
    Detailed Design (Design Review)      ↔ Unit Testing
                 Code (Code Walkthrough)
6 INTEGRATION TESTING

One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has traditionally been limited to "black box" techniques. Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest level subsystems into the complete system. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control.

To be most effective, an integration testing technique should fit well with the overall integration strategy. It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted; integration testing then concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.

However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

6.1 Generalization of module testing criteria
Module testing criteria can often be generalized in several possible ways to support integration testing. The most obvious generalization is to satisfy the module testing criterion in an integration context, in effect using the entire program as a test driver environment for each module. However, this trivial kind of generalization does not take advantage of the differences between module and integration testing; applying it to each phase of a multi-phase integration strategy leads to an excessive amount of redundant testing. More useful generalizations adapt the module testing criterion to focus on interactions between modules, rather than attempting to test all of the details of each module's implementation in an integration context. The statement coverage module testing criterion, in which each statement is required to be exercised during module testing, can be generalized to require each module call statement to be exercised during integration testing.
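As a small illustration of that generalized criterion (our sketch, not from the original text; all names are invented): during integration, the test driver below exercises the call statement from the module under test into its collaborator, while a stub stands in for a component that is not yet integrated.

    // Illustrative integration-test driver and stub.
    // The criterion: every call statement from OrderModule into another
    // component must be exercised at least once during integration testing.

    interface PaymentService {                 // boundary to another component
        boolean charge(int accountId, int amount);
    }

    class OrderModule {
        private final PaymentService payments;
        OrderModule(PaymentService payments) { this.payments = payments; }

        String placeOrder(int accountId, int amount) {
            if (amount <= 0) {
                return "REJECTED";             // no cross-component call on this path
            }
            boolean ok = payments.charge(accountId, amount);  // the call statement
            return ok ? "PLACED" : "DECLINED";
        }
    }

    public class IntegrationDriver {
        public static void main(String[] args) {
            // Stub for the not-yet-integrated payment component.
            PaymentService stub = (accountId, amount) -> amount < 5000;

            OrderModule module = new OrderModule(stub);
            // One test exercises the call statement; a second covers
            // the call's other outcome.
            System.out.println(module.placeOrder(1, 100));   // PLACED  (charge true)
            System.out.println(module.placeOrder(1, 9000));  // DECLINED (charge false)
        }
    }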
Since structured testing at the module level requires that all the decision logic in a module's control flow graph be tested independently, the appropriate generalization to the integration level requires that just the decision logic involved with calls to other modules be tested independently. The design reduction technique helps identify those decision outcomes.

6.2 Design reduction
The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls:
• The sequential rule eliminates sequences of non-call ("white dot") nodes.
• The repetitive rule eliminates top-test loops that are not involved with module calls.
• The conditional rule eliminates conditional statements that do not contain calls in their bodies.
• The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the "top" of the loop from the "bottom". For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit.

Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one. The sequential rule removes one node and one edge from the flow graph, so it leaves the cyclomatic complexity unchanged; however, it does simplify the graph so that the other rules can be applied. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.

6.3 Module design complexity
Rather than testing all decision outcomes within a module independently, structured testing at the integration level focuses on the decision outcomes that are involved with module calls.

6.4 Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.

To extend statement coverage to support incremental integration, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. To form a completely flexible "statement testing" criterion, it is required that all module call statements from one component into a different component be exercised at each integration stage. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort needed to cover each statement of the system at each integration phase.

Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls.

Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. It would take three tests to integrate this system in a single phase. However, since the design predicate decision to call module D from module B has been tested in a previous phase, only two additional tests are required to complete the integration testing: the component module design complexity of module A is 1, and the component module design complexity of module C is 2. Modules B and D are removed from consideration because they do not contain cross-component calls.
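The arithmetic behind those test counts can be stated compactly. In the structured testing literature (our restatement; the formula is not spelled out in the text above), the number of integration tests for a phase is

\[ \text{tests} = \left(\sum_{i=1}^{n} d_i\right) - n + 1 \]

where \(n\) is the number of components considered at that phase and \(d_i\) is the component module design complexity of each. For the phase described above, \(d_A = 1\), \(d_C = 2\), \(n = 2\), giving \((1 + 2) - 2 + 1 = 2\) additional tests, matching the two tests noted for Figure 7-7.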
7 ACCEPTANCE TESTING

7.1 Introduction - Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria, and thus whether the customer should accept the system. The customer knows what is required from the system to achieve value in the business, and is the only person qualified to make that judgment; the customer, and not the developer, should always do acceptance testing. The main types of software testing are: Component, Interface, System, Acceptance, and Release.

Acceptance testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus: systems testing checks that the system that was specified has been delivered; acceptance testing checks that the system delivers what was requested. The testing can be based upon the User Requirements Specification, to which the system should conform. The final part of the UAT can also include a parallel run to prove the system against the current system.

User Acceptance Testing is a critical phase of any 'systems' project and requires significant participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'Acceptance' will be achieved: the test procedures that lead to formal 'acceptance' of new or changed systems.

7.2 Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs. As in any system, though, problems will arise, and it is important to have determined what the expected and required responses from the various parties concerned will be, including Users, Project Team, Vendors and possibly Consultants / Contractors.
In order to agree what such responses should be, the End Users and the Project Team need to develop and agree a range of 'Severity Levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business / commercial impact, of a problem with the system. Here is an example which has been used successfully, where '1' is the most severe and '6' has the least impact:
1. 'Show Stopper' - it is impossible to continue with the testing because of the severity of this error / bug.
2. Critical Problem - testing can continue, but we cannot go into production (live) with this problem.
3. Major Problem - testing can continue, but live, this feature will cause severe disruption to business processes in live operation.
4. Medium Problem - testing can continue, and the system is likely to go live with only minimal departure from agreed business processes.
5. Minor Problem - both testing and live operations may progress. This problem should be corrected, but little or no changes to business processes are envisaged.
6. 'Cosmetic' Problem - e.g. fonts, colours, pitch size. However, if such features are key to the business requirements, they will warrant a higher severity level.

The allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples is agreed in advance, to ensure that there are no fundamental areas of disagreement.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1, found during testing, receive priority response and that all testing will cease until such level 1 problems are resolved. Even where the severity levels and the responses to each have been agreed by all parties, it must also be agreed between End User and vendor what the maximum number of acceptable 'outstandings' in any particular category will be. Again, if such limits are known in advance, your organisation is forewarned; prior consideration of this is advisable.

N.B. In any event, any and all fixes from the software developers must be subjected to rigorous System Testing and, where appropriate, Regression Testing. Caution: because no system is entirely fault free, users may in some cases agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. Finally, it is crucial to agree the Criteria for Acceptance.

7.3 Conclusion
Hence the goal of acceptance testing should be to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.
8 SYSTEM TESTING

8.1 Introduction to System Testing
It is often agreed that testing is essential to manufacture reliable products; however, the validation process does not often receive the required attention. For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs, and improved 'time to market' for new systems. Systems with software components, and software-intensive systems, are more and more complex every day; industry sectors such as telecom, automotive, railway, and aeronautical and space are good examples. The validation process is close to other activities such as conformance, acceptance and qualification testing.

A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle. In this paper we examine the results of applying several types of Poisson-process models to the development of a large system for which system test was performed in two parallel tracks.

System Testing is more than just functional testing. The main goal is to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, the question is "Did we build the product right?" - and not just "Did we build the right product?" The difference between function testing and system testing is that the focus is now on the whole application and its environment; this does not mean that single functions of the whole program are tested again, because that would be too redundant. Therefore the program has to be given completely, and the tests should be done in the environment for which the program was designed, like a multi-user network or whatever applies. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely; while it is one of the most incomplete test methods, it is nevertheless one of the most important.

System testing can, and should, also encompass many other types of testing, such as:
• security
• load/stress
• performance
• browser compatibility
• localisation

These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games, integrating with whichever type of development methodology you are applying. We test that the functionality of your systems meets your specifications, testing for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully.

8.2 Need for System Testing
Effective software testing, as a part of software engineering, has been proven over the last three decades to deliver real business benefits, including:
• reduction of costs - reduce rework and support overheads.
• increased productivity - more effort spent on developing new functionality and less on "bug fixing" as quality increases.
• reduced commercial risks - if it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition step forward with confidence?

These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. You will have a personal interest in the system's success, in which case it is only human for your objectivity to be compromised. Your test strategy must take into consideration the risks to your organisation, commercial and technical.

8.3 System Testing Techniques
The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional; they can be used in any stage that tests the system as a whole (acceptance, installation, etc.), and they are not mutually exclusive.

Structural techniques:
• stress testing - test larger-than-normal capacity in terms of transactions, data, users, speed, etc.
• execution testing - test performance in terms of speed, precision, etc.
• recovery testing - test how the system recovers from a disaster, how it handles corrupted data, etc.
• operations testing - test how the system fits in with existing operations and procedures in the user organization.
• compliance testing - test adherence to standards.
• security testing - test security requirements.

Functional techniques:
• requirements testing - the fundamental form of testing: makes sure the system does what it's required to do.
• regression testing - make sure unchanged functionality remains unchanged.
• error-handling testing - test required error-handling functions (usually user error).
• manual-support testing - test that the system can be used properly; includes user documentation.
• intersystem handling testing - test that the system is compatible with other systems in the environment.
• control testing - test required control mechanisms.
• parallel testing - feed the same input into two versions of the system to make sure they produce the same output.

Unit Testing
The goal is to evaluate some piece (file, program, module, component, etc.) in isolation. Techniques can be structural or functional. In practice, unit testing is usually ad-hoc and looks a lot like debugging, but more structured approaches exist.

8.4 Functional techniques
• input domain testing - pick test cases representative of the range of allowable input, including high, low, and average values.
• equivalence partitioning - partition the range of allowable input so that the program is expected to behave similarly for all inputs in a given partition, then pick a test case from each partition (illustrated in the sketch at the end of this section).
• boundary value - choose test cases with input values at the boundary (both inside and outside) of the allowable range.
• syntax checking - choose test cases that violate the format rules for input.
• special values - design test cases that use input values that represent special situations.
• output domain testing - pick test cases that will produce output at the extremes of the output domain.

Structural techniques:
• statement testing - ensure the set of test cases exercises every statement at least once.
• branch testing - each branch of an if/then statement is exercised.
• conditional testing - each truth statement is exercised both true and false.
• expression testing - every part of every expression is exercised.
• path testing - every path is exercised (impossible in practice).

Error-based techniques - the basic idea is that if you know something about the nature of the defects in the code, you can estimate whether or not you have found all of them:
• fault seeding - put a certain number of known faults into the code, then test until they are all found.
• mutation testing - create mutants of the program by making single changes, then run test cases until all mutants have been killed.
• historical test data - an organization keeps records of the average numbers of defects in the products it produces, then tests a new product until the number of defects found approaches the expected number.

8.5 Conclusion
Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration testing, especially with the top-down method.
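As promised above, here is a small sketch of equivalence partitioning and boundary value selection (our illustration; the method name and the valid range 18-65 are invented). Three partitions (below, inside, above the range) plus the values just inside and just outside each edge yield a compact test set:

    // Illustrative equivalence partitioning / boundary value selection
    // for a hypothetical rule: valid ages are 18..65 inclusive.
    public class AgeValidation {

        static boolean isValidAge(int age) {
            return age >= 18 && age <= 65;
        }

        public static void main(String[] args) {
            // One representative per equivalence partition:
            System.out.println(isValidAge(10));  // below-range partition -> false
            System.out.println(isValidAge(40));  // in-range partition    -> true
            System.out.println(isValidAge(80));  // above-range partition -> false

            // Boundary values, just inside and just outside each edge:
            System.out.println(isValidAge(17));  // false
            System.out.println(isValidAge(18));  // true
            System.out.println(isValidAge(65));  // true
            System.out.println(isValidAge(66));  // false
        }
    }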
"What will this bit of code do?" Or.9 9. but not relevant in most programming projects. Unit tests that isolate clusters of objects for testing are doubly useful. or which objects form a cluster. Hence: Unit tests isolate clusters of objects for future developers. along with the expected results and pass/fail date. then there is a high chance that not every component of the new code will get tested.1 Unit Testing Introduction to Unit Testing Unit testing. in the language of object oriented programming. and they also identify those segments of code that are related. In a sense. Performance Testing Process & Methodology 62 - Proprietary & Confidential - . interactions of objects are the crux of any object oriented design. because they test for failures. so the art is to define the unit test on the methods that cannot be checked by inspection. then the tests will be trivial and the objects might pass the tests. if the scope is too broad. Need for Unit Test How do you know that a method doesn't need a unit test? First. "What will these clusters of objects do?" The crucial issue in constructing a unit test is scope. It's important. Certainly. Likewise. People who revisit the code will use the unit tests to discover which objects are related. Usually this is the case when the method involves a cluster of objects. which is not an effective test strategy. The unit test will motivate the code that you write. If the scope is too narrow. The developer should know when this is the case. If error handling is performed in a method. Levels of Unit Testing •UNIT •100% code coverage • INTEGRATION • SYSTEM • • ACCEPTANCE • MAINTENANCE AND REGRESSION Concepts in Unit Testing: •The most 'micro' scale of testing. Life Cycle Approach to Testing Testing will occur throughout the project lifecycle i. Generally. •Typically done by the programmer and not by testers. • As it requires detailed knowledge of the internal program design and code.e. any method that can break is a good candidate for having a unit test. The danger of not implementing a unit test on every method is that the coverage may be incomplete. The programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code.Another good litmus test is to look at the code and see if it throws an error or catches an error. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application. Just because we don't test every method explicitly doesn't mean that methods can get away with not being tested. •To test particular functions or code modules. then that method can break. • To uncover an as-yet undiscovered error . Performance Testing Process & Methodology 63 Proprietary & Confidential - .. • Not always easily done unless the application has a well-designed architecture with tight code. from Requirements till User Acceptance Testing. because it may break at some time. and • Prepare a test case with a high probability of finding an as-yet undiscovered error..The main Objective to Unit Testing are as follows : •To execute a program with the intent of finding an error. and then the unit test will be there to help you fix it. 
9.2 Unit Testing - Flow

A test driver exercises the module under test through its interface, using the test cases to examine its local data structures, boundary conditions, independent paths and error-handling paths.

Types of errors detected
• Errors in data structures
• Performance errors
• Logic errors
• Validity of alternate and exception flows
• Errors identified at the analysis/design stages

Unit Testing - Black Box Approach
• Field level checks
• Field level validations
• User interface checks
• Functional level checks

Unit Testing - White Box Approach
• Statement coverage
• Decision coverage
• Condition coverage
• Multiple condition coverage (nested conditions)
• Condition/decision coverage
• Path coverage

Unit Testing - Field Level Checks
• Null / not null checks
• Uniqueness checks
• Length checks
• Date field checks
• Numeric checks
• Negative checks

Unit Testing - Field Level Validations
• Test all validations for an input field
• Date range checks (from date / to date)
• Date check validation against the system date

Unit Testing - User Interface Checks
• Readability of the controls
• Tool tip validation
• Ease of use of the interface
• Tab-related checks
• User interface dialogs
• GUI compliance checks

Unit Testing - Functionality Checks
• Screen functionalities
• Field dependencies
• Auto-generation
• Algorithms and computations
• Normal and abnormal terminations
• Specific business rules, if any

Unit Testing - Other Measures
• Function coverage
• Loop coverage
• Race coverage

9.3 Execution of Unit Tests

Method for statement coverage:
- Design a test case for every statement to be executed
- Select the unique set of test cases

Statement coverage reports whether each executable statement is encountered. It is also known as line coverage, segment coverage and basic block coverage (basic block coverage is the same as statement coverage, except that the unit of code measured is each sequence of non-branching statements).

Example of unit testing:

    int invoice(int x, int y) {
        int d1, d2, s;
        s = 5*x + 10*y;
        if (s < 200)
            d1 = 100;
        else if (s < 1000)
            d1 = 95;
        else
            d1 = 80;
        if (x <= 30)
            d2 = 100;
        else
            d2 = 90;
        return (s*d1*d2/10000);
    }

Advantages of statement coverage:
- It can be applied directly to object code and does not require processing of the source code
- Performance profilers commonly implement this measure

Disadvantages of statement coverage:
- It is insensitive to some control structures (e.g. the number of loop iterations)
- It does not report whether loops reach their termination condition
- It is completely insensitive to the logical operators (|| and &&)

Method for decision coverage:
- Design a test case for the pass/failure of every decision point
- Select the unique set of test cases

Decision coverage reports whether the Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate, regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers and interrupt handlers. It is also known as branch coverage, all-edges coverage, basis path coverage and decision-decision-path testing; "basis path" testing selects paths that achieve decision coverage.
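The three calls below are one possible minimal test set for the invoice() example above: between them they execute every assignment to d1 and d2, achieving statement coverage, and each of the three decisions evaluates both true and false, so they achieve decision coverage as well. The expected values follow directly from the formula, using integer arithmetic throughout.

    #include <assert.h>

    int invoice(int x, int y);  /* compiled together with the example above */

    int main(void) {
        /* x=2,  y=1:   s=20   -> d1=100 (s<200),  d2=100 (x<=30) */
        assert(invoice(2, 1) == 20);       /* 20*100*100/10000    */

        /* x=40, y=10:  s=300  -> d1=95 (s<1000),  d2=90 (x>30)   */
        assert(invoice(40, 10) == 256);    /* 2565000/10000 = 256 */

        /* x=100,y=100: s=1500 -> d1=80 (s>=1000), d2=90 (x>30)   */
        assert(invoice(100, 100) == 1080); /* 10800000/10000      */
        return 0;
    }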
The advantage of decision coverage is simplicity, without the problems of statement coverage. Its disadvantage is that it ignores branches within Boolean expressions which occur due to short-circuit operators.

Method for condition coverage:
- Test that every condition (sub-expression) in a decision evaluates both true and false
- Select the unique set of test cases

Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or where they occur. Condition coverage measures the sub-expressions independently of each other.

Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical-operator truth table for that condition. Its disadvantages are:
- It is tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions
- The number of test cases required can vary substantially among conditions that have similar complexity

Condition/decision coverage is a hybrid measure composed of the union of condition coverage and decision coverage. It has the advantage of simplicity, but without the shortcomings of its component measures. (A short worked example contrasting these measures appears at the end of this chapter.)

Path coverage reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit. Also known as predicate coverage, which views paths as possible combinations of logical conditions, path coverage has the advantage of requiring very thorough testing.

Function coverage reports whether you invoked each function or procedure. It is useful during preliminary testing to assure at least some coverage in all areas of the software; broad, shallow testing finds gross deficiencies in a test suite quickly.

Loop coverage reports whether you executed each loop body zero times, exactly once, twice, and more than twice (consecutively). For do-while loops, loop coverage reports whether you executed the body exactly once, and more than once. The valuable aspect of this measure is that it determines whether while-loops and for-loops execute more than once - information not reported by other measures.

Race coverage reports whether multiple threads execute the same code at the same time. It helps detect failures to synchronize access to resources, and is useful for testing multi-threaded programs such as an operating system.

9.4 Conclusion

Testing, irrespective of the phase, should encompass the following:
- The cost of failure associated with defective products being shipped to and used by customers is enormous
- Find out whether the integrated product works as per the customer requirements
- Evaluate the product from an independent perspective
- Identify as many defects as possible before the customer finds them
- Reduce the risk of releasing the product
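To close this chapter, here is the promised worked example contrasting the finer-grained measures on a single compound decision. The function and its names are invented purely for illustration:

    #include <stdio.h>

    /* Hypothetical guard: one decision containing two conditions. */
    int can_ship(int in_stock, int paid) {
        if (in_stock && paid)
            return 1;
        return 0;
    }

    int main(void) {
        /* Decision coverage: the whole predicate true once, false once. */
        printf("%d\n", can_ship(1, 1));   /* decision true  */
        printf("%d\n", can_ship(1, 0));   /* decision false */

        /* Condition coverage additionally needs in_stock to be false.
           Note the short-circuit: when in_stock is 0, 'paid' is never
           evaluated, the branch that decision coverage cannot see. */
        printf("%d\n", can_ship(0, 1));

        /* Multiple condition coverage: the remaining truth-table row. */
        printf("%d\n", can_ship(0, 0));
        return 0;
    }

The first two calls satisfy decision coverage but leave in_stock untested in its false outcome; three calls give condition/decision coverage; all four rows of the truth table are needed for multiple condition coverage.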
10 Test Strategy

10.1 Introduction

This chapter provides a better insight into the test strategy and its methodology. "Test approach" and "test architecture" are synonyms for test strategy. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The testing strategy defines the objectives of all test stages and the techniques that apply; it also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in support of, the test strategy. Test management is also concerned with both test resource and test environment management.

10.2 Key Elements of Test Management

Test organization - the set-up and management of a suitable test organizational structure and explicit role definition. Test organization also involves the determination of configuration standards and the definition of the test environment.

Test planning - the requirements definition and design specifications facilitate the identification of major test items, and these may necessitate the test strategy being updated. The project framework under which the testing activities will be carried out is reviewed, high-level test phase plans are prepared and resource schedules considered. A detailed test plan and schedule is prepared, with key test responsibilities indicated.

Test specifications - required for all levels of testing and covering all categories of test. The required outcome of each test must be known before the test is attempted.

Unit, integration and system testing - configuration items are verified against the appropriate specifications and in accordance with the test plan.

Test monitoring and assessment - ongoing monitoring and assessment of the integrity of the development and construction. The status of the configuration items should be reviewed against the phase plans and test progress reports prepared, providing some assurance of the verification and validation activities.

Product assurance - the decision to negotiate the acceptance testing program and the release and commissioning of the service product is subject to the 'product assurance' role being satisfied with the outcome of the verification activities.

The test environment should also be under configuration control, and test data and results stored for future evaluation.
Product assurance may oversee some of the test activity and may participate in process reviews.

Fitness for purpose checklist:
• Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
• Does the test plan prescribe the approach to be taken for intended test activities, identifying:
  - the items to be tested
  - the testing to be performed
  - test schedules
  - resource and facility requirements
  - reporting requirements
  - evaluation criteria
  - risks requiring contingency measures?
• Are test processes and practices reviewed regularly to assure that the testing processes continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.

Traditionally, the responsibility for testing and commissioning is buried deep within the supply chain, as a sub-contract of a sub-contract. A common criticism of construction programmes is that insufficient time is allocated to the testing and commissioning of the building systems, together with the involvement and subsequent training of the facilities management team. Testing and commissioning is often considered by teams a secondary activity and given a lower priority, particularly as pressure builds on the programme towards completion. The time necessary for testing and commissioning will vary from project to project, depending upon the complexity of the systems and services that have been installed. The Project Sponsor should ensure that the professional team and the contractor consider realistically how much time is needed; sufficient time must be dedicated to testing and commissioning, as ensuring that the systems function correctly is fairly fundamental to the project's success or failure. It is possible to gain greater control of this process and the associated risk through the use of specialists, such as systems integrators, who can be appointed as part of the professional team.

10.3 Test Strategy Flow

Test cases and test procedures should manifest the test strategy.
Test Strategy - Selection

Selection of the test strategy is based on the following factors:
- The product: in the example used here, an application that helps people and teams of people in making decisions
- The key potential risks:
  - the product suggests wrong ideas
  - people will use the product incorrectly
  - incorrect comparison of scenarios
  - scenarios may be corrupted
  - the product is unable to handle complex decisions
- Determination of the actual risk

Test Strategy - Execution
- Understand the underlying decision algorithm, and generate a parallel decision analyzer using Perl or Excel that will function as a reference for high-volume testing of the application (i.e. simulate the algorithm in parallel)
- Capability-test each major function
- Generate large numbers of decision scenarios, using the GUI test automation system or through the direct generation of scenario files that can be loaded into the product during test
- Create complex scenarios and compare them; test with decision scenarios that are near the limit of complexity allowed by the product
- Test the product for the risk of silent failures or corruptions in decision analysis
- Test for sensitivity to user error: review the documentation and help, and review the design of the user interface and functionality for its sensitivity to user error

Issues in the execution of this test strategy:
- The difficulty of understanding and simulating the decision algorithm
- The risk of coincidental failure of both the simulation and the product
- The difficulty of automating decision tests

10.4 General Testing Strategies
• Top-down
• Bottom-up
• Thread testing
• Stress testing
• Back-to-back testing (a sketch of this strategy follows at the end of this section)

10.5 Need for Test Strategy

The objective of testing is to reduce the risks inherent in computer systems. The strategy must address those risks and present a process that can reduce them; the concerns about risk then establish the objectives for the test process. It is worth noting that analysis and design errors account for around 64% of defects, against 36% for coding errors. The two components of the testing strategy are the test factors and the test phases:
- Test factor - the risk or issue that needs to be addressed as part of the test strategy. The strategy selects those factors that need to be addressed in the testing of a specific application system. Not all test factors are applicable to all software systems; the development team will need to select and rank the test factors for the specific software system being developed.
- Test phase - the phase of the systems development life cycle in which testing will occur. The test phases vary with the development methodology used; for example, the test phases in a traditional waterfall life cycle will be much different from the phases in a rapid application development (RAD) methodology.

10.6 Developing a Test Strategy

The test strategy will need to be customized for any specific software system. Four steps must be followed to develop a customized test strategy:
1. Select and rank the test factors
2. Identify the system development phases
3. Identify the business risks associated with the system under development
4. Place the risks in the matrix

Test Factors \ Test Phases:  Requirements | Design | Build | Dynamic Test | Integrate | Maintain
(The ranked test factors form the rows of the matrix, and the identified business risks are recorded against the phases in which they must be addressed.)
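Before concluding this chapter, here is the promised sketch of the back-to-back strategy listed in section 10.4, which is also the shape of the "parallel decision analyzer" described above. The two routines below are invented stand-ins: one represents the product's calculation, the other an independently written reference, and the harness feeds both the same generated scenarios and flags any disagreement.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the routine under test. */
    int product_score(int risk, int cost) { return 10 * risk + cost; }

    /* Independently written reference implementation. */
    int reference_score(int risk, int cost) {
        int s = cost;
        for (int i = 0; i < risk; i++)
            s += 10;
        return s;
    }

    int main(void) {
        srand(42);  /* fixed seed so the run is repeatable */
        for (int i = 0; i < 10000; i++) {
            int risk = rand() % 100, cost = rand() % 1000;
            if (product_score(risk, cost) != reference_score(risk, cost)) {
                printf("MISMATCH: risk=%d cost=%d\n", risk, cost);
                return 1;
            }
        }
        printf("10000 scenarios agreed\n");
        return 0;
    }

Note the issue recorded above: if the simulation and the product share the same misunderstanding of the algorithm, both will fail coincidentally and the comparison will pass.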
10.7 Conclusion

The test strategy should be developed in accordance with the business risks associated with the software when the test team develops its test tactics. The strategy accordingly focuses on those risks and thereby establishes the objectives for the test process. The test team needs to acquire and study the test strategy, questioning the following:
- What is the relationship of importance among the test factors?
- Which of the high-level risks are the most significant?
- What damage can be done to the business if the software fails to perform correctly?
- What damage can be done to the business if the software is not completed on time?
- Who are the individuals most knowledgeable in understanding the impact of the identified business risks?

Hence the test strategy must address the risks and present a process that can reduce those risks.

11 Test Plan

11.1 What is a Test Plan?

A test plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Purpose of preparing a Test Plan

A test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation; it should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The main purpose of preparing a test plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables of the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement with the contents of the test plan; this also helps in case of any dispute during the course of the project (especially between the developers and the testers).

Contents of a Test Plan
1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
    a. Procedures
    b. Templates
    c. Standards / Guidelines
14. Annexure
15. Sign-Off

11.2 Contents (in detail)

Purpose - this section should contain the purpose of preparing the test plan.

Scope - this section should describe the areas of the application which are to be tested by the QA team, and specify those areas which are definitely out of scope (screens, database, mainframe processes, etc.).

Test Approach - this section contains details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).

Entry Criteria - this section explains the various pre-requisite steps to be performed before the start of a test, for example: timely environment set-up, starting the web server / app server, successful implementation of the latest build, database refresh, etc.

Resources - this section should list the people who will be involved in the project, their designations, etc.

Tasks / Responsibilities - this section describes the tasks to be performed and the responsibilities assigned to the various members of the project.

Exit Criteria - contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh, etc.

Schedules / Milestones - this section deals with the final delivery date and the various milestone dates to be met in the course of the project.

Hardware / Software Requirements - this section contains the details of the PCs / servers required (with their configuration) to install the application or perform the testing, specific software that needs to be installed on the systems to get the application running or to connect to the database, connectivity-related issues, etc.

Risks & Mitigation Plans - this section should list all the possible risks that can arise during the testing, and the mitigation plans that the QA team plans to implement in case a risk actually turns into a reality.

Tools to be used - this section lists the testing tools or utilities (if any) that are to be used in the project, e.g. WinRunner, Test Director, WinSQL, PCOM.

Deliverables - this section contains the various deliverables that are due to the client at various points of time (i.e. daily, weekly, start of the project, end of the project, etc.). These could include test plans, test procedures, test scripts, test cases, test matrices, status reports, etc. Templates for all of these could also be attached.

References
- Procedures
- Templates (client-specific or otherwise)
- Standards / Guidelines (e.g. QView)
- Project-related documents (RSD, ADD, FSD, etc.)

Annexure - this could contain embedded documents or links to documents which have been / will be used in the course of testing (e.g. templates used for reports, test cases, etc.). Referenced documents can also be attached here.

Sign-Off - this should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the test plan.

12 Test Data Preparation - Introduction

A system is programmed by its data. Testing consumes and produces large amounts of data: data describes the initial conditions for a test, forms the input, and is the medium through which the tester influences the software. Data is manipulated, extrapolated, summarized and referenced by the functionality under test, which finally spews forth yet more data to be checked against expectations. Data is a crucial part of most functional testing.

Testing is the process of creating, implementing and evaluating tests. Effective quality control testing requires some basic goals and understanding:
- You must understand what you are testing. If you're testing a specific functionality, you must know how it is supposed to work.
- You must understand the limits inherent in the tests themselves.
- You should have a definition of what success and failure are. In other words, is close enough good enough?
- You should have a good idea of a methodology for the test: design test cases, deciding exactly what you are testing and testing for, what steps are required, the way the test is going to be run and applied, the simulated conditions used, etc. Tests must be planned and thought out ahead of time; the more formal a plan, the better.
- You must have a consistent schedule for testing: performing a specific set of tests at appropriate points in the process is more important than running the tests at a specific time.

Test data should be prepared which is representative of normal business transactions; actual customer names or contact details should not be used for such tests. Each separate test should be given a unique reference number which identifies the business process being recorded, the persons involved in the testing process and the date the test was carried out. This enables the monitoring and testing reports to be coordinated with any feedback received. It is recommended that a full test environment be set up for use in the applicable circumstances. Preparation of the data can also help to focus the business where requirements are vague; the first stage of such a project is data preparation.

Roles of Data in Functional Testing

This chapter sets out to illustrate some of the ways that data can influence the test process, and shows that testing can be improved by a careful choice of input data. Good test data can be structured to improve understanding and testability; correctly chosen, it can reduce maintenance effort and allow flexibility, and regression testing and automated test maintenance can be made speedier and easier by using good data. The chapter concentrates on input data, as input data has the greatest influence on functional testing and is the simplest to manipulate, rather than on output data or the transitional states the data passes through during processing. It also concentrates most on data-heavy applications: those which use databases or are heavily influenced by the data they hold. It does not consider areas where data is important to non-functional testing, such as operational profiles, massive datasets and environmental tuning.

A SYSTEM IS PROGRAMMED BY ITS DATA

Many modern systems allow tremendous flexibility in the way their basic functionality can be used. A system can be configured to fit several business models, work (almost) seamlessly with a variety of cooperating systems, and provide tailored experiences to a host of different users. A business may look to an application's configurability to allow it to keep up with the market without being slowed by the development process; an individual may look for a personalized experience from commonly-available software. Configuration data can dictate control flow, data manipulation, presentation and user interface.

GOOD DATA CAN HELP TESTING STAY ON SCHEDULE

An easily comprehensible and well-understood dataset is a tool to help communication. Without it, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results.

FUNCTIONAL TESTING SUFFERS IF DATA IS POOR

Tests with poor data may not describe the business model effectively. They may obscure problems or avoid them altogether; they may be hard to maintain, take longer to execute, or require lengthy and difficult setup. Poor data tends to result in poor tests.

GOOD DATA IS VITAL TO RELIABLE TEST RESULTS

An important goal of functional testing is to allow the test to be repeated with the same result, and varied to allow diagnosis. Good data allows diagnosis and effective reporting, allows tests to be repeated with confidence, and can greatly assist in speedy diagnosis and rapid retesting.

A formal test plan is a document that provides and records important information about a test project, for example:
- project and quality assumptions
- project background information
- resources
- schedule & timeline
- entry and exit criteria
- test milestones
- tests to be performed
- use cases and/or test cases
12.1 Criteria for Test Data Collection

This section specifies the description of the test data needed to test the recovery of each business process. The 'Preparing for a Possible Emergency' phase of the business continuity planning (BCP) process involves the identification and implementation of strategies for back-up and recovery of data files or a part of a business process. It is inevitable that these back-up and recovery processes will involve additional costs.

Prepare Budget for Testing Phase

Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. This section of the BCP will contain a list of the testing phase activities and a cost for each. It should be noted wherever part of the costs is already incorporated within the organization's overall budgeting process. Where the costs are significant, they should be approved separately, with a specific detailed budget for the establishment costs and the ongoing maintenance costs. Critical parts of the business process, such as the IT systems, may require particularly expensive back-up strategies to be implemented.

Identify Who is to Conduct the Tests

In order to ensure consistency of the testing process throughout the organization, one or more members of the BCP team should be nominated to coordinate the testing process within each business unit. This section of the BCP should contain the names of the BCP team members nominated to coordinate the testing process, and should also list the duties of the appointed coordinators.

Identify Who is to Control and Monitor the Tests

In order to ensure consistency when measuring the results, the tests should be independently monitored. This task would normally be carried out by a nominated member of the business recovery team or a member of the BCP team. This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization, together with a list of the duties to be undertaken by the monitoring staff.

Prepare Feedback Questionnaires

It is vital to receive feedback from the persons managing and participating in each of the tests. This feedback will hopefully enable weaknesses within the business recovery process to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the testing process. The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical; this enables observations and comments to be recorded while the event is still fresh in the person's mind. This section of the BCP should contain a template for a feedback questionnaire.

Training the Core Testing Team for each Business Unit

In order for the testing process to proceed smoothly, the core testing team must be trained in the emergency procedures. This training may be integrated with the training phase or handled separately. This section of the BCP should contain a list of the core testing team for each business unit who will be responsible for coordinating and undertaking the business recovery testing process.

Conducting the Tests

The tests must be carried out under authentic conditions, and all participants must take the process seriously. Each business process should be thoroughly tested, and the coordinator should ensure that each business unit observes the necessary rules associated with carrying out the testing process within a realistic environment. All persons who are likely to be involved with recovering a particular business process in the event of an emergency should participate in the testing process, and it should be mandatory for the management of a business unit to be present when that unit is involved in conducting the tests. Clear instructions must be given to the core testing team regarding the simulated conditions which have to be observed; those coordinating and monitoring the testing will endeavour to ensure that the simulated environments are maintained throughout the testing process. This section of the BCP is to contain a list of each business process, with a test schedule and information on the simulated conditions being used.

Test each part of the Business Recovery Process

In so far as it is practical, each critical part of the business recovery process should be fully tested, and every part of the procedures included in the recovery process is to be tested to ensure validity and relevance. This is probably best handled in a workshop environment and should be presented by the persons responsible for developing the emergency procedures.

Test Accuracy of Employee and Vendor Emergency Contact Numbers

During the testing process, the accuracy of employee and vendor emergency contact information is to be re-confirmed, and all contact numbers validated for all involved employees. This is particularly important for management and key employees who are critical to the success of the recovery process. This activity will usually be handled by the HRM department or division.

Assess Test Results

Prepare a full assessment of the test results for each business process. The following questions may be appropriate:
- Were the objectives of the business recovery process and the testing process met? If not, provide further comment.
- Did the tests proceed without any problems? If not, provide further comment.
- Were the simulated conditions reasonably "authentic"? If not, provide further comment.
- Was the test data representative? If not, provide further comment.
- What were the main comments received in the feedback questionnaires?
Each test should be assessed as either fully satisfactory, adequate, or requiring further testing.

Training Staff in the Business Recovery Process

All staff should be trained in the business recovery process. This is particularly important when the procedures are significantly different from those pertaining to normal operations. For example, it may be necessary to carry out some processes manually if the IT system is down for any length of time; these manual procedures must be fully understood by the persons who are required to carry them out. All new or revised processes must be explained carefully to the staff, and each member of staff will be given information on their role and responsibilities applicable in the event of an emergency. Consideration should also be given to the development of a comprehensive corporate awareness programme for communicating the procedures for the business recovery process.

Managing the Training Process

For the BCP training phase to be successful, it has to be both well managed and structured. It is necessary to identify the objective and scope of the training, what specific training is required and who needs it, and to prepare a budget for the additional costs associated with this phase. The training should be carefully planned and delivered on a structured basis, and it should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved. This enables the training to be consistent and organized in a manner where the results can be measured, and the training fine-tuned. Training may be delivered using either in-house or external resources, depending upon available skills and related costs.

Develop Objectives and Scope of Training

The objectives and scope of the BCP training activities are to be clearly stated within the plan. The objectives for the training could be as follows: "To train all staff in the particular procedures to be followed during the business recovery process". The scope could be along the following lines: "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. The training will cover all aspects of the business recovery activities section of the BCP, including IT systems recovery". This section of the BCP should contain a description of the objectives and scope of the training phase.

Training Needs Assessment

The plan must specify which person or group of persons requires which type of training. For larger organizations it may be practical to carry out the training in a classroom environment; for smaller organizations the training may be better handled in a workshop style. This section of the BCP will identify, for each business process, what type of training is required and which persons or groups of persons need to be trained. This can be a time-consuming task, and unless priority is given to critical training programmes, it could delay the organization in reaching an adequate level of preparedness.

Training Materials Development Schedule

Once the training needs have been identified, it is necessary to specify and develop suitable training materials. This section of the BCP contains information on each of the training programmes, with details of the training materials to be developed.

Prepare Training Schedule

Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up. This section of the BCP contains an overview of the training schedule and the groups of persons receiving the training.

Communication to Staff

Once the training has been arranged, employees must be advised about the training programmes they are scheduled to attend. This section of the BCP contains a draft communication to be sent to each member of staff advising them about their training schedule. The communication should provide for feedback from the staff member where the training dates given are inconvenient. A separate communication should be sent to the managers of the business units, advising them of the proposed training schedule to be attended by their staff.

Assessing the Training

The individual BCP training programmes and the overall BCP training process should be assessed to ensure their effectiveness and applicability. This information will be gathered from the trainers, and also from the trainees through the completion of feedback questionnaires.

Feedback Questionnaires

It is vital to receive feedback from the persons managing and participating in each of the training programmes. This feedback will enable weaknesses within the business recovery process, or within the training, to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the training process. The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical; this enables observations and comments to be recorded while the event is still fresh in the person's mind. This section of the BCP should contain a template for a feedback questionnaire for the training phase.

Assess Feedback

The completed questionnaires from the trainees, plus the feedback from the trainers, should be assessed. The key issues raised by the trainees should be noted, and consideration given to whether the findings are critical to the process or not. Identified weaknesses should be notified to the BCP team leader and the process strengthened accordingly. If a significant number of negative issues are raised, consideration should be given to possible re-training once the training materials, the process, or the training itself have been improved. This section of the BCP will contain a format for assessing the training feedback.

Prepare Budget for Training Phase

Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. However well justified, training incurs additional costs, and these should be approved by the appropriate authority within the organization; depending upon the cross-charging system employed by the organization, the training costs will vary greatly. This section of the BCP will contain a list of the training phase activities and a cost for each. It should be noted wherever part of the costs is already incorporated within the organization's overall budgeting process.

Keeping the Plan Up-to-date

Changes to most organizations occur all the time. Products and services change, as does their method of delivery. The increase in technology-based processes over the past ten years, and particularly within the last five, has significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. These changes are likely to continue, and probably the only certainty is that the pace of change will continue to increase. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This chapter deals with updating the plan and the managed process which should be applied to this updating activity.

Maintaining the BCP

It is necessary for the BCP updating process to be properly structured and controlled, due to the level of complexity contained within the BCP. Whenever changes are made to the BCP, they are to be fully tested, and appropriate amendments made to the training materials. This will involve the use of formalized change control procedures under the control of the BCP team leader.

Change Controls for Updating the Plan

It is recommended that formal change controls are implemented to cover any changes required to the BCP. A change request form / change order form is to be prepared and approved in respect of each proposed change to the BCP. This section of the BCP will contain a change request form / change order to be used for all such changes.

Responsibilities for Maintenance of Each Part of the Plan

Each part of the plan will be allocated to a member of the BCP team, or a senior manager within the organization, who will be charged with responsibility for updating and maintaining that part of the plan. The BCP team leader will remain in overall control of the BCP, but business unit heads will need to keep their own sections of the BCP up to date at all times. It is important that the relevant BCP coordinator and the business recovery team are kept fully informed regarding any approved changes to the plan. Similarly, the HRM department will be responsible for ensuring that all emergency contact numbers for staff are kept up to date.

Test All Changes to Plan

The BCP team will nominate one or more persons who will be responsible for coordinating all the testing processes and for ensuring that all changes to the plan are properly tested. Whenever changes are made or proposed to the BCP, the BCP testing coordinator will be notified; the coordinator will then be responsible for notifying all affected units and for arranging any further testing activities. This section of the BCP contains a draft communication from the BCP coordinator to affected business units, containing information about the changes which require testing or re-testing.

Advise Person Responsible for BCP Training

A member of the BCP team will be given responsibility for coordinating all training activities (the BCP training coordinator). The BCP team leader will notify the BCP training coordinator of all approved changes to the BCP, in order that the training materials can be updated. An assessment should be made of whether the change necessitates any re-training activities.
Requirements problems can be hidden in inadequate data It is important to consider inputs and outputs of a process for requirements modeling. they can influence and corrupt each others results as they change the data in the system. after further analysis. Less time spent hunting bugs The more time spent doing unproductive testing or ineffective test maintenance. often don't reflect the way the system will be used in practice. Unwieldy volumes of data Small datasets can be manipulated more easily than large datasets. Confusion between developers. Poor data will cause more of these problems. Business data not representatively tested Test requirements. or tests. not to be faults at all. a complex dataset will positively hinder diagnosis.Larger proportion of problems can be traced to poor data A proportion of all failures logged will be found. Performance Testing Process & Methodology 89 - Proprietary & Confidential - . the less time spent testing. share the same dataset. This can not only cause false results. Simpler to make test mistakes Everybody makes mistakes. A readily understandable dataset can allow straightforward diagnosis. particularly in configuration data. Data can play a significant role in these failures. A failure to understand each others data can lead to ongoing confusion. but can lead to database integrity problems and data corruption. testers and business Each of these groups has different data requirements. Inadequate data can lead to ambiguous or incomplete requirements. A few datasets are easier to manage than many datasets. While this may arguably lead to broad testing for a variety of purposes. it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it. This can make portions of the application untestable for many testers simultaneously. and may lend themselves to automated testing / sanity checks. Inability to spot data corruption caused by bugs A few well-known datasets can be more easily be checked than a large number of complex datasets. Poor database/environment integrity If a large number of testers. Confusing or over-large datasets can make data selection mistakes more common. It includes communications addresses. but its state can be inferred from actions that the system has taken. test handles and instrumentation make it output data). products. and can be seen as part of the test conditions. Transitional data Transitional data is data that exists only within the program. it is useful to be able to classify the data according to the way it is used. Setup data Setup data tells the system about the business rules. Typically. Although it is perhaps simpler to discuss data in these terms. CONSUMABLE INPUT DATA Consumable input data forms the test input It can also be helpful to qualify data after the system has started to use it. The current date and time can be seen as environmental data. where new billing products are supported and indeed created by additions to the setup data. For the purposes of testing. it is useful to split the categorization once more: FIXED INPUT DATA Fixed input data is available before the start of the test. setup data causes different functionality to apply to otherwise similar data. Typically held in internal system variables. Transitional data is not seen outside the system (arguably. Environmental data Environmental data tells the system about its technical environment. during processing of input data. 
It might include a cross reference between country and delivery cost or method.as can be seen in the mobile phone industry. The following broad categories allow data to be handled and discussed more easily. It Performance Testing Process & Methodology 90 Proprietary & Confidential - . actions. it is temporary and is lost at the end of processing. With an effective approach to setup data.12. Input data Input data is the information input by day-to-day system functions. orders. or methods of debt collection from different kinds of customers. directory trees and paths and environmental variables. documents can all be input data. many references are made to "The Data" or "Data Problems".2 Classification of Test Data Types In the process of testing a system. Output data Output data is all the data that a system outputs as a result of processing input data and events. Accounts. business can offer new intangible products without developing new functionality . Jackson's Structured Programming methodology). generating tests so that all possible permutations of inputs are tested. Good data assists testing. Pair wise. it does not directly influence the quality of the tests. The same techniques can be applied to test data. the test data can contain all possible pairs of permutations in a far smaller set than that which contains all possible permutations.3 Organizing the data A key part of any approach to data is the way the data is organized.generally has a correspondence with the input data (cf. this method of working with fixed input data can help greatly in testing the setup data.which also allows a wide range of tests. Finally. and easy to manipulate dataset is capable of supporting many tests. or combinatorial testing addresses this problem by generating a set of tests that allow all possible pairs of combinations to be tested. reports and database updates. but the data maintenance required will be greatly lessened by the small size of the dataset and the amount of reuse it allows. Most are also familiar with the ways in which this generally vast set can be cut down. A subset of the output data is generally compared with the expected results at the end of test execution. 12. the way it is chosen and described. transmissions. Permutations Most testers are familiar with the concept of permutation. This small. this produces a far smaller set of tests than the brute-force approach for all permutations. As such. A good approach increases data reliability. for nontrivial sets. influenced by the uses that are planned for it. Database changes will affect it. and includes not only files. ad-hoc. This method is most appropriate when used. or diagnostic tests. but can also include test measurements. on fixed input data. It is most effective when the following conditions are satisfied. rather than hinders it. reduces data maintenance time and can help improve the test process. Typically. Fortunately. and so is comprehensive enough to allow a great many new. these criteria apply to many traditional database-based systems:  ixed input data consists of many rows F Fields are independent Performance Testing Process & Methodology 91 Proprietary & Confidential - . It allows complete pairwise coverage. as above. This allows a small. easy to handle dataset . but a common requirement is that of exclusive use.particularly setup data Partitioning Partitions allow data access to be controlled. usability tests etc. setup data . 
although this partitioning can introduce configuration management problems in software version. Used at tester's own risk! Testing rarely has the luxury of completely separate environments for each test and each tester. data use in one area will have no effect on the results of tests in another. Controlling data. so the area can be trusted. No test changes the data. Partitions can be used independently. While the impact of this requirement should not be underestimated. Data can be safely and effectively partitioned by machine / database / application instance. permutation helps because: Permutation is familiar from test planning. A useful and basic way to start with partitions is to set up. in a system can be fraught. Many different stakeholders have different requirements of the data. Used by one test/tester at a time.You want to do many tests without loading / you do not load fixed input data for each test. Data must be reset or reloaded after testing. so allowing different kinds of data use. not a single environment for each test or tester. machine setup. Existing data cannot be trusted. Achieves good test coverage without having to construct massive datasets Can perform investigative testing without having to set up more data Reduces the impact of functional/database changes Can be used to test other data . environmental data and data load/reload. To sum up. and to a lesser extent. and the access to data. reducing uncontrolled changes in the data. These three have the following characteristics: Safe area Used for enquiry tests. Scratch area Used for investigative update tests and those which have unusual requirements. Many testers can use simultaneously Change area Used for tests which update/change data. a number of stakeholders may be able to work with the same environmental data.and their work may not need to change the Performance Testing Process & Methodology 92 Proprietary & Confidential - . but to set up three shared by many users. However. This allows shorthand.that is to say. Although testers are able to interfere with each others tests. Data partitions help because: Allow controlled and reliable data. Use of free text fields with some correspondence to the internals of the record allows output to be checked more easily. data extracts and sanity checks can also make use of these. the team can be educated to avoid each others work. and the scratch area Bristol addresses. Setting this data. Giving some meaning to the data that can be referred to directly can help with improving mutual understanding. they give them names. early on in testing. allowing the use of 'soft' partitions. Data is often used to communicate and illustrate problems to coders and to the business. to have some meaningful value can be very useful. Testers often talk about items of data. tester 1's tests may only use customers with Russian nationality and tester 2's tests only with French. Reports. allowing testers to sense check input and output data. but also acts as jargon. Clarity helps because: Improves communication within and outside the team Reduces test errors caused by using the wrong data Allows another method way of doing sanity checks for corrupted or inconsistent data Helps when checking data after input Performance Testing Process & Methodology 93 Proprietary & Confidential - . for instance. and actions based on fields which tend not to be directly displayed. If. 'Soft' partitions allow the data to be split up conceptually. Typically. 
'Soft' partitions allow the data to be split up conceptually, rather than physically. If, for instance, tester 1's tests use only customers with Russian nationality, and tester 2's tests only those with French, the two sets of work can operate independently in the same dataset. Typically, values in free-text fields are used for soft partitioning: a safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses.

Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free-text fields. Testers often talk about items of data; referring to them by anthropomorphic personification, they give them names. Giving some meaning to the data that can be referred to directly can help improve mutual understanding, but it also acts as jargon, excluding those who are not in the know: there is generally no mandate for outside groups to understand the format or requirements of test data. Data is often used to communicate and illustrate problems to coders and to the business, so setting this data, early on in testing, to have some meaningful value can be very useful.

The test strategy can take advantage of this by disciplined use of text / value fields. Use of free-text fields with some correspondence to the internals of the record allows output to be checked more easily: testers can sense-check input and output data, choose appropriate input data for investigative tests, and verify actions based on fields which tend not to be directly displayed. A free-text field that should have some correspondence with a functional field allows a simple comparison between the free text (which is generally displayed on output) and the functional field; sorting or selecting on such a field can help spot problems or eliminate unaffected data. Reports, data extracts and sanity checks can also make use of these fields.

Clarity helps because it:
- Improves communication within and outside the team
- Reduces test errors caused by using the wrong data
- Allows another way of doing sanity checks for corrupted or inconsistent data
- Helps when checking data after input
- Helps in selecting data for investigative tests
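A minimal sketch of this kind of sanity check, assuming a hypothetical customer table held as Python dictionaries, in which the free-text description is expected to echo the nationality field:

    # Hypothetical rows: 'note' is a free-text field that should mirror 'nationality'.
    customers = [
        {"id": 1, "nationality": "RU", "note": "Russian customer - safe area"},
        {"id": 2, "nationality": "FR", "note": "French customer - safe area"},
        {"id": 3, "nationality": "FR", "note": "Russian customer - safe area"},  # suspect
    ]

    labels = {"RU": "Russian", "FR": "French"}

    # Select a soft partition by its free-text marker...
    russian_partition = [c for c in customers if c["note"].startswith("Russian")]

    # ...and flag rows where the free text disagrees with the functional field.
    for c in customers:
        if not c["note"].startswith(labels[c["nationality"]]):
            print("Inconsistent row:", c["id"], c["note"])

Row 3 is reported because its description no longer matches its nationality - exactly the kind of corrupted or inconsistent data the free-text convention is meant to expose.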
12.4 Data Load and Data Maintenance

An important consideration in preparing data for functional testing is the way in which the data can be loaded into the system, and the possibility and ease of maintenance.

Data loaded can have a range of origins. A common compromise is to use old data from an existing system, stripped of personal details for privacy reasons, selected for testing, filtered for relevance and duplicates, and migrated to the target data format. In some cases, the complete set of live data is loaded; while this method may seem complete, it has disadvantages in that the data may not fully support testing, and the large volume of data may make test results hard to interpret. In other cases, all new data is created for testing. This data may be complete and well specified, but can be hard to generate. Data can be well described in test scripts, or constructed and held in flat files.

Loading the data
Data can be loaded into a test system in three general ways. Each is appropriate in different circumstances.

Not loaded at all
Some tests simply take whatever is in the system and try to test with it. This can be appropriate where a dataset is known and consistent, or has been set up by a prior round of testing. It can also be appropriate in environments where data cannot be reloaded, such as the live system. However, it can be symptomatic of an uncontrolled approach to data, where data may be input in an ad-hoc way, which is unlikely to gain the advantages of good data listed above.

Using the system you're trying to test
The data can be manually entered, or data entry can be automated by using a capture/replay tool. This method uses the system's own validation and insertion methods, so data integrity can be ensured, and if the system is working well, internally assigned keys are likely to be effective and consistent. However, it can be very slow for large datasets, and both manual and automated entry can be hampered by faults in the system.

Using a data load tool
Data load tools directly manipulate the system's underlying data structures. As they do not use the system to load the data, they can provide a convenient workaround to known faults in the system's data load routines; in some cases, they can be the only way to get broken data into the system in a consistent fashion. As they do not use the system's own validation, however, they can have problems with data integrity and parent/child relationships, and may come up against problems when generating internal keys.

When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences.

Fixed input data may be generated or migrated, and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Environmental data tends to be manually loaded, either at installation or by manipulating environmental or configuration scripts; the wide variety of possible methods will not be discussed further here.
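As a sketch of the data load tool route, the snippet below bulk-inserts rows from a flat CSV file straight into a table using Python's standard sqlite3 and csv modules. The file name, table and columns are invented for the example; as noted above, a real load would need care with internal keys and parent/child relationships:

    import csv
    import sqlite3

    conn = sqlite3.connect("test_system.db")
    conn.execute("CREATE TABLE IF NOT EXISTS customer (id INTEGER, name TEXT, country TEXT)")

    # Bypass the application and load the flat file directly into the table.
    with open("customers.csv", newline="") as f:
        rows = [(r["id"], r["name"], r["country"]) for r in csv.DictReader(f)]
    conn.executemany("INSERT INTO customer VALUES (?, ?, ?)", rows)
    conn.commit()
    print(len(rows), "rows loaded without the system's own validation")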
12.5 Testing the Data

A theme brought out at the start of this paper was 'A System is Programmed by its Data'. In order to test the system, one must also test the data it is configured with: the environmental and setup data.

Environmental data is necessarily different between the test and live environments. Although testing can verify that the environmental variables are being read and used correctly, there is little point in testing their values on a system other than the target system; environmental data is often checked manually on the live system during implementation and rollout.

Setup data can change often as the business environment changes - particularly if there is a long period between requirements gathering and live rollout - although frequent change is not often desirable. Testing done on the setup data needs to cover two questions:
- Does the planned/current setup data induce the functionality that the business requires?
- Will changes made to the setup data have the desired effect?

Testing for these two questions only becomes possible when that data is controlled:
- The setup data should be organized to allow a good variety of scenarios to be considered
- The setup data needs to be able to be loaded and maintained easily and repeatably
- The business needs to become involved in the data so that their setup for live can be properly tested

When testing the setup data, it is important to have a well-known set of fixed input data and consumable input data throughout testing. This allows the effects of changes made to the setup data to be assessed repeatably, and allows results to be compared.

The advantages of testing the setup data include:
- Overall testing will be improved if the quality of the setup data improves
- Problems due to faults in the live setup data will be reduced
- The business can re-configure the software for new business needs with increased confidence
- Data-related failures in the live system can be assessed in the light of good data testing

12.6 Conclusion

Data can be influential on the quality of testing. Common data problems can be avoided or reduced with preparation and automation. Well-planned data can allow flexibility and help reduce the cost of test maintenance, and good data can be used as a tool to enable and improve communication throughout the project. Effective testing of setup data is a necessary part of system testing. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
- Plan the data for maintenance and flexibility
- Know your data, and make its structure and content transparent
- Use the data to improve understanding throughout testing and the business
- Test setup data as you would test functionality

13 Test Logs

Introduction
A test problem is a condition that exists within the software system that needs to be addressed. Carefully and completely documenting a test problem is the first step in correcting the problem. The following four attributes should be developed for all test problems:
- Statement of condition – tells what is
- Criteria – tells what should be
- Effect – tells why the difference between what is and what should be is significant
- Cause – tells the reasons for the deviation; identification of the cause is necessary as a basis for corrective action

The statement of condition and the criteria are the first two, and the most basic, attributes of a problem statement, and together they are the basis for a finding. Essentially, the user compares "what is" with "what should be". When a deviation is identified between what is found to actually exist and what the user thinks is correct or proper, the first essential step toward the development of a problem statement has occurred. It is difficult to visualize any type of problem that is not in some way characterized by this deviation. The "what is" can be called the statement of condition; the "what should be" shall be called the criteria. If a comparison between the two is of little or no practical consequence, no finding exists. A well-developed problem statement will include each of these attributes; when one or more of these attributes is missing, questions almost always arise, such as:
- Criteria: Why is the current state inadequate?
- Effect: How significant is it?
- Cause: What could have caused the problem?

13.1 Factors defining the Test Log Generation Document

Deviation: Problem statements begin to emerge by a process of comparison. The actual deviation will be the difference, or gap, between "what is" and "what is desired". The statement of condition is uncovering and documenting the facts as they currently exist; the criteria represent what the user desires. What is a fact? The statement of condition will of course depend on the nature and extent of the evidence or support that is examined and noted. For those facts making up the statement of condition, the I/S professional will need to ensure that the information is accurate, well supported, and worded as clearly and precisely as possible.
The statement of condition should document as many of the following attributes as are appropriate to the problem:
- Activities involved – the specific business or administrative activities that are being performed
- Procedures used to perform work – the specific step-by-step activities that are utilized in producing the output
- Inputs – the triggers, events, or documents that cause this activity to be executed
- Outputs / Deliverables – the products that are produced from the activity
- Users/Customers served – the organization, individuals, or class of users/customers serviced by this activity
- Deficiencies noted – the status of the results of executing this activity, and any appropriate interpretation of those facts

The criteria are the user's statement of what is desired. They can be stated in either negative or positive terms: for example, they could indicate the need to reduce complaints or delays, as well as the desired processing turnaround time.

A work paper is used to describe the problem. The following work paper provides the fields and instructions for test log documentation:

- Name of Software Tested: Put the name of the software or subsystem tested
- Problem Description: Write a brief narrative description of the variance uncovered from expectations
- Statement of Conditions: Put the results of the actual processing that occurred here
- Statement of Criteria: Put what the testers believe was the expected result from processing
- Effect of Deviation: If this can be estimated, the testers should indicate what they believe the impact or effect of the problem will be on computer processing
- Cause of Problem: The testers should indicate what they believe is the cause of the problem, if known. If the testers are unable to do this, the work paper will be given to the development team, who should indicate the cause of the problem
- Location of the Problem: The testers should document where the problem occurred as closely as possible
- Recommended Action: The testers should indicate any recommended action they believe would be helpful to the project team. If the recommendation is not approved, the alternate action should be listed, or the reason for not following the recommended action should be documented

The completed work paper thus records, for each problem: the name of the software tested, the problem description, the statement of condition, the statement of criteria, the effect of the deviation, the cause of the problem, the location of the problem, and the recommended action.
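Since a usable problem statement needs all four attributes, a test log tool can refuse to accept entries that are missing any of them. A minimal sketch, with invented field content; the cause here is deliberately blank, as it would be when the work paper is handed to the development team:

    REQUIRED = ["condition", "criteria", "effect", "cause"]

    def validate_problem(entry):
        """Return the list of missing problem-statement attributes."""
        return [a for a in REQUIRED if not entry.get(a)]

    entry = {
        "software": "Payroll subsystem",
        "condition": "Net pay printed as 0.00 for hourly staff",
        "criteria": "Net pay should equal gross pay minus deductions",
        "effect": "All hourly pay checks are unusable",
        "cause": "",          # not yet known - to be filled in by development
    }

    missing = validate_problem(entry)
    if missing:
        print("Problem statement incomplete; missing:", ", ".join(missing))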
13.2 Collecting Status Data

Four categories of data will be collected during testing. These are explained in the following paragraphs.

Test Results Data
This data will include:
- Test factors – the factors incorporated in the plan, the validation of which becomes the test objective
- Business objectives – the validation that specific business objectives have been met
- Interface objectives – validation that data/objects can be correctly passed among software components
- Functions/Sub-functions – identifiable software components normally associated with the requirements of the software
- Units – the smallest identifiable software components
- Platform – the hardware and software environment in which the software system will operate

Test Transactions, Test Suites, and Test Events
These are the test products produced by the test team to perform testing:
- Test transactions/events – the types of tests that will be conducted during the execution of tests, which will be based on the software requirements
- Inspections – a verification of process deliverables against deliverable specifications
- Reviews – verification that the process deliverables/phases are meeting the user's true needs

Defect
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
- Date the defect was uncovered
- Name of the defect
- Location of the defect
- Severity of the defect
- Type of defect
- How the defect was uncovered (test data / test script)

The test logs should add to this information in the form of where the defect originated, when it was corrected, and when it was entered for retest.

Storing Data Collected during Testing
It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be made available online through client/server systems, so that anyone with a vested interest in the status of the project can readily access it for status updates.

Developing Test Status Reports
The test process should produce a continuous series of reports that describe the status of testing. The steps involved are:
- Report software status
- Establish a measurement team
- Inventory existing project measures
- Develop a consistent set of project metrics
- Define process requirements
- Develop and implement the process
- Monitor the process

The test reports are for the use of testers, test managers, and the software development team. The frequency of the test reports should be based on the discretion of the team and the extensiveness of the test process.

Use of a Function/Test Matrix
This shows which tests must be performed in order to validate the functions, and is also used to determine the status of testing. As described, the most common test report is a simple spreadsheet, which indicates the project component for which status is requested, the test that will be performed to determine the status of that component, and the results of testing at any point in time. Many organizations use a spreadsheet package to maintain test results. Each intersection can be coded with a number or symbol to indicate the following:
1 = Test is needed, but not performed
2 = Test currently being performed
3 = Minor defect noted
4 = Major defect noted
5 = Test complete and function is defect-free for the criteria included in this test

    FUNCTION   TEST:  1  2  3  4  5  6  7  8  9
    A
    B
    C
    D
    E
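A spreadsheet is the usual home for this matrix, but the idea is easy to sketch in code. Below, a hypothetical matrix for functions A-E and nine tests is held as a dictionary of dictionaries and rolled up using the status codes above; the cell values are invented:

    STATUS = {1: "needed, not performed", 2: "being performed",
              3: "minor defect noted", 4: "major defect noted",
              5: "complete, defect free"}

    # matrix[function][test] -> status code (illustrative values only)
    matrix = {
        "A": {1: 5, 2: 5, 3: 2},
        "B": {1: 5, 4: 3},
        "C": {5: 4, 6: 1},
        "D": {7: 2},
        "E": {8: 1, 9: 1},
    }

    # Roll the cells up into a quick status view per function.
    for function, cells in matrix.items():
        defects = sum(1 for c in cells.values() if c in (3, 4))
        done = sum(1 for c in cells.values() if c == 5)
        print(f"{function}: {len(cells)} tests, {done} complete, {defects} with defects")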
13.2.1 Methods of Test Reporting

Test reporting draws on several kinds of support:
- Reporting tools – the use of word processing, database, defect tracking and graphic tools to prepare test reports
- Test report standards – defining the components that should be included in a test report
- Statistical analysis – the ability to draw statistically valid conclusions from quantitative test results

Word processing: One way of increasing the utility of computers and word processors may be to use software that will guide the processes of generating, organizing, composing and revising text. Keyboard and keypad testing software of this kind allows each person to use the normal functions of the computer keyboard that are common to all word processors. Individual reports include all of the following information: status report, word processing tests or keypad tests, basic skills tests or data entry tests, progress graph, game scores, and a test report for each test. A one-page summary report may be printed with either the Report Manager program or from the individual keyboard or keypad software at any time; from the Report Manager, you can quickly scan through any number of these reports and see how each person's history compares.

Database reporting tools: Some database test tools, like DataVision, are database reporting tools similar to Crystal Reports, and can serve reporting needs wherever data must be summarized, for example in defect tracking or order entry systems. Reports can be viewed and printed from the application, or output as HTML, XML, PDF, LaTeX2e, DocBook, or tab- or comma-separated text files. From the LaTeX2e and DocBook output files you can in turn produce PDF, PostScript, HTML, and more. Some query tools available for Linux-based databases include QMySQL, dbMetrix and PgAccess. Cognos Powerhouse is not yet available for Linux; Cognos is looking into what interest people have in the product to assess what their strategy should be with respect to the Linux "market".

GRG - GNU Report Generator: The GRG program reads record and field information from a dBase3+ file, a delimited ASCII text file, or an SQL query to an RDBMS, and produces a report listing. The program was loosely designed to produce TeX/LaTeX formatted output, but plain ASCII text, troff, HTML or any other kind of ASCII-based output format can be produced just as easily.

Test Director:
- Facilitates a consistent and repeatable testing process: a central repository for all testing assets facilitates the adoption of a more consistent testing process, which can be repeated throughout the application life cycle
- Provides traceability throughout the testing process: test cases can be mapped to requirements, providing adequate visibility over the test coverage of requirements, and Test Director links requirements to test cases and test cases to defects
- Manages both manual and automated testing: Test Director can manage both manual and automated tests (WinRunner), and scheduling of automated tests can be done effectively using Test Director
- Provides analysis and decision support: graphs and reports help analyze application readiness at any point in the testing process, and requirements coverage, test execution progress, run schedules and defect statistics can be used for production planning
- Provides anytime, anywhere access to test assets: using Test Director's web interface, testers, developers, business analysts and the client can participate in and contribute to the testing process

Testing Data Used for Metrics
Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
- Total number of tests
- Number of tests executed to date
- Number of tests executed successfully to date

Data concerning software defects includes:
- Total number of defects corrected in each activity
- Total number of defects entered in each activity
- Average duration between defect detection and defect correction
- Average effort to correct a defect
- Total number of defects remaining at delivery

Software performance data is usually generated during system testing:
- Average CPU utilization
- Average memory utilization
- Measured I/O transaction rate

Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the following:
- Individual Project Test Report
- Integration Test Report
- System Test Report
- Acceptance Test Report

These test reports are designed to document the results of testing as defined in the test plan. The test report can be a combination of electronic data and hard copy: for example, if the function matrix is maintained electronically, there is no reason to print it, as the paper report will summarize the data, draw appropriate conclusions and present recommendations.

Purpose of a Test Report
The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production, and if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production: knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The second long-term purpose is to use the data to analyze the rework process, making changes to prevent defects from occurring in the future: defect-prone components identify tasks or steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished, for example in case of a Y2K lawsuit.

Individual Project Test Report
These reports focus on individual projects (software systems). When different testers test individual projects, each should prepare a report on their results.
Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. The integration test report follows the format of the individual project test report, except that the conditions tested are the interfaces.

System Test Report
A system test plan standard will have identified the objectives of testing: what was to be tested, how it was to be tested, and when the tests should occur. The system test report should present the results of executing that test plan. If these details are maintained electronically, they need only be referenced, not included in the report. The report should contain the following:
1. Scope of Test – this section indicates which functions were and were not tested
2. Test Results – this section indicates the results of testing, including any variance between what is and what should be
3. What Works / What Does Not Work – this section defines the functions and the interfaces that work and do not work
4. Recommendations – this section recommends actions that should be taken to fix functions/interfaces that do not work, and to make additional improvements

Acceptance Test Report
There are two primary objectives of acceptance testing. The first is to ensure that the system as implemented meets the real operating needs of the user/customer; if the defined requirements are those true needs, testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth. The acceptance test report should encompass these criteria for user acceptance.

13.2.2 Conclusion
The test logs obtained from the execution of the tests, and finally the test reports, should be designed to accomplish the following objectives:
- Provide information to the customer on whether the system should be placed into production, and if not, the potential consequences and appropriate actions to minimize these consequences
- Allow the project to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions
- Identify defect-prone components: tasks or steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects in future
- Provide data that can be used to analyze the development process, to make changes that prevent defects from occurring in the future

14 Test Report

A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document contains a summary of the entire project, and has to be presented in a way that any person who has not worked on the project will also get a good overview of the testing effort.

Contents of a Test Report
The contents of a test report are as follows:
- Executive Summary
- Overview (Application Overview, Testing Scope)
- Test Details (Test Approach, Types of testing conducted, Test Environment, Tools Used)
- Metrics
- Test Results
- Test Deliverables
- Recommendations

These sections are explained as follows:

1. Executive Summary
This section comprises general information regarding the project: the client, the application, the tools and the people involved, presented in such a way that it can be taken as a summary of the test report itself; all the topics mentioned here are elaborated in the various sections of the report.

2. Overview
This comprises two sections - Application Overview and Testing Scope.
Application Overview - detailed information on the application under test, the end users, and a brief outline of the functionality as well.
Testing Scope - this clearly outlines the areas of the application that would and would not be tested by the QA team, so that there are no misunderstandings between the customer and QA as regards what needs to be tested and what does not.

3. Test Details
This section contains the Test Approach, Types of Testing conducted, Test Environment and Tools Used.
Test Approach - discusses the strategy followed for executing the project: for example, how coordination was achieved between onsite and offshore teams, and how information and daily/weekly deliverables were delivered to the client.
Types of testing conducted - mentions any specific types of testing performed (i.e. Functional, Performance, Compatibility, Usability etc.) along with related specifications. This section would also contain information on Operating System / Browser combinations if compatibility testing is included in the testing effort.
Test Environment - the hardware and software requirements for the project: server configuration, client machine configuration, specific software installations required, etc.
Tools used - information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools, or any other tools which made the testing work easier.

4. Metrics
This section includes details on the total number of test cases executed in the course of the project, the number of defects found, and so on.
Calculations like defects found per test case, or the number of test cases executed per day per person, would also be entered in this section. This can be used in calculating the efficiency of the testing effort. In case many defects have been logged for the project, graphs can be generated accordingly and depicted in this section: the graphs can be for defects per build, defects based on severity, defects based on status (i.e. how many were fixed and how many rejected), etc.

5. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort.

6. Test Deliverables
This section includes links to the various documents prepared in the course of the testing project (i.e. Test Plan, Test Procedures, Test Logs, Release Report etc.).

7. Recommendations
This section includes any recommendations from the QA team to the client on the product tested. It can also mention the list of known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.

15 Defect Management

15.1 Defect
A mismatch between the application and its specification is a defect. A deviation from expectation that is to be tracked and resolved is also termed a defect. A software error is present when the program does not do what its end user expects it to do.

15.2 Defect Fundamentals
A defect is a product anomaly or flaw. Defects include such things as omissions and imperfections found during testing phases. Symptoms (flaws) of faults contained in software that is sufficiently mature for production will be considered as defects. So, in this context, defects are identified as any failure to meet the system requirements.

Quality is the indication of how well the system meets the requirements, and an evaluation of the defects discovered during testing provides the best indication of software quality. Defect evaluation is based on methods that range from a simple number count to rigorous statistical modeling. Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the testing process; the actual data about defect rates are then fit to the model. Such an evaluation estimates the current system reliability and predicts how the reliability will grow if testing and defect removal continue. This evaluation is described as system reliability growth modelling.

15.2.1 Defect Life Cycle

15.3 Defect Tracking
After a defect has been found, it must be reported to development so that it can be fixed. The initial state of a defect will be New. The project lead of the development team will review the defect and set it to one of the following statuses:
- Open – accepts the bug and assigns it to a developer
- Invalid Bug – the reported bug is not a valid one as per the requirements/design
- As Designed – this is intended functionality as per the requirements/design
- Deferred – this will be treated as an enhancement
- Duplicate – the bug has already been reported

Once the development team has started working on the defect, the status is set to WIP (Work In Progress); if the development team is waiting for a go-ahead or for some technical feedback, they will set it to Dev Waiting. After the development team has fixed the defect, the status is set to Fixed, which means the defect is ready to re-test. On re-testing the defect, if the fixed defect satisfies the requirements and passes the test case, it is set to Closed; if the defect still exists, the status is set to Reopened, and it follows the same cycle as an open defect. Once a defect has been set to any of the above statuses apart from Open, and the testing team does not agree with the development team, it is set to Document status.
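The status flow above is essentially a small state machine, and encoding the legal transitions makes the cycle easy to enforce in a tracking tool. A minimal sketch; the transition table simply restates the flow described above (the Document status and the terminal review outcomes are omitted for brevity):

    # Legal status transitions for a defect, as described above.
    TRANSITIONS = {
        "New":         {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
        "Open":        {"WIP", "Dev Waiting"},
        "Dev Waiting": {"WIP"},
        "WIP":         {"Fixed", "Dev Waiting"},
        "Fixed":       {"Closed", "Reopened"},
        "Reopened":    {"WIP", "Dev Waiting"},
    }

    def move(defect, new_status):
        """Apply a status change, rejecting moves the cycle does not allow."""
        if new_status not in TRANSITIONS.get(defect["status"], set()):
            raise ValueError(f"{defect['status']} -> {new_status} is not a legal move")
        defect["status"] = new_status

    bug = {"id": "D42", "status": "New"}
    for step in ["Open", "WIP", "Fixed", "Reopened", "WIP", "Fixed", "Closed"]:
        move(bug, step)
    print(bug)   # {'id': 'D42', 'status': 'Closed'}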
15.4 Defect Classification
The severity of bugs will be classified as follows:

Critical – The problem prevents further processing and testing, making the system inoperable. The development team must be informed immediately, and needs to take corrective action immediately.
High – The problem affects selected processing to a significant degree, or causes data loss. The development team must be informed that day, and needs to take corrective action within 0-24 hours.
Medium – The problem affects selected processing, but has a work-around that allows continued processing and testing, or could cause a user to make an incorrect decision or entry. No data loss is suffered. The development team must be informed within 24 hours, and needs to take corrective action within 24-48 hours.
Low – The problem is cosmetic, and/or does not affect further processing and testing. These may be cosmetic problems that hamper usability or divulge client-specific information. The development team must be informed within 48 hours, and needs to take corrective action within 48-96 hours.

15.5 Defect Reporting Guidelines
The key to making a good report is providing the development staff with as much information as necessary to reproduce the bug. This can be broken down into five points:
1) Give a brief description of the problem
2) List the steps that are needed to reproduce the bug or problem
3) Supply all relevant information such as version, project and data used
4) Supply a copy of all relevant reports and data, including copies of the expected results
5) Summarize what you think the problem is

When you are reporting a defect, the more information you supply, the easier it will be for the developers to determine the problem and fix it. Simple problems can have a simple report, but the more complex the problem, the more information the developer is going to need. For example, cosmetic errors may only require a brief description of the screen, whereas an error in processing will require a more detailed description, such as:
1) The name of the process and how to get to it
2) Documentation on what was expected (expected results)
3) The source of the expected results, if available (a report to compare against, an earlier version of the software, and any formulas used)
4) Documentation on what actually happened (perceived results)
5) An explanation of how the results differed
6) Identification of the individual items that are wrong
7) If specific data is involved, a copy of the data both before and after the process, and the exact data entered
8) Copies of any output, including spreadsheets

As a rule, the detail of your report will increase based on a) the severity of the bug, b) the level of the processing, and c) the complexity of reproducing the bug. In most cases, the more information - correct information - given, the better.

Anatomy of a bug report
Bug reports need to do more than just describe the bug: they have to give developers something to work with, so that they can successfully reproduce the problem, identify it and fix it. The report should explain exactly how to reproduce the problem, and exactly what the problem is. Try to weed out any extraneous information, but don't abbreviate and don't assume anything. The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static: developers will have been working on it, and if they've found a bug, it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset, one from before the process and one after. If the dataset from before the process is not included, developers will be forced to try to find the bug based on forensic evidence; with the data, developers can trace what is happening.

Steps: List the steps taken to recreate the bug. Include all proper menu names. When you report the steps, they should be the clearest steps to recreating the bug. After you've finished writing down the steps, follow them - make sure you've included everything you type and do to get to the problem - and go through the process again to see if there are any steps that can be removed. If you have to enter any data, supply the exact data entered. If there are parameters, list them.

Description: Explain what is wrong, and include what you expected. Identify the individual problem areas: if the process is a report, include a copy of the report with the problem areas highlighted. Remember to report one problem at a time; don't combine bugs in one report.

Supporting documentation: If available, supply documentation. Include a list of what was expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used). Testers will need this information for later regression testing and verification. This information should be stored in a centralized location, so that developers and testers have access to it.

15.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also what is needed to fix it. It is not enough to say that something is wrong; the report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow the steps and reproduce the problem. It should include information about the product, including the version number and what data was used. The more organized the information provided, the better the report will be.

16 Automation

What is Automation
Automated testing is automating the manual testing process currently in use.

16.1 Why Automate the Testing Process?
Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification, and have the ability to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.

In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive, time-consuming manual test procedures. Owing to the size and complexity of today's advanced software applications, manual testing is no longer a viable option for most testing situations. Every organization has unique reasons for automating software quality activities, but several reasons are common across industries.

Using Testing Effectively
By definition, testing is a repetitive activity. The very nature of application software development dictates that no matter which methods are employed to carry out testing (manual or automated), they remain repetitious throughout the development lifecycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation allows the tester to reduce or eliminate the required "think time" or "read time" necessary for the manual interpretation of when or where to click the mouse or press the enter key. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual. Furthermore, some types of testing, such as load/stress testing, are virtually impossible to perform manually.

Reducing Testing Costs
The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer; therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine having the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are an expensive and logistical nightmare.
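The single-machine idea can be sketched with nothing more than Python's standard threading module. Here a hypothetical transaction is stood in for by a function, and one process replays it for many simulated users at once; a real load test would drive the application protocol instead:

    import threading
    import time

    def user_session(user_id, results):
        """Stand-in for one user's transaction against the system under test."""
        start = time.perf_counter()
        time.sleep(0.05)          # placeholder for the real request/response
        results[user_id] = time.perf_counter() - start

    results = {}
    threads = [threading.Thread(target=user_session, args=(i, results))
               for i in range(50)]          # 50 concurrent simulated users
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"{len(results)} sessions, worst response {max(results.values()):.3f}s")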
Replicating Testing Across Different Platforms
Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on the target platforms to ensure that new platforms operate consistently.

Repeatability and Control
By using automated techniques, the tester has a very high degree of control over which types of tests are being performed, and how the tests will be executed. Using automated tests enforces consistent procedures that allow developers to evaluate the effect of various application modifications, as well as the effect of various user actions. For example, automated tests can be built that extract variable data from external files or applications and then run a test using the data as an input value. Most importantly, automated tests can be executed as many times as necessary without requiring a user to recreate a test script each time the test is run.

Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software. In some industries, such as healthcare and pharmaceuticals, organizations are required to comply with strict quality regulations, as well as being required to document their quality assurance efforts for all parts of their systems.

16.2 Automation Life Cycle

Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests, like user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High Path Frequency - Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records, invoicing, and other high-volume activities where software failures would occur frequently.

Critical Business Processes - In many situations, software applications can literally define or control the core of a company's business. If the application fails, the company can face extreme disruptions in critical operations. Mission-critical processes are prime candidates for automated testing, and any application with a high degree of risk associated with a failure is a good candidate for test automation. Examples include: financial month-end closings, production planning, sales order entry, and other core activities.

Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session, and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.

Applications with a Long Life Span - If an application is planned to be in production for a long period of time, the greater the benefits are from automation.

What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.

Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve. User training and experience gained in performing one testing task should be transferable to other testing tasks.

GUI and Client/Server Testing
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture. Test components built for performing functional tests should also support other types of testing, including regression and load/stress testing.

Test Planning and Management
A robust testing tool should have the capability to manage the testing process, provide organization for testing components, and create meaningful end-user and management reports. It should also allow users to include non-automated testing procedures within automated test plans and test results, and a robust tool will allow users to integrate existing test results into an automated test plan. Finally, an automated test should be able to link business requirements to test results, allowing users to evaluate application readiness based upon the application's ability to support the business requirements.

Testing Product Integration
Testing tools should provide tightly integrated modules that support test component reusability. All products within the testing product environment should be based upon a common, easy-to-understand language, and the architecture of the testing tool environment should be open to support interaction with other technologies, such as defect or bug tracking packages.

Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.

Load and Performance Testing
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.

16.3 Preparing the Test Environment
Once the test cases have been created, the test environment can be prepared. The test environment is defined as the complete set of steps necessary to execute the test as described in the test plan. It includes the initial set-up and description of the environment, and the procedures needed for installation and restoration of the environment.

Description - Document the technical environment needed to execute the tests.
Installation Procedures - Outline the procedures necessary to install the application software to be tested.
Restoration Procedures - Finally, outline the procedures needed to restore the test environment to its original state. By doing this, you are ready to re-execute tests or prepare for a different set of tests.
Test Schedule - Identify the times during which your testing facilities will be used for a given test. Make sure that other groups that might share these resources are informed of this schedule.
Operational Support - Identify any support needed from other parts of your organization.

Inputs to the Test Environment Preparation Process
- Technical Environment Descriptions
- Approved Test Plan
- Test Execution Schedules
- Resource Allocation Schedule
- Application Software to be installed
Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, a testing plan should be created at the same time the software application requirements are defined. This plan is very much a "living document" that should evolve as the application functions become more clearly defined. The time invested in detailed planning significantly improves the benefits resulting from test automation. A good testing plan should be reviewed and approved by the test team, the software development team, all user groups and the organization's management.

Evaluating Business Requirements
Begin the automated testing process by defining exactly what tasks your application software should accomplish, in terms of the actual business activities of the end user. The definition of these tasks, or business requirements, defines the high-level functional requirements of the software system in question. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check. These business requirements should be defined in such a way as to make it abundantly clear whether the software system correctly (or incorrectly) performs the necessary business functions. This enables the testing team to define the tests, locate and configure test-related hardware and software products, and coordinate the human resources required to complete all testing.

The following items detail the input components of the test planning process.

Inputs to the Test Planning Process
- Application Requirements - What is the application intended to do? These should be stated in terms of the business requirements of the end users.
- Application Implementation Schedules - When is the scheduled release? When are updates or enhancements planned? Are there any specific events or actions that are dependent upon the application?
- Acceptance Criteria for Implementation - What critical actions must the application accomplish before it can be deployed? This information forms the basis for making informed decisions on whether or not the application is ready to deploy.

Test Design and Development
After the test components have been defined, the standardized test cases that will be used to test the application can be created. A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. The type and number of test cases needed will be dictated by the testing plan. A proper test case will include the following key components:
- Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
- Test Case Prerequisites - Identify the set-up or testing criteria that must be established before a test can be successfully executed.
- Test Procedures - Identify the application steps necessary to complete the test case.
- Input Values - This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.
- Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
- Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.
- Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.

Inputs to the Test Design and Construction Process
- Test Case Documentation Standards
- Test Case Naming Standards
- Approved Test Plan
- Business Process Documentation
- Business Process Flow
- Test Data Sources

Outputs from the Test Design and Construction Process
- Revised Test Plan
- Test Procedures for each Test Case
- Test Case(s) for each application function described in the test plan
- Procedures for test set-up, test execution and restoration
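Those key components map naturally onto a simple record type. A sketch, using an invented login example purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        name: str                     # unique, so results can be traced
        prerequisites: list = field(default_factory=list)
        procedure: list = field(default_factory=list)   # application steps
        input_values: dict = field(default_factory=dict)
        expected_results: dict = field(default_factory=dict)
        data_sources: list = field(default_factory=list)
        depends_on: list = field(default_factory=list)  # execution order

    tc = TestCase(
        name="TC_LOGIN_001",
        prerequisites=["user account exists"],
        procedure=["open login screen", "enter credentials", "press OK"],
        input_values={"username": "jsmith", "password": "secret"},
        expected_results={"screen": "Home", "message": "Welcome"},
    )
    print(tc.name, "->", tc.expected_results)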
Executing the Test
The test is now ready to be run. This step applies the test cases identified by the test plan, documents the results, and validates those results against expected performance. There may be several test execution cycles necessary to complete all the types of testing required for your application: for example, one test execution cycle may be required for the functional testing of an application, and a separate cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles.

The test execution phase of your software test process controls how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The secret to a controlled test execution is comprehensive planning: the problems experienced in test execution are usually attributable to not properly performing steps from earlier in the process. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.

Specific performance measurements of the test execution phase include:
- Application of Test Cases - The test cases previously created are applied to the target software application, as described in the testing environment.
- Documentation - Activities within the test execution are logged and analyzed as follows: actual results achieved during test execution are compared to the expected application behavior from the test cases; test case completion status (Pass/Fail); actual results of the behavior of the technical test environment; and deviations taken from the test plan or test process.

Inputs to the Test Execution Process
- Approved Test Plan
- Documented Test Cases
- Stabilized test execution environment
- Standardized Test Logging Procedures

Outputs from the Test Execution Process
- Test Execution Log(s)
- Restored test environment
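A test execution log at its simplest just pairs each test case's expected and actual results and records a pass/fail status. A minimal sketch, with invented values:

    # Minimal execution log: compare actual behaviour against expectations.
    executed = [
        ("TC_LOGIN_001", {"screen": "Home"},  {"screen": "Home"}),
        ("TC_ORDER_002", {"total": "100.00"}, {"total": "99.00"}),
    ]

    log = []
    for name, expected, actual in executed:
        status = "PASS" if actual == expected else "FAIL"
        log.append({"test": name, "status": status,
                    "expected": expected, "actual": actual})

    for entry in log:
        print(entry["test"], entry["status"])
        if entry["status"] == "FAIL":
            print("  expected", entry["expected"], "got", entry["actual"])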
Test script execution
In this phase we execute the scripts that are already created. Scripts need to be reviewed and validated for results, and accepted as functioning as expected, before they are used live. Steps to be followed before and during execution of scripts:
1. The test tool is to be installed on the machine.
2. The test environment/application to be tested is to be installed on the machine.
3. Prerequisites for running the scripts, such as tool settings, playback options and any necessary data table or datapool updates, need to be taken care of.
4. Select the script that needs to be executed and run it.
5. Wait until execution is done.
6. Analyze the results via Test Manager or in the logs.

Test script execution process: test tool ready -> test-ready application -> tool settings and playback options -> script execution -> result analysis -> defect management.

17 General automation tool comparison

Anyone who has contemplated the implementation of an automated test tool has quickly realized the wide variety of options on the market, in terms of both the kinds of test tools being offered and the number of vendors. The best tool for any particular situation depends on the system engineering environment that applies and the testing methodology that will be used, which in turn will dictate how automation will be invoked to support the process. This appendix evaluates major tool vendors on their test tool characteristics, such as record and playback, test execution capability, test reporting capability, tool integration capability, performance testing and analysis, and vendor qualification. The tool vendors evaluated are Compuware, Mercury, Rational, Empirix/RSW and Segue.

17.1 Functional Test Tool Matrix
The tool matrix is provided for quick and easy reference to the capabilities of the test tools. A detailed description is given below of each of the categories used in the matrix. Each category in the matrix is given a rating of 1–5:
1 = Excellent support for this functionality.
2 = Good support, but lacking, or another tool provides more effective support.
3 = Basic support only.
4 = Supported only by use of an API call or third-party add-in, not included in the general test tool / below average.
5 = No support.
Usually the lower the score the better, but this is subjective and is based on the experience of the author and the opinions of the test professionals used to create this document. In general, a set of criteria can be built up using this matrix and an indicative score obtained to help in the evaluation process.

17.2 Record and Playback
This category details how easy it is to record and play back a test. This is very similar to recording a macro in, say, Microsoft Access. When automating, this is the first thing that most test professionals will do: record a simple script, look at the code, and then play it back. Eventually record and playback becomes less and less a part of the automation process, as it is usually more robust to use built-in functions to directly test objects. However, this should be done as a minimum in the evaluation process, because if the tool of choice cannot recognize the application's objects, the automation process will be a very tedious experience. Questions to ask: Does the tool support low-level recording (mouse drags, exact screen locations)? Is there object recognition when recording and playing back, or does it appear to record correctly but then fail on playback (without any environment or unique-ID changes)? How easy is it to read the recorded script?

17.3 Web Testing
Web-based functionality on most applications is now a part of everyday life. As such, the test tool should provide good web-based test functionality in addition to its client/server functions. Web testing can be riddled with problems if various considerations are not taken into account. With client/server testing the target customer is usually well defined: you know what network operating system you will be using, the applications and so on. On the web it is far different.
A person may be connecting from the USA or Africa; they may be disabled; they may use various browsers on various platforms; they will speak different languages; they may connect using a Mac, Linux or Windows; they will have fast connections and slow connections; and the screen resolutions on their computers will differ. So the cost to set up a test environment is usually greater than for a client/server test, where the environment is fairly well defined.

In judging the rating for this category I looked at the tools' native support for HTML tables, frames, the DOM, web site maps and links, and so on. Here are a few example questions:
• Are there functions to tell me when the page has finished loading?
• Can I tell the test tool to wait until an image appears?
• Can I test whether links are valid or not?
• Can I test web-based objects' functions, e.g. is it enabled, does it contain data?
• Are there facilities that will allow me to programmatically look for objects of a certain type on a web page, or locate a specific object?
• Can I extract data from the web page itself, e.g. the title, or a hidden form element?
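As a small illustration of the "are links valid" check above, the following sketch fetches a page with Python's standard library and reports the status of each absolute link it finds. It is a simplified sketch, not any vendor tool's API: a real tool would also handle relative links, redirects, frames and authentication, and the URL is a placeholder.

import re
import urllib.request
from urllib.error import URLError, HTTPError

def check_links(page_url):
    """Fetch a page and report the HTTP status of each absolute link."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    for link in re.findall(r'href="(https?://[^"]+)"', html):
        try:
            status = urllib.request.urlopen(link, timeout=10).status
        except HTTPError as e:
            status = e.code                     # e.g. 404 for a broken link
        except URLError as e:
            status = f"unreachable ({e.reason})"
        print(f"{link}: {status}")

check_links("http://www.example.com/")          # placeholder site under test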
A test framework has parallels to software frameworks, where you develop an encapsulation layer of software (the framework) around the applications under test. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions, databases, etc. This allows an inexperienced tester/user to run tests by simply providing the test framework with known commands/variables. Frameworks are usually the ultimate goal in deploying automated test tools, and the data and database functions described below become very important as you move from the record/playback phase, to data-driven testing, to framework testing. However, building a framework requires a lot of time, skilled resources and money.

17.4 Database Tests
Most applications provide the facility to preserve data outside of themselves, usually by holding the data in a database. Proper validation of tests carried out on the front end of an application is usually verified by checking the data in the back-end database. Although there are many databases available, e.g. Oracle, Sybase, SQL Server, DB2, Informix, Ingres, etc., all of them support a universal query language known as SQL and a protocol for communicating with them called ODBC (JDBC can be used in Java environments). I have looked at each tool's support for SQL and ODBC, how it holds returned data (in an array, a variable, a cursor?), and how it manipulates this returned data. Can it call stored procedures and supply the required input variables? What is the range of functions supplied for this testing?

17.5 Data Functions
As mentioned above, applications usually provide a facility for storing data offline; however, applications (other than by manual input) do not usually provide facilities for bulk data input. To test this, we need to create data to feed into the application. Data-driven tests are tests that replace hard-coded names, addresses, numbers, etc. with variables supplied from an external source, usually a CSV (comma-separated variable) file, spreadsheet or database. I have looked at all the tools' facilities for creating and manipulating data. Does the tool allow you to specify the type of data you want? Can you automatically generate data? Can you interface with files, spreadsheets, databases, etc. to create and extract data? Can you randomize the access to that data? Is the data access truly random? This functionality is normally more important than the database tests, as the databases will usually have their own interface for running queries. An added benefit (as I have found) is that this functionality can be used for production purposes, e.g. for the aforementioned bulk data input sometimes carried out in data migrations or application upgrades.

17.6 Object Mapping
If you are in a role that can help influence the design of a product, try to get the development/design team to use standard rather than custom objects; then hopefully you will not need this functionality. You may find that most (hopefully) of the application has been implemented using standard objects supported by your test tool vendor, but there may be a few objects that are custom ones. Most custom objects will behave like a similar standard control. Here are a few standard objects seen in everyday applications:
• Push buttons
• Checkboxes
• Radio buttons
• List views
• Edit boxes
• Combo boxes
If you have a custom object that behaves like one of these, are you able to map it (tell the test tool that the custom control behaves like the standard control)? Does it support all the standard control's methods? Can you add the custom control to its own class of control? You may also need this when the application has painted controls, like those in the calculator applet found on a lot of Windows systems. At least one of the tools allows you to map painted controls to standard controls, but to do this you have to rely on the screen co-ordinates of the image.
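Returning briefly to the back-end check described in 17.4: in a vendor-neutral sketch, the idea looks like the following. Python's built-in sqlite3 module stands in for an ODBC/JDBC connection, and the table and column names are invented for the example.

import sqlite3

# Stand-in for the application's back-end database (normally reached
# over ODBC/JDBC rather than a local in-memory database).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no TEXT, part_name TEXT, quantity INTEGER)")

# Imagine the front-end test has just submitted this order...
conn.execute("INSERT INTO orders VALUES ('A-1001', 'widget', 5)")

# ...now verify the back end: the returned data lands in a cursor,
# from which rows are fetched and compared to the expected values.
cursor = conn.execute(
    "SELECT part_name, quantity FROM orders WHERE order_no = ?", ("A-1001",)
)
row = cursor.fetchone()
assert row == ("widget", 5), f"back-end mismatch: {row}"
print("back-end verification passed:", row)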
Does the tool allow you to mask certain areas of the screen when comparing, so that they are ignored? If the compare fails, how long does that take?

17.8 Test/Error Recovery
This can be one of the most difficult areas to automate, but if it is automated it provides the foundation to produce a truly robust test suite. Suppose the application crashes while I am testing: what can I do? If a function does not receive the correct information, how can I handle this? If I get an error message, how do I deal with that? If I access a web site and get a warning, what do I do? If I cannot get a database connection, how do I skip those tests? The test tool should provide facilities to handle these questions. I looked at the built-in wizards of the test tools for standard test recovery (when you finish tests or when a script fails) and for error recovery caused by the application and environment, and I have looked at these facilities in the base tool set. How easy is it to build this into your code? The rating given depends on how many errors the tool can capture, the types of errors, how it recovers from errors, and so on.

17.9 Object Name Map
As you test your application using the test tool of your choice, you will notice that it records actions against the objects that it interacts with. The tool should provide services to uniquely identify each object it interacts with, and by various means: objects are identified either through co-ordinates on the screen or, preferably, via some unique object reference, referred to as a tag, object ID, name, index, etc. The last and least desirable should be by co-ordinates on the screen. Once you are well into automation and have built up 10's and 100's of scripts that reference these objects, you will want a mechanism that provides an easy update if the application being tested changes. The premise is that it is better to change the reference in one place rather than having to go through each of the scripts to replace it there. All tools provide a search-and-replace facility, but the best implementations are those that provide a central repository to store these object identities. I found this to be true, but not as big a point as some have stated, because the tools that don't support the central-repository scheme can be programmed to reference window and object names in one place (say, via a variable), and that variable can be used throughout the script wherever that object appears. Does the object name map allow you to alias the name, or change the name given by the tool to some more meaningful name?
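The central-repository idea is easy to picture in code: scripts refer to objects by a logical alias, and the physical identifier lives in one map that is updated when the application changes. A minimal sketch, with invented control names:

# Central object name map: logical alias -> physical identifier (tag/ID/name).
# When the application changes, only this map needs updating, not the scripts.
OBJECT_MAP = {
    "LoginButton":  {"type": "pushbutton", "name": "btnLogin_v2"},
    "UserNameEdit": {"type": "editbox",    "name": "txtUser"},
}

def find_object(alias):
    """Resolve a logical alias to its current physical identifier."""
    try:
        return OBJECT_MAP[alias]
    except KeyError:
        raise LookupError(f"no entry in object map for alias '{alias}'")

# Scripts use only the alias, so renaming 'btnLogin' to 'btnLogin_v2'
# in the application is a one-line change in the map.
print("clicking", find_object("LoginButton"))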
17.10 Object Identity Tool
Once you become more proficient with automation testing, one of the primary means of identifying objects will be via an ID tool: a sort of spy that looks at the internals of an object, giving you details like the object's name, ID and similar. The tool should give you details of some of the object's properties, especially those associated with uniquely identifying the object or window. The tool will usually provide the tester with a point-and-ID service, where you can use the mouse to point at the object and, in some window, see all of that object's IDs and properties. A lot of the tools will also allow you to search all the open applications in one sweep and show you the result in a tree that you can look at when required. This allows you to reference the object within a function call.

17.11 Extensible Language
Here is a question that you will hear time and time again in automation forums: "How do I get {insert test tool name here} to do such and such?" There will be one of four answers:
• I don't know
• It can't do it
• It can do it using function X, Y or Z
• It can't in the standard language, but you can do it like this
What we are concerned with in this section is the last answer: if the standard test language does not support something, can I create a DLL or extend the language in some way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6–12 months; those who reach this level should have already exhausted the current capabilities of the tools, want to use external functions like Win32 API calls, and have a good grasp of programming. Because this is an advanced topic, I have not taken ease of use into account here. Some tools provide extension by allowing you to create user-defined functions, methods, classes, etc., but these are normally a mixture of the already-supported data types and functions rather than an extension of the tool beyond its released functionality. If extension is via DLLs, the tester must have knowledge of a traditional development language, e.g. C, C++ or VB. For instance, to extend a tool that can use DLLs created by VB, I would need Visual Basic: open, say, an ActiveX DLL project, create a class containing various methods (similar to functions), make a DLL file, register it on the machine, then reference that DLL from the test tool, calling the methods according to their specification. This will sound a lot clearer as you go on with the tools, and this document will be updated to include advanced topics like this on extending the tools' capabilities.

17.12 Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release? What about Oracle, PowerBuilder, WAP, etc.? This is becoming more and more important, and ultimately it is the most important part of automation: if the tool does not support your environment/application, then you are in trouble, and in most cases you will need to revert to testing the application manually (more shelfware). Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will, or has time to, do this is another question.

17.13 Integration
How well does the tool integrate with other tools? Integration becomes very important, rather than having separate systems that don't share data, which may require duplication of information. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word, Excel or requirements management tools? When managing large test projects, with an automation team greater than five and testers totalling more than ten, the management aspect and the tools' integration move further up the importance ladder. Consider also how the bugs raised as a result of automated testing are managed. An example could be a major bank that wants to redesign its workflow management system to allow faster processing of customer queries: the anticipated requirements for the new workflow software number in the thousands, and to test these requirements 40,000 test cases have been identified, of which 20,000 can be automated. How do I manage this? This is where a test management tool comes in really handy.
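As a tiny illustration of the integration point, feeding test-log information straight into a defect-tracking system, the sketch below turns failed log entries into defect records. The log format and the defects.json output are invented for the example; a real integration would call the defect tool's own API instead.

import json

# Invented test-log entries: (test case, status, detail)
log = [
    ("TC-101 login",  "PASS", ""),
    ("TC-205 search", "FAIL", "timeout after 30s"),
    ("TC-310 report", "FAIL", "object 'PrintButton' not found"),
]

# Raise one defect per failure, carrying the log detail with it.
defects = [
    {"summary": f"{case} failed", "detail": detail, "severity": "medium"}
    for case, status, detail in log if status == "FAIL"
]

with open("defects.json", "w") as f:
    json.dump(defects, f, indent=2)   # stand-in for a defect tool's API call

print(f"raised {len(defects)} defect(s) from the test log")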
The companies that score better on integration are those that provide tools outside the testing arena, as they can build in integration to their other products; when it has come down to the wire on some projects, we have gone with the tool that integrated with the products we already had.

17.14 Cost
In my opinion, cost is the least significant factor in this matrix. Why? Because all the tools are similar in price, except Visual Test, which is at least five times cheaper than the rest; but, as you will see from the matrix, there is a reason: although very functional, it does not provide the range of facilities that the other tools do. Visual Test also provides a free runtime license, and I believe it will prove a bigger hit as it expands its functional range; it was not that long ago that it did not support web-based testing. Price typically ranges from $2,900 to $5,000 (depending on quantity bought, etc.) in the US, and around £2,900 to £5,000 in the UK, for the base tools included in this document. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. There are not many applications I know of that cost this much per license, not even some very advanced operating systems. The prices are kept this high because they can be: the volume of sales is low relative to, say, a fully blown programming language IDE like JBuilder or Visual C++, which are a lot more function-rich and flexible than any of the test tools. It is all a matter of supply: the bigger the supply, the lower the price, as you can spread the development costs more. However, I do not anticipate prices moving upwards, as this seems to be the price the market will tolerate. So, since the tools will all cost a similar price, it is usually a case of which one will do the job rather than which is the cheapest.

17.15 Ease of Use
This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them from scratch to using each of the tools. In more cases than not they have agreed on which was the easiest to use (initially). Ease of use includes out-of-the-box functions, debugging facilities, help files and user manuals. Obviously this can change as the tester becomes more experienced and issues such as extensibility, integration, script maintenance and data-driven tests become requirements; this score, however, is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern.

17.16 Support
In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. Just from my own experience, and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers rather than ringing the support line, and on their websites Segue and Mercury provide many useful user- and vendor-contributed materials. I have also included various other criteria, like the availability of skilled resources, online resources, validity of responses from the helpdesk, speed of responses and similar.

17.17 Object Tests
Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web-based applications.

17.18 Matrix
What will follow after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above), so that the reader can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1–5:
1 = Excellent support for this functionality.
2 = Good support, but lacking, or another tool provides more effective support.
3 = Basic support only.
4 = Supported only by use of an API call or third-party add-in, not included in the general test tool / below average.
5 = No support.

The matrix rates WinRunner, QARun, SilkTest, Visual Test and Robot against the categories described above (Record & Playback, Web Testing, Database Tests, Data Functions, Object Mapping, Image Testing, Test/Error Recovery, Object Name Map, Object Identity Tool, Extensible Language, Environment Support, Integration, Cost, Ease of Use, Support and Object Tests). The alignment of the individual 1–5 cell ratings did not survive document conversion, so only the row and column headings are reproduced here, together with the totals below.

17.19 Matrix Score
Totalling the ratings gives an indicative score for each tool (lower is better):
• WinRunner = 24
• QARun = 25
• SilkTest = 24
• Visual Test = 39
• Robot = 24
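Since the matrix ratings are simple 1–5 numbers, an indicative score like the ones above is just a sum over the chosen criteria, optionally weighted if some categories matter more for your project. A minimal sketch follows; the per-category ratings for the two tools are illustrative only, not taken from the matrix.

# Ratings use the matrix scale: 1 = excellent ... 5 = no support.
# The per-category numbers here are illustrative, not the matrix values.
ratings = {
    "WinRunner":  {"record_playback": 2, "web_testing": 1, "database_tests": 2},
    "VisualTest": {"record_playback": 3, "web_testing": 4, "database_tests": 4},
}

# Weight the categories that matter most for this project.
weights = {"record_playback": 1.0, "web_testing": 2.0, "database_tests": 1.0}

def indicative_score(tool):
    """Lower is better, as in the matrix."""
    return sum(weights[c] * r for c, r in ratings[tool].items())

for tool in ratings:
    print(tool, indicative_score(tool))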
18 Sample Test Automation Tool
Rational offers the most complete lifecycle toolset (including testing) of these vendors for the Windows platform. Some of their products are worldwide leaders, e.g. Rational Rose, RequisitePro and ClearCase. When it comes to object-oriented development they are the acknowledged leaders, with most of the leading OO experts working for them. Their Unified Process, which I have been involved with, is a very good development model that allows mapping of requirements to use cases and test cases, with a whole set of tools to support the process.

18.1 Rational Suite of Tools
Rational RequisitePro is a requirements management tool that helps project teams control the development process. RequisitePro organizes your requirements by linking Microsoft Word to a requirements repository and providing traceability and change management throughout the project lifecycle. A baseline version of RequisitePro is included with Rational Test Manager; when you define a test requirement in RequisitePro, you can access it in Test Manager.

Rational ClearQuest is a change-request management tool that tracks and manages defects and change requests throughout the development process. With ClearQuest, you can manage every type of change activity associated with software development, including enhancement requests, defect reports and documentation modifications.

Rational Purify is a comprehensive C/C++ run-time error checking tool that automatically pinpoints run-time errors and memory leaks in all components of an application, including third-party libraries, ensuring that code is reliable.

Rational Quantify is an advanced performance profiler that provides application performance analysis, enabling developers to quickly find, prioritize and eliminate performance bottlenecks within an application.

Rational PureCoverage is a customizable code coverage analysis tool that provides detailed application analysis and ensures that all code has been exercised, preventing untested code from reaching the end user.

Rational Suite PerformanceStudio is a sophisticated tool for automating performance tests on client/server systems. A client/server system includes client applications accessing a database or application server, and browsers accessing a web server. PerformanceStudio includes Rational Robot and Rational LoadTest: use Robot to record client/server conversations and store them in scripts, and use LoadTest to schedule and play back the scripts and to capture and analyze the results. Rational LoadTest can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and web servers.

Rational Test Factory automates testing by combining automatic test generation with source-code coverage analysis. It tests an entire application, including all GUI features and all lines of source code.

Rational Test categorizes test information within a repository by project. The tools to be discussed here are:
• Rational Administrator
• Rational Robot
• Rational Test Manager

18.2 Rational Administrator
What is a Rational project? A Rational project is a logical collection of databases and data stores that associates the data you use when working with Rational Suite, and optionally places them under configuration management. A Rational project is associated with one Rational Test data store, one RequisitePro database, one ClearQuest database, and multiple Rose models and RequisitePro projects. Rational Administrator is used to create and manage projects, users and groups, and to manage security privileges, and to create and manage Rational repositories.

How to create a new project:
1. Open Rational Administrator and go to File -> New Project.
2. In the window displayed, enter details such as the project name and location, and click Next.
3. In the corresponding window, enter a password if you want to protect the project with one, and click Finish.
4. In the Configure Project window displayed, click the Create button. To manage requirements assets, connect to RequisitePro; to manage test assets, create the associated Test data store; for defect management, connect to a ClearQuest database.
5. Once the Create button in the Configure Project window is chosen, the Create Test Data Store window will be displayed. Accept the default path and click OK. When the confirmation window is displayed, the test data store has been successfully created; click OK to close it.
6. Click OK in the Configure Project window, and your first Rational project is ready to play with. Rational Administrator will display the "TestProject" details.

18.3 Rational Robot
Rational Robot is used to develop three kinds of scripts: GUI scripts for functional testing, and VU and VB scripts for performance testing. Use Robot and Test Manager together to record and play back scripts that help you determine whether a multi-client system is performing within user-defined standards under varying loads. Robot can be used to:
• Perform full functional testing: record and play back scripts that navigate through your application and test the state of objects through verification points.
• Perform full performance testing under varying loads.
• Create and edit scripts using the SQABasic, VB and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development.
• Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML and Java. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.
• Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Quantify and PureCoverage; you can play back scripts under a diagnostic tool and see the results in the log.

The Object-Oriented Recording technology in Robot lets you generate scripts quickly, simply by running and using the application-under-test. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates: if objects change location or their text changes, Robot still finds them on playback. The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data; you can test objects even if they are not visible in the application's interface.

18.4 Robot Login Window
Once logged in, you will see the Robot window.

18.5 Rational Robot Main Window – GUI Script
The GUI Script window (top pane) displays the GUI scripts that you are currently recording, editing or debugging. It has two panes:
• Asset pane (left) – lists the names of all verification points and low-level scripts for this script.
• Script pane (right) – displays the script.
The Output window (bottom pane) has two tabs:
• Build – displays compilation results for all scripts compiled in the last operation. Line numbers are enclosed in parentheses to indicate lines in the script with warnings and errors.
• Console – displays messages that you send with the SQAConsoleWrite command, as well as certain system messages from Robot.
To display the Output window, click View -> Output.

How to record and play back a script: go to File -> New -> Script, and in the screen displayed enter the name of the script, say "First Script", by which the script will be referred to from now on, plus a description (not mandatory). The type of the script is GUI for functional testing and VU for performance testing. To record, go to Record -> Insert At Cursor, perform the navigation in the application to be tested, and once recording is done, stop it with Record -> Stop.

18.6 Record and Playback Options
Go to Tools -> GUI Record Options and an options window will be displayed, in which we can set:
• General tab – general options such as the identification of lists and menus, and the recording think time.
• Web Browser tab – the browser type, IE or Netscape.
• Robot Window tab – how Robot should be displayed during recording, and hot-key details.
• Object Recognition Order tab – the order in which objects are identified during recording. For example, select a preference in the Object Order Preference list: if you will be testing C++ applications, change the object order preference to C++ Recognition Order.
18.6.1 Playback Options
Go to Tools -> Playback Options to set the options needed while running the script, such as error recovery and the management of logs and log data, and to mention the time-out period. This will help you to handle unexpected windows during playback.

18.7 Verification Points
A verification point is a point in a script that you create to confirm the state of an object across builds of the application-under-test. During recording, the verification point captures object information (based on the type of verification point) and stores it in a baseline data file; the information in this file becomes the baseline of the expected state of the object during subsequent builds. When you play back the script against a new build, Robot retrieves the information in the baseline file for each verification point and compares it to the state of the object in the new build. If the captured object does not match the baseline, Robot creates an actual data file; the information in this file shows the actual state of the object in the build. If a verification point fails (the baseline and actual data do not match), you can select the verification point in the log and click View -> Verification Point to open the appropriate Comparator, which displays the baseline and actual files so that you can compare them. After playback, the results of each verification point appear in the log in Test Manager.

A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window, and the verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.

18.7.1 List of Verification Points
The following table summarizes each Robot verification point (type – description):
• Alphanumeric – captures and compares alphabetic or numeric values.
• Clipboard – captures and compares alphanumeric data that has been copied to the Clipboard.
• File Comparison – compares the contents of two files.
• File Existence – checks for the existence of a specified file.
• Menu – captures and compares the text, accelerator keys and state of menus; captures up to five levels of sub-menus.
• Module Existence – checks whether a specified module is loaded into a specified context (process), or is loaded anywhere in memory.
• Object Data – captures and compares the data in objects.
• Object Properties – captures and compares the properties of objects.
• Region Image – captures and compares a region of the screen (as a bitmap).
• Web Site Compare – captures a baseline of a web site and compares it to the web site at another point in time.
• Web Site Scan – checks the content of a web site with every revision and ensures that changes have not resulted in defects.
• Window Existence – checks that the specified window is displayed before continuing with the playback.
• Window Image – captures and compares the client area of a window as a bitmap (the menu, title bar and border are not captured).

18.8 About SQABasic Header Files
SQABasic header files let you declare custom procedures, constants and variables that you want to use with multiple scripts or SQABasic library source files. SQABasic files are stored in the SQABas32 folder of the project, unless you specify another location. You can specify another location by clicking Tools -> General Options, clicking the Preferences tab and, under SQABasic path, using the Browse button to find the location. Robot will check this location first; if the file is not there, it will look in the SQABas32 directory. SQABasic header files have the extension .sbh and can be accessed by all modules within the project. You can use Robot to create and edit SQABasic header files.

18.9 Adding Declarations to the Global Header File
For your convenience, Robot provides a blank header file called Global.sbh, a project-wide header file stored in SQABas32 in the project. You can add declarations to this global header file and/or create your own. To open Global.sbh:
1. Click File -> Open -> SQABasic File.
2. Set the file type to Header Files (*.sbh).
3. Select global.sbh, and then click Open.
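Returning to verification points for a moment: the baseline/actual mechanism can be pictured with a few lines of code. The sketch below is a conceptual illustration in plain Python, not Robot's SQABasic API. It captures an object's properties as a dictionary, saves the first capture as the baseline, and on later runs writes an "actual" file and reports the mismatch.

import json
from pathlib import Path

def verify_object_properties(name, properties):
    """Compare captured properties against a stored baseline.

    The first run stores the capture as the baseline; later runs compare
    against it, writing an 'actual' file on mismatch (which a
    comparator-style diff could then display side by side).
    """
    baseline = Path(f"{name}.baseline.json")
    if not baseline.exists():
        baseline.write_text(json.dumps(properties, indent=2))
        return "baseline captured"
    expected = json.loads(baseline.read_text())
    if properties == expected:
        return "PASS"
    Path(f"{name}.actual.json").write_text(json.dumps(properties, indent=2))
    return f"FAIL: differs from baseline {baseline}"

print(verify_object_properties("login_button", {"enabled": True, "text": "Login"}))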
18.10 Inserting a Comment into a GUI Script
During recording or editing, you can insert lines of comment text into a GUI script. Comments are helpful for documenting and editing scripts; Robot ignores comments at compile time. To insert a comment into a script during recording or editing:
1. If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.
Robot inserts the comment into the script (in green by default), preceded by a single quotation mark. For example:
' This is a comment in the script
To change lines of text into comments, or to uncomment text:
1. Highlight the text.
2. Click Edit -> Comment Line or Edit -> Uncomment Line.

18.11 About Datapools
A datapool is a test dataset. It supplies data values to the variables in a script during script playback. Datapools let you automatically pump test data to virtual testers under high-volume conditions that potentially involve hundreds of virtual testers performing thousands of transactions. Typically, you use a datapool so that:
• each virtual tester that runs the script can send realistic data (which can include unique data) to the server;
• a single virtual tester that performs the same transaction multiple times can send realistic data to the server in each transaction.
Robot adds datapool commands to VU scripts automatically.

18.11.1 Using Datapools with GUI Scripts
If you are providing one or more values to the client application during GUI recording, you might be filling out a data entry form and providing values such as order number, part name, and so forth. If you plan to repeat the transaction multiple times during playback, you might want a datapool to supply a different set of those values each time. A GUI script can access a datapool when it is played back in Robot.
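Before detailing the setup differences, here is what a datapool amounts to conceptually: an external table of rows that playback code draws from on each iteration. The sketch below mimics that in plain Python; it is not Robot's SQADatapoolOpen API, just an illustration of sequential and random access, with the rows inlined (normally they would live in a CSV file or database).

import itertools
import random

class DataPool:
    """Tiny stand-in for a test datapool (normally backed by a CSV
    file or database; rows are inlined here so the sketch runs as-is)."""
    def __init__(self, rows):
        self.rows = rows
        self._sequential = itertools.cycle(rows)   # wraps when exhausted

    def fetch_sequential(self):
        return next(self._sequential)

    def fetch_random(self):
        return random.choice(self.rows)

pool = DataPool([
    {"order_no": "A-1001", "part": "widget", "qty": 5},
    {"order_no": "A-1002", "part": "gadget", "qty": 2},
])
for _ in range(3):                      # three iterations of the same transaction
    print("submitting:", pool.fetch_sequential())   # fresh data row each pass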
There are differences in the way GUI scripts and sessions are set up for datapool access:
• You must add datapool commands to GUI scripts manually while editing the script in Robot; there is no DATAPOOL_CONFIG statement in a GUI script. The SQADatapoolOpen command defines the access method to use for the datapool.
• When a GUI script is played back in a Test Manager suite, the GUI script can access the same datapool as other scripts.
Although there are differences in setting up datapool access in GUI scripts and sessions, you define a datapool for either type of script using Test Manager in exactly the same way.

18.12 Debug Menu
The Debug menu has the following commands: Go, Go Until Cursor, Animate, Pause, Stop, Set or Clear Breakpoints, Clear All Breakpoints, Step Over, Step Into and Step Out. Note: the Debug menu commands are for use with GUI scripts only.

18.13 Compiling the Script
When you play back a GUI script or VU script, or when you debug a GUI script, Robot compiles the script if it has been modified since it last ran. You can also compile scripts and SQABasic library source files manually:
• To compile the active script or library source file, click File -> Compile.
• To compile all scripts and library source files in the current project, click File -> Compile All. Use this if, for example, you have made changes to global definitions that may affect all of your SQABasic files.
During compilation, the Build tab in the Output window displays compilation results and error messages, with line numbers, for all compiled scripts and library source files.

18.14 Compilation Errors
After the script is created and compiled, and any errors are fixed, it can be executed. The results then need to be analyzed in Test Manager.

19 Rational Test Manager
Test Manager is the open and extensible framework that unites all of the tools, assets and data both related to and produced by the testing effort. Under this single framework, all participants in the testing effort can define and refine the quality goals they are working toward; it is where the team defines the plan it will implement to meet those goals. Most importantly, it provides the entire team with one place to go to determine the state of the system at any time. With Test Manager you can:
• plan, design, implement and execute tests, and evaluate results;
• create and manage builds, log folders and logs;
• create and manage datapools and data types;
• create and run reports. The reporting tools help you track assets such as scripts, builds and test documents, and track test coverage and progress.
When script execution is started, a window is displayed in which the folder in which the log is to be stored, and the log name, need to be given.

19.1 Test Manager – Results Screen
In the Results tab of Test Manager, you can see the stored results. From Test Manager you can know the start time of the script and other execution details.
20 Supported Environments

20.1 Operating Systems
• Windows NT 4.0 with Service Pack 5
• Windows 2000
• Windows XP (Rational 2002)
• Windows 98
• Windows 95 with Service Pack 1

20.2 Protocols
• Oracle
• SQL Server
• HTTP
• Sybase
• Tuxedo
• SAP
• PeopleSoft

20.3 Web Browsers
• IE 4.0 or later
• Netscape Navigator (limited support)

20.4 Markup Languages
• HTML and DHTML pages on IE 4.0 or above

20.5 Development Environments
• Visual Basic 4.0 and above
• Visual C++
• Java
• Oracle Forms 4.5
• Delphi
• PowerBuilder 5.0 and above

The basic product supports Visual Basic, VC++ and basic web pages. To test other types of application, you have to download and run a free enabler program from Rational's website. For more details visit www.rational.com.

21 Performance Testing
Performance testing is a measure of the performance characteristics of an application. The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main objective is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes in real time against a production-size database.

21.1 What is Performance Testing?
Performance testing of an application is basically the process of understanding how the web application and its operating environment respond at various user load levels. In general, we want to measure the latency, throughput and utilization of the web site while simulating attempts by virtual users to simultaneously access the site. One of the main objectives of performance testing is to maintain a web site with low latency, high throughput and low utilization.

21.2 Why Performance Testing?
Performance problems are usually the result of contention for, or exhaustion of, some system resource. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Maintaining optimum web application performance is a top priority for application developers and administrators, and a systematic approach like performance analysis is essential to extract maximum benefit from an existing system. Performance analysis is carried out for various purposes, such as:
• during a design or redesign of a module or a part of the system, when more than one alternative presents itself; in such cases the evaluation of a design alternative is the prime mover for an analysis;
• when post-deployment realities create a need for tuning the existing system;
• identification of bottlenecks in a system, which is more of an effort at troubleshooting and helps to focus efforts at improving overall system response;
• as the user base grows, when the cost of failure becomes increasingly unbearable and analysis must be done to forecast performance under load.
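Before turning to objectives, here is a minimal sketch of how the latency and throughput measures mentioned in 21.1 can be taken. It fires a batch of concurrent requests at a URL with Python's standard library and reports per-request latency and overall throughput; it is a toy illustration of the measurement idea, not a substitute for a load-testing tool, and the URL and user count are placeholders.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://www.example.com/"     # placeholder application under test
VIRTUAL_USERS = 10

def timed_request(_):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    return time.perf_counter() - start

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    latencies = list(pool.map(timed_request, range(VIRTUAL_USERS)))
elapsed = time.perf_counter() - wall_start

print(f"mean latency : {sum(latencies) / len(latencies):.3f} s")
print(f"throughput   : {len(latencies) / elapsed:.1f} requests/s")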
Performance testing also increases confidence and provides an advance warning of potential problems under load conditions. Typically, to debug applications, developers execute their applications using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features; however, it is still an issue.

21.3 Performance Testing Objectives
The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods. This infrastructure is an asset, and an expensive one too, so it pays to make as much use of it as possible. Fortunately, a comprehensive test strategy will define a test infrastructure (a test bed) that can be re-used for other tests with broader objectives. The performance testing goals are:
• end-to-end transaction response time measurements;
• measuring application server component performance under various loads;
• measuring database component performance under various loads;
• monitoring system resources under various loads;
• measuring the network delay between the server and clients.

21.4 Pre-Requisites for Performance Testing
We can identify five pre-requisites for a performance test. Not all of these need be in place prior to planning or preparing the test (although this might be helpful); rather, the list defines what is required before a test can be executed. First and foremost, the design specification, or a separate performance requirements document, should:
• define specific performance goals for each feature that is instrumented;
• base performance goals on customer requirements;
• define specific customer scenarios.

Quantitative, relevant, measurable, realistic, achievable requirements
As a foundation for all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. The following attributes help to make a performance comparison meaningful:
• Quantitative – expressed in quantifiable terms, such that when response times are measured, a sensible comparison can be derived.
• Relevant – a response time must be relevant to a business process.
• Measurable – a response time should be defined such that it can be measured using a tool or stopwatch, and at reasonable cost.
• Realistic – response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
• Achievable – response times should take some account of the cost of achieving them.

Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating systems crash.

Realistic test environment
The test environment should ideally be the production environment, or a close simulation, and be dedicated to the performance test team for the duration of the test. Often this is not possible; however, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. Even with an environment that is somewhat different from the production environment, it should still be possible to interpret the results obtained, using a model of the system, to predict with some confidence the behavior of the target environment. A test environment that bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but is useless for a performance test.
21.5 Performance Requirements
Performance requirements normally comprise three components:
• response time requirements;
• transaction volumes, detailed in load profiles;
• database volumes.

Response time requirements
When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but it is unreasonable: some functions are critical and require short response times, while others are less critical and their response time requirements can be less stringent.

Load profiles
The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations in which the users' organization has different levels of activity, or involve a varying mix of activities that must be supported by the system.

Database volumes
Data volumes, defining the numbers of table rows that should be present in the database after a specified period of live running, complete the load profile. Typically, the data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.

22 Performance Testing Process
The performance testing process flows as follows, with each phase producing a deliverable: Requirements Study (requirement collection) -> Test Plan preparation (test plan) -> Test Design preparation (test design) -> Scripting (test scripts) -> Test Execution with pre-test and post-test procedures -> Test Analysis (preliminary report, an internal deliverable). If the performance goal is not reached, the cycle repeats; once the goal is reached, the final phase is Preparation of Reports (final report).

22.1 Phase 1 – Requirements Study
This activity is carried out during the business and technical requirements identification phase. The objective is to understand the performance test requirements, and it is important to understand, as accurately and as objectively as possible, the nature of the load that must be generated. The following important performance test requirements need to be captured during this phase:
• Response time
• Transactions per second
• Hits per second
• Workload
• Number of concurrent users
• Volume of data
• Data growth rate
• Resource usage
• Hardware and software configurations

Activity: Requirements Collection (performance test, stress test, load test, volume test, endurance test). Work items:
• Understand the system and application model
• Decide on the type and mode of testing
• Browser emulation and automation tool selection
• Operational inputs – time of testing, spike requirements, client-side and server-side parameters
• Server-side and client-side hardware and software components and usage model

22.1.2 Deliverables
Deliverable: Requirement Collection (sample: RequirementCollection.doc).
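Requirements captured in this phase (response time, transactions per second, and so on) are only useful if they are quantitative and checkable. As an illustration of how such a target is typically verified later, against a high percentile of measured response times rather than the average, here is a minimal sketch; the 3-second/95th-percentile target and the sample data are invented.

def percentile(samples, pct):
    """Return the pct-th percentile (nearest-rank) of a list of samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Measured response times in seconds (illustrative data).
response_times = [0.8, 1.2, 0.9, 2.7, 1.1, 3.4, 1.0, 1.3, 0.7, 1.9]

REQUIREMENT = 3.0   # e.g. "95% of transactions complete within 3 seconds"
p95 = percentile(response_times, 95)
print(f"95th percentile = {p95:.1f}s ->", "PASS" if p95 <= REQUIREMENT else "FAIL")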
22.2 Phase 2 – Test Plan
The following configuration information will be identified as part of performance testing environment requirement identification.

Hardware platform:
• Server machines
• Processors
• Memory
• Disk storage
• Load machine configuration
• Network configuration

Software configuration:
• Operating system
• Server software
• Client machine software
• Applications

Activity: Test Plan preparation. Work items:
• Hardware and software details
• Test data
• Transaction traversals that are to be tested, with sleep times
• Periodic status updates to the client

22.2.1 Deliverables
Deliverable: Test Plan (sample: TestPlan.doc).

22.3 Phase 3 – Test Design
Based on the test strategy, detailed test scenarios are prepared. During the test design period the following activities are carried out:
• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script recording/programming
• Script customization (delays, checkpoints, synchronization points)
• Data generation
• Parameterization/data pooling

Activity: Test Design generation. Work items:
• Hardware and software requirements, including the server components and the load generators used
• Setting up the monitoring servers
• Setting up the data
• Preparing all the necessary folders for saving the results as each test is over
• Pre-test and post-test procedures

22.3.1 Deliverables
Deliverable: Test Design (sample: TestDesign.doc).

22.4 Phase 4 – Scripting
Activity: Scripting. Work items:
• Browse through the application and record the transactions with the tool
• Parameterization, error checks and validations
• Run the script for a single user to check the validity of the scripts

22.4.1 Deliverables
Deliverable: Test Scripts (sample: Sample Script.doc).

22.5 Phase 5 – Test Execution
The test execution will follow the various types of test identified in the test plan. Virtual user loads are simulated based on the usage pattern, and load levels are applied as stated in the performance test strategy. All the scenarios identified will be executed. The following artifacts are produced during the test execution period:
• Test logs
• Test results

Activity: Test Execution. Work items:
• Starting the pre-test procedure scripts, which include start scripts for server monitoring
• Test script execution
• Modification of automated scripts if necessary
• Test result analysis and report preparation for every cycle

22.5.1 Deliverables
Deliverable: Test Execution (samples: Time Sheet.doc, Run Logs.doc).

22.6 Phase 6 – Test Analysis
Activity: Test Analysis. Work items:
• Analyzing the run results and preparation of the preliminary report.

22.6.1 Deliverables
Deliverable: Test Analysis (sample: Preliminary Report.doc).

22.7 Phase 7 – Preparation of Reports
The test logs and results generated are analyzed based on performance under various loads: think time, transactions per second, response time (delay), database throughput, network throughput, resource usage, network delay, database organization, transaction distribution and data handling. Manual and automated results analysis methods can be used for performance results analysis.
The following performance test reports/graphs can be generated as part of performance testing:
• Transaction response time
• Transactions per second
• Transaction summary graph
• Transaction performance summary graph
• Transaction response under load graph
• Virtual user summary graph
• Error statistics graph
• Hits per second graph
• Throughput graph
• Downloads per second graph

Based on the analysis of the performance reports, suggestions on improvement or tuning will be provided to the design team, for example:
• performance improvements to application software, middleware or database organization;
• changes to server system parameters;
• upgrades to client or server hardware, network capacity or routing.

Activity: Preparation of Reports. Work item: preparation of the final report.

22.7.1 Deliverables
Deliverable: Final Report (sample: Final Report.doc).

22.8 Common Mistakes in Performance Testing
• No goals. There is no general-purpose model: the goals determine the techniques, metrics and workload, and defining them is not trivial.
• Biased goals, e.g. "to show that OUR system is better than THEIRS", where the analysts become the jury.
• Unsystematic approach.
• Analysis without understanding the problem.
• Incorrect performance metrics.
• Unrepresentative workload.
• Wrong evaluation technique.
• Overlooking important parameters.
• Ignoring significant factors.
• Inappropriate experimental design.
• Inappropriate level of detail.
• No analysis.
• Erroneous analysis.
• No sensitivity analysis.
• Ignoring errors in input.
• Improper treatment of outliers.
• Assuming no change in the future.
• Ignoring variability.
• Too complex an analysis.
• Improper presentation of results.
• Ignoring social aspects.
• Omitting assumptions and limitations.
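Several of the graphs listed under 22.7 above (transactions per second, response-time summaries) boil down to simple aggregations over the raw test log. Here is a minimal sketch of that aggregation, assuming an invented log format of (timestamp in seconds, transaction name, response time in seconds) entries.

from collections import defaultdict

# Invented raw log entries: (timestamp s, transaction name, response time s)
log = [
    (0.2, "login", 1.1), (0.7, "search", 0.6), (1.1, "login", 1.4),
    (1.6, "search", 0.5), (2.3, "checkout", 2.9), (2.8, "search", 0.7),
]

by_txn = defaultdict(list)
for ts, name, rt in log:
    by_txn[name].append(rt)

duration = max(ts for ts, _, _ in log) - min(ts for ts, _, _ in log)
print(f"overall: {len(log) / duration:.1f} transactions/second")
for name, times in by_txn.items():
    print(f"{name:8s} avg={sum(times)/len(times):.2f}s "
          f"min={min(times):.2f}s max={max(times):.2f}s")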
If we decide to make performance a goal and a measure of the quality criteria for release, the management team must decide to enforce the goals. Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure the various response times. It is therefore important to define concrete performance goals.

Establish incremental performance goals throughout the product development cycle, and strive to achieve the majority of the performance goals early in the cycle, because:
• Most performance issues require architectural change. All the members of the team should agree that a performance issue is not just a bug; it is a software architectural problem.
• Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet its performance goals.
• Performance is known to degrade slightly during the stabilization phase of the development cycle.

Creating an automated test suite to measure performance is time-consuming and labor-intensive, so you should reuse automated performance tests. Automated performance tests can often be reused in many other automated test suites; for example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions.

Keep the performance test suite fairly static throughout the product development cycle, since significant changes to the performance test suite skew or make obsolete all previous data. The performance tests should be modified consistently: if the design or requirements change and you must modify a test, perturb only one variable at a time for each build. Ensure that you know what you are measuring and why. Tests are capturing secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals; although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data.

Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from completing successfully.

Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server Application Programming Interface and Active Server Pages (ISAPI/ASP) applications.

Testing for most applications will be automated, and the tools used should be those specified in the requirement specification. The tools used for performance testing here are LoadRunner 6.5 and WebLoad 4.5.
23 Tools

23.1 LoadRunner 6.5

LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable and measurable load that exercises your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information you need to evaluate the performance of your client/server system.

23.2 WebLoad 4.5

WebLoad is a testing tool for testing the scalability, functionality and performance of Web-based applications, both Internet and Intranet. It can measure the performance of your application under any load condition. Use WebLoad to test how well your web site will perform under real-world conditions by combining performance, load and functional tests, or by running them individually.

WebLoad generates load by creating virtual clients that emulate network traffic. You create test scripts (called agendas) using JavaScript that instruct those virtual clients what to do. When WebLoad runs the test, it gathers results at a per-client, per-transaction and per-instance level from the computers that are generating the load; WebLoad can also gather information from the server's performance monitor. WebLoad displays the results in graphs and tables in real time, and you can save and export the results when the test is finished. WebLoad supports HTTP 1.0 and 1.1, including cookies, proxies, SSL, client certificates, password authentication, persistent connections and chunked transfer coding.
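LoadRunner scripts are written in Mercury's Test Script Language (TSL) and WebLoad agendas in JavaScript, so the sketch below is only a tool-neutral illustration, in Python, of what virtual clients do: each thread plays one scripted user against the server and records its own response times. The URL, user count, iteration count and think time are all placeholders.

    # Tool-neutral sketch of virtual clients: each thread emulates one user.
    import threading, time, urllib.request

    def virtual_user(user_id, url, iterations, results):
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:   # one scripted transaction
                resp.read()
            timings.append(time.perf_counter() - start)
            time.sleep(1.0)                             # think time between requests
        results[user_id] = timings

    results, threads = {}, []
    for uid in range(10):                               # 10 concurrent virtual users
        t = threading.Thread(target=virtual_user,
                             args=(uid, "http://localhost:8080/", 5, results))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    # Average response time per virtual client:
    print({uid: round(sum(ts) / len(ts), 3) for uid, ts in results.items()})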
Performance Testing Tools – Summary and Comparison

The list below covers several performance testing tools available on the market, compared for convenience on cost and the OS required, with an emphasis on ease of use.

• Web Performance Trainer – http://www.webperfcenter.com/loadtesting.html – $ (priced per number of virtual users) – Windows NT, Windows 2000. Load test tool; records and allows viewing of the exact bytes flowing between browser and server; modem simulation allows each virtual user to be bandwidth limited. Notes: downloadable.

• Astra LoadTest – http://www.astratryandbuy.com – $ (priced per number of virtual users) – Windows NT, Windows 2000. Mercury's load/stress testing tool; includes record/playback capabilities and a 'Content Check' that checks for failures under heavy load; simulates up to 200 users per playback machine at various connection speeds; real-time monitors and analysis. Notes: downloadable; the evaluation version emulates 25 users and expires in 2 weeks (may be extended).

• Benchmark Factory – http://www.benchmarkfactory.com – $ – Windows NT, Windows 2000, Solaris, HP-UX, IBM AIX, NCR. E-commerce testing tool from Client/Server Solutions, Inc. The 'Scenario Builder' visually combines virtual users and host machines for tests representing real user traffic, and an integrated spreadsheet parameterizes recorded input to exercise the application with a wide variety of data. Includes pre-developed industry-standard benchmarks such as AS3AP, Set-Query, Wisconsin and WebStone, plus optimized database drivers for vendor-neutral comparisons (MS SQL Server, Oracle 7 and 8, Sybase System 11, IBM DB2 CLI, Informix, ODBC). Notes: downloadable.

• MS Web Application Stress – http://homer.rte.microsoft.com – Free – Windows NT, Windows 2000. Microsoft stress test tool created by Microsoft's Internal Tools Group (ITG) and subsequently made available for external use; includes record/playback, script recording from the browser, SSL, cookies, user sessions, web form processing, password authentication and an adjustable delay between requests. Notes: downloadable.

• Radview's WebLoad – http://www.radview.com – $ – Windows NT, Windows 2000. Load testing tool; handles dynamic web pages and supports recording of SSL sessions, cookies and proxies; 'LoadSmart Scheduling' capabilities. Notes: downloadable; one of the more advanced tools in this listing; the evaluation version does not support SSL.

• Forecast – http://www.facilita.co.uk – $ – Unix, Windows NT, Windows 2000. Load testing tool from Facilita Software for web, client/server, network and database systems; supports complex usage scenarios and randomized transaction sequences. Notes: request a CD only; not downloadable.

• Rational Suite PerformanceStudio / Rational SiteLoad – http://www.rational.com – $ – Windows NT. Rational's client/server and web performance testing products; include record/playback. Notes: not downloadable; free CD request.

• Zeus – http://webperf.zeus.co.uk/intro.shtml – Free – Unix. Free load test application to generate web server loads. Notes: downloadable.

• E-Load – http://www.rswsoftware.com/products/eload_index.html – $ – Win95/98, Windows NT. Load test tool from RSW geared to testing web applications under load and testing the scalability of e-commerce applications; used in conjunction with test scripts from their e-Tester functional test tool; allows on-the-fly changes and has real-time reporting capabilities. Notes: downloadable evaluation copy.

• http_load – http://www.acme.com/software/http_load – Free – Unix. Free web benchmarking/load testing tool available as source code; will compile on any UNIX platform. Notes: unsupported(?); broken download link.

• QALoad – http://www.compuware.com/products/auto/releases/QALoad.htm – $ – Windows 95/98/NT/2000, SunOS/Solaris, AIX. Load and performance testing component of Compuware's product line, for load/stress testing of database, web and character-based systems; works with middleware such as SQLnet, ODBC, DBLib, CBLib and Telnet; includes capture/playback and scripting. Notes: free CD request.

• WEBArt – http://www.oclc.org/webart – $. Tool for load testing of up to 100-200 simulated users.

• SilkPerformer – http://www.segue.com/html/s_solutions/s_performer/s_performer.htm – $ – Windows NT, Windows 2000, Sun Solaris. Load and performance testing component of Segue's Silk web testing toolset; integration and pre-deployment testing ensures the reliability, performance and scalability of Web applications; it generates and monitors load stress tests and assesses Web application performance under user-defined variable system loads. Load scenarios can include unlimited numbers of virtual users on one or more load servers, as well as single users on multiple client workstations. Notes: no download; free CD request.

• WCAT – http://msdn.microsoft.com/workshop/server/toolbox/wcat.asp – Free – Windows NT, Windows 2000. Web Capacity Analysis load test tool from Microsoft for load testing of MS IIS on NT. Notes: downloadable.

• WebSpray – http://www.redhillnetworks.com – $199 ($99 with discount) – Windows 98, Windows NT, Windows 2000. Web load test tool; can simulate up to 1,000 clients from a single IP address and also supports multiple IP addresses with or without aliases; includes link testing capabilities. Notes: downloadable; 15-day evaluation period.

• WebSizr / WebCorder – http://www.technovations.com/home.htm – $ – Win95/98, Windows NT, Windows 2000. Load testing and capture/playback tools from Technovations; the WebSizr load testing tool supports authentication, cookies and redirects. Notes: downloadable; 30-day evaluation period.

23.3 Architecture Benchmarking

• Hardware benchmarking: performed to size the application on the planned hardware platform. This is achieved through a software benchmark test. It is significantly different from a capacity planning exercise in that it is done after development and before deployment.
• Software benchmarking: defining the right placement and composition of software instances can help in the vertical scalability of the system without the addition of hardware resources.

23.4 General Tests

What follows is a list of tests adaptable to assess the performance of most systems. The methodologies below are generic, allowing one to use a wide range of tools to conduct the assessments.

Methodology Definitions
• Result: provides information about what the test will accomplish.
• Purpose: explains the value and focus of the test, along with some simple background information that might be helpful during testing.
• Methodology: a list of suggested steps to take in order to assess the system under test.
• What to look for: contains information on behaviors, issues and errors to pay attention to during and after the test.
• Constraints: details any constraints and values that should not be exceeded during testing.
• Time estimate: a rough estimate of the amount of time that the test may take to complete.
• Type of workload: in order to properly achieve the goals of the test, each test requires a certain type of workload. This methodology specification provides information on the appropriate script of pages or transactions for the user.
24 Performance Metrics

The common metrics selected and used during performance testing are as follows:

• Response time.
• Turnaround time: the time between the submission of a batch job and the completion of its output.
• Stretch factor: the ratio of the response time under concurrent-user load to the response time with a single user.
• Throughput: a rate (requests per unit of time). Examples: jobs per second, requests per second, Millions of Instructions Per Second (MIPS), Millions of Floating Point Operations Per Second (MFLOPS), Packets Per Second (PPS), bits per second (bps), Transactions Per Second (TPS).
• Capacity. Nominal capacity is the maximum achievable throughput under ideal workload conditions, e.g. bandwidth in bits per second; the response time at maximum throughput is typically too high. Usable capacity is the maximum throughput achievable without exceeding a pre-specified response-time limit.
• Efficiency: the ratio of usable capacity to nominal capacity. Likewise, the ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.
• Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction in use.

As tests are executed, metrics such as response times for transactions, HTTP requests per second and throughput should be collected. It is also important to monitor and collect statistics such as CPU utilization, memory, disk space and network usage on the individual web, application and database servers, and to make sure those numbers recede as the load decreases. Cognizant has built custom monitoring tools to collect such statistics; third-party monitoring tools are also used, based on the requirement.

24.1 Client Side Statistics
• Running Vusers
• Hits per second
• Throughput
• HTTP status codes
• HTTP responses per second
• Pages downloaded per second
• Transaction response time
• Page component breakdown time
• Page download time
• Component size analysis
• Error statistics
• Errors per second
• Total successful/failed transactions

24.2 Server Side Statistics
• System resources: processor utilization, memory and disk space
• Web server resources: threads, cache hit ratio
• Application server resources: heap size, JDBC connection pool
• Database server resources: wait events, SQL queries
• Transaction profiling
• Code block analysis

24.3 Network Statistics
• Bandwidth utilization
• Network delay time
• Network segment delay time
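To make a few of the metric definitions above concrete, the sketch below derives throughput, utilization, efficiency and the stretch factor from sample measurements; every number is invented for illustration.

    # Illustrating the throughput, utilization, efficiency and stretch factor definitions.
    completed, wall_secs = 4200, 600           # transactions completed in a 10-minute run
    busy_secs = 540                            # time the measured resource was busy

    throughput = completed / wall_secs         # 7.0 transactions per second
    utilization = busy_secs / wall_secs        # 0.90, i.e. busy 90% of the time

    nominal_capacity = 12.0                    # max TPS under ideal workload (assumed)
    usable_capacity = 9.0                      # max TPS within the response-time limit (assumed)
    efficiency = usable_capacity / nominal_capacity

    resp_single, resp_concurrent = 0.8, 2.0    # seconds: single user vs. under load
    stretch_factor = resp_concurrent / resp_single

    print(f"throughput={throughput:.1f} TPS, utilization={utilization:.0%}, "
          f"efficiency={efficiency:.0%}, stretch={stretch_factor:.1f}x")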
24.4 Conclusion

Performance testing is an independent discipline and involves all the phases of the mainstream testing lifecycle: strategy, plan, design, execution, analysis and reporting. If executed systematically with appropriate planning, performance testing can unearth issues that otherwise cannot be found through mainstream testing; without that rigor, executing performance testing does not yield anything more than finding more defects in the system.

It is very typical of a project manager to be overtaken by time and resource pressures, leading to not enough budget being allocated for performance testing, the consequences of which can be disastrous to the final system. There is, however, another flip side of the coin: before testing the system against the performance requirements, the system should have been architected and designed to meet the required performance goals. If it was not, it may be too late in the software development cycle to correct serious performance issues.

Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Fortunately, robust and viable solutions exist to help fend off the disasters that result from poor performance. Automated load testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. Once these solutions are properly adopted and utilized, businesses can begin to take charge and leverage information technology assets to their competitive advantage. By continuously testing and monitoring the performance of critical software applications, leveraging an ongoing, lifecycle-focused approach, businesses can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike. The discipline helps businesses succeed in leveraging Web technologies to their best advantage, enabling new business opportunities, lowering transaction costs and strengthening profitability.

25 Load Testing

Load testing is the creation of a simulated load on a real computer system, using virtual users who submit work as real users would do at real client workstations, thus testing the system's ability to support such a workload. Load testing is accomplished by stressing the real application under a simulated load provided by virtual users.

25.1 Why is load testing important?

Load testing increases the uptime of critical web applications by helping you spot the bottlenecks in the system under large user-stress scenarios, before they occur in a production environment.

25.2 When should load testing be done?

Load testing should be done when the probable cost of the load test is likely less than the cost of a failed application deployment. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check whether the application offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide acceptable response times and handle the load that will be created by the real users of the system.
26 Load Testing Process

26.1 System Analysis

This is the first step when a project decides to load test its system. An evaluation of the requirements and needs of the system, prior to load testing, provides more realistic test conditions: one should know all the key performance goals and objectives, such as the number of concurrent connections, hits per second and so on. Another important part of the analysis is choosing the appropriate strategy for testing the application: load testing, stress testing or capacity testing. Load testing is used to test the application against a requested number of users; the objective is to determine whether the site can sustain that number of users with acceptable response times. Stress testing is load testing over extended periods of time, to validate an application's stability and reliability. Similarly, capacity testing is used to determine the maximum number of concurrent users an application can manage; for businesses, capacity testing is the benchmark that states the maximum load of concurrent users the site can sustain before the system fails. Finally, the chosen test tool should support load testing well: consider its multithreading capabilities and whether it can create the needed number of virtual users with minimal resource consumption and a maximal virtual-user count.

26.2 User Scripts

Once the analysis of the system is done, the next step is the creation of user scripts. A virtual user is an emulated real user who drives the real application as a client. A script recorder can be used to capture all the business processes into test scripts, more often referred to as virtual users or virtual user scripts. All business processes should be recorded end to end, so that the resulting transactions assist in the breakdown of all actions and the time each takes, in order to measure the performance of the business process.

26.3 Settings

Run-time settings define the way the scripts should be run, in order to accurately emulate real users. Settings can configure the number of concurrent connections, test run time, think time, whether to follow HTTP redirects, and so on. System response times can also vary based on connection speed, so throttling bandwidth can emulate dial-up connections at varying modem speeds (28.8 Kbps or 56.6 Kbps) or T1 (1.54 Mbps), etc.
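What such run-time settings might look like is sketched below; the field names and values are illustrative assumptions, not the configuration format of any particular tool.

    # Illustrative run-time settings for a load scenario; every field is an assumption.
    RUN_SETTINGS = {
        "concurrent_users": 50,
        "test_run_time_secs": 30 * 60,
        "think_time_secs": (2, 8),          # random pause range between transactions
        "bandwidth_bps": 56_000,            # throttle to emulate a 56.6 Kbps modem
        "follow_redirects": True,
        "ramp_up_users_per_min": 5,
    }

    def bytes_per_tick(settings, tick_secs=0.1):
        """Crude token-bucket allowance used to throttle a virtual user's bandwidth."""
        return int(settings["bandwidth_bps"] / 8 * tick_secs)

    print(bytes_per_tick(RUN_SETTINGS))     # 700 bytes per 100 ms at 56 Kbps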
26.4 Performance Monitoring

Every component of the system needs monitoring: the clients, the network, the web server, the application server, the database, etc. The reports generated can range from number of hits and requests per second to socket errors, etc. If the tools support real-time monitoring, testers are able to view the application's performance at any time during the test, which results in instantly identifying the performance bottlenecks during load testing. Thus, running the load test scenario and monitoring the performance together accelerates the test process, thereby producing a more stable application.

26.5 Analyzing Results

The last but most important step in load testing is collecting and processing the data to resolve performance bottlenecks. Performance data on a web application can be gathered by stressing the website and measuring the maximum requests per second that the web server can handle. The next step is to determine which resource prevents the requests per second from going higher, such as the CPU, memory, or backend dependencies. Analyzing the results will isolate the bottlenecks and determine which changes are needed to improve the system's performance. After these changes are made, the load test scenarios must be re-run to verify the adjustments.

26.6 Conclusion

Load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and transactions while maintaining adequate response times. It is the only way to accurately test the end-to-end performance of a Web site prior to going live. One such tool is Web Application Stress (WAST), which simulates a large number of users with a relatively small number of client machines.

Two common methods for implementing the load testing process are manual and automated testing. Manual testing would involve:
• Coordinating the operations of the users
• Measuring response times
• Repeating tests in a consistent way
• Comparing results

As load testing is iterative in nature, and the performance problems it finds must be identified, tuned and retested to check for bottlenecks, manual testing is not a very practical option. The testing tools typically use three major components to execute a test:
• A console, which organizes, drives and manages the load
• Virtual users, performing a business process on a client application
• Load servers, which are used to run the virtual users

With automated load testing tools, tests can easily be re-run any number of times and the results can be reported automatically; in this way, they minimize the risk of human error during testing. Automated testing tools thus provide a more cost-effective and efficient solution than their manual counterparts, and today automated load testing is the preferred choice for load testing a Web application.

27 Stress Testing

27.1 Introduction to Stress Testing

Testing is normally accomplished through reviews (of product requirements, software designs, code, test plans, etc.) and through testing activities such as unit testing, system testing (also known as functional testing) against the software functional requirements, expert-user testing (like beta testing but in-house), security testing, smoke tests, risk-based testing and random testing. All these testing activities are important and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management, and they do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing. As a first step, we have found that, rather than trying to implement every testing type, it is best to review what needs to be tested, pick the multiple testing types that will provide the best coverage for the product to be tested, and then master those testing types.

Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of "breaking the product". The system is put through its paces to find where it may fail: you can take a common set of actions for your system and keep repeating them in an attempt to break the system, and adding some randomization to these steps will help find more defects. How long can your application stay functioning doing this operation repeatedly? To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed; you need to know exactly what was happening when the system failed. Did the system lock up after 100 attempts or 100,000 attempts?[1]

Stress testing can imply many different types of testing depending upon the audience, and even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds and for longer than normal periods of time, as a method to accelerate the rate of finding defects and to verify the robustness of our product. Our applications are required to operate for long periods of time with no significant loss of performance or reliability. Some of the defects that we have been able to catch with stress testing that have not been found in any other way are memory leaks, deadlocks, software asserts and configuration conflicts. For more details about these types of defects, or how we were able to detect them, refer to the section 'Typical Defects Found by Stress Testing'. Note that there are many other types of testing not mentioned above.
Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.

Table 1: Stress Testing Strengths and Weaknesses

Strengths:
• Finds defects that no other type of test would find
• Using randomization increases coverage
• Tests the robustness of the application to random and to user input
• Helpful at finding memory leaks, deadlocks, software asserts and configuration conflicts

Weaknesses:
• Not a real-world situation
• Defects are not always reproducible
• One sequence of operations may catch a problem right away, while another sequence may never find it
• Does not test the correctness of the system's response

27.2 Background to Automated Stress Testing

Stress testing can be done manually, which is often referred to as "monkey" testing. In this kind of stress testing, the tester uses the application "aimlessly", like a monkey: poking buttons, turning knobs, "banging" on the keyboard and so on, in order to find defects. One of the problems with "monkey" testing is reproducibility: where the tester uses no guide or script and no log is recorded, it is often impossible to repeat the steps executed before a problem occurred. Attempts have been made to use keyboard spyware, video recorders and the like to capture user interactions, with varying (often poor) levels of success.

Previously, we had attempted to stress test our applications using manual techniques and found that they were lacking in several respects. We have found that stress testing a software application helps in assessing and increasing the robustness of our applications, and it has become a required activity before every software release. Performing stress tests manually is not feasible, and repeating the test for every software release is almost impossible, so this is a clear example of an area that benefits from automation: you get a return on your investment quickly, and it provides more than just a mirror of your manual test suite.
Some of the weaknesses of manual stress testing we found were:

1. Manual testing generally does not allow for repeatability of command sequences, so reproducing failures is nearly impossible.
2. Manual testing does not perform automatic recording of discrete values with each command sequence for tracking memory utilization over time, which is critical for detecting memory leaks.
3. Manual testing does not provide the breadth of test coverage of the product features/commands that is needed. People tend to do the same things in the same way over and over, so some configuration transitions do not get tested.
4. Manual techniques cannot provide the kind of intense simulation of maximum user interaction over time; humans cannot keep the rate of interaction high enough for long enough.

To take advantage of automated stress testing, our challenge was to create an automated stress test tool that would:

1. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
2. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
3. Stress the resource and memory management features of the system, at a rate exceeding that at which actual end-users can be expected to operate, and for durations of time that exceed typical use.
4. Continuously log the sequence of events, so that issues can be reliably reproduced after a system failure.
5. Record the memory in use over time, to allow memory management analysis.

In this way, non-typical sequences of user interaction are tested with the system in an attempt to find latent defects not detectable with other techniques. The automated stress test randomizes the order in which the product features are accessed. The stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it also becomes easy to execute multiple stress tests simultaneously across more than one product at the same time.

Depending on how the stress inputs are configured, stress testing can do both 'positive' and 'negative' testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters to the device as a way of trying to break the system under abnormal circumstances. For example, if a valid input is in seconds, positive testing would test 0 to 59 and negative testing would also try -1 and 60.

By using a pseudo-random number generator, each unique seed value will create the same sequence of commands, with the same parameters, each time the stress test is executed. Note, however, that if the input command set changes, then the output command sequence also changes for a given seed, because of the pseudo-randomization; reproducible stress runs must therefore use the same input command set. Each time the product application changes, we most likely need to change the stress tool as well (or, more commonly, commands need to be added to or deleted from the input command set).

Table 2 provides a summary of some of the advantages and disadvantages that we have found with automated stress testing.

Table 2: Automated Stress Testing Advantages and Disadvantages

Advantages:
• The stress test is performed under computer control
• Capability to test all product application command sequences
• Multiple product applications can be supported by one stress tool
• Uses randomization to increase coverage; tests vary with new seed values
• Repeatability of commands and parameters helps reproduce problems or verify that existing problems have been resolved
• Informative log files facilitate investigation of problems

Disadvantages:
• Requires capital equipment and development of a stress test tool
• Requires maintenance of the tool as the product application changes
• Reproducible stress runs must use the same input command set
• Defects are not always reproducible, even with the same seed value
• Requires test application information to be kept and maintained
• May take a long time to execute

In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find.
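A minimal sketch of this seed-driven repeatability is shown below; the command set and parameter range are hypothetical.

    # Seeded pseudo-random command sequences: the same seed always replays identically.
    import random

    COMMANDS = ["OPEN", "SAVE", "ZOOM", "MEASURE", "RESET"]   # hypothetical command set

    def command_sequence(seed, length):
        rng = random.Random(seed)           # private generator; the seed fixes the stream
        for _ in range(length):
            cmd = rng.choice(COMMANDS)
            param = rng.randint(-1, 60)     # includes invalid values for negative testing
            yield f"{cmd} {param}"

    run1 = list(command_sequence(seed=1234, length=500))
    run2 = list(command_sequence(seed=1234, length=500))
    assert run1 == run2                     # reproducible: identical command streams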
27.3 Automated Stress Testing Implementation

Automated stress testing implementations will differ depending on the interface to the product application; the types of interfaces available to the product drive the design of the automated stress test tool. The interfaces fall into two main categories:

1. Programmable interfaces: interfaces such as command prompts, RS232, General Purpose Interface Bus (GPIB), Universal Serial Bus (USB) and Ethernet, which accept strings representing command functions without regard to context or the current state of the device.
2. Graphical user interfaces (GUIs): interfaces that use the Windows model to allow the user direct control over the device.

27.4 Programmable Interfaces

These interfaces allow users to set up, control and retrieve data in a variety of application areas, such as manufacturing, research and development, and service. Such products are used, for example, on a manufacturing line where the product runs 24 hours a day, 7 days a week, and they are required to operate for long periods of time. To meet the needs of these customers, the products provide programmable interfaces, which generally support a large number of commands (1000+). Testing all possible combinations of commands on these products is practically impossible using manual testing methods.

Programmable-interface stress testing is performed by randomly selecting from a list of individual commands and then sending these commands to the device under test (DUT) through the interface. If a command has parameters, the parameters are also enumerated, by randomly generating a unique command parameter. Each command is also written to a log file, which can then be used later to reproduce any defects that were uncovered. For additional complexity, the stress test can vary the rate at which commands are sent to the interface, it can send the commands across multiple interfaces simultaneously (if the product supports it), or it can send multiple commands at the same time.
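The select-send-log loop described above might be sketched as follows; the command list is hypothetical, and dut_send is a placeholder for the real GPIB/RS232/Ethernet transport in an actual harness.

    # Sketch of the select/send/log loop for programmable-interface stress testing.
    import random, time

    def stress(dut_send, commands, seed, duration_secs, log_path):
        rng = random.Random(seed)
        deadline = time.time() + duration_secs
        with open(log_path, "w") as log:
            log.write(f"seed={seed}\n")          # record the seed for reproduction
            while time.time() < deadline:
                cmd = rng.choice(commands)
                log.write(cmd + "\n")            # log before sending, in case of a crash
                log.flush()
                dut_send(cmd)                    # placeholder transport to the DUT

    # Using print as a stand-in transport; the command strings are invented examples.
    stress(print, ["*IDN?", "MEAS:VOLT?", "RESET"], seed=42,
           duration_secs=1, log_path="stress_run.log")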
27.5 Graphical User Interfaces

In recent years, graphical user interfaces have become dominant, and it became clear that we needed a means to test these user interfaces analogous to that used for programmable interfaces. However, a new approach was needed, since accessing the GUI is not as simple as sending streams of command-line input to the product application. Individual windows and controls may or may not be visible and/or active depending on the state of the device, and many controls are not visible until several levels of modal windows have been opened and/or closed. For example, a typical confirm-file-overwrite dialog box for a 'File->Save As…' filename operation is not available until the following sequence has been executed:

1. Set the context to the main window
2. Select 'File->Save As…'
3. Select the target directory from the tree control
4. Type a valid filename into the edit box
5. Click the 'Save' button
6. If the filename already exists, either confirm the file overwrite by clicking the 'OK' button in the confirmation dialog, or click the 'Cancel' button

In this case, you need to group these six operations together as one "big" operation in order to correctly exercise this particular 'OK' button. Additionally, the flow of each operation can be important. There may be multiple windows open with a 'HELP' menu item, so it is not sufficient to simply store "click the 'HELP' menu item"; you have to store "click the 'HELP' menu item for the particular window". It is necessary to store not only the object recognition method for the control, but also information about its parent window and other information, such as its expected state and certain property values. With this information it is possible to uniquely define all the possible product application operations (i.e., each control can be uniquely identified).

27.6 Data Flow Diagram

A stress test tool can have many different interactions and can be implemented in many different ways. Figure 1 shows a block diagram which can be used to illustrate some of the stress test tool interactions. The main interactions for the stress test tool include an input file and the device under test (DUT). The input file is used to provide the stress test tool with a list of all the commands and interactions needed to test the DUT.

[Figure 1: Stress Test Tool Interactions. The stress test tool reads the input file, drives the DUT, logs the command sequence and the test results, and a system resource monitor observes the DUT.]

The basic flow control of an automated stress test tool is to set the DUT into a known state and then loop continuously, selecting a new random interaction, trying to execute the interaction, and logging the results. This loop continues until a set number of interactions have occurred or the DUT crashes. Additionally, data logging (commands and test results) and system resource monitoring are very beneficial in helping determine what the DUT was trying to do before it crashed and how well it was able to manage its system resources.

27.7 Techniques Used to Isolate Defects

Depending on the type of defect to be isolated, two different techniques are used:

1. System crashes (asserts and the like): do not try to run the full stress test from the beginning, unless it only takes a few minutes to produce the defect. Instead, back up and run the stress test from the last seed (for us this is normally just the last 500 commands). If the defect still occurs, continue to reduce the number of commands in the playback until the defect is isolated.
2. Diminishing resource issues (memory leaks and the like): these are usually limited to a single subsystem. To isolate the subsystem, start removing subsystems from the database and re-run the stress test while monitoring the system resources. Continue this process until the subsystem causing the reduction in resources is identified. This technique is most effective after full integration of multiple subsystems (or modules) has been achieved.

Some defects are just hard to reproduce, even with the same sequence of commands, especially those that reside around page faults. These defects should still be logged into the defect tracking system, and as the defect reoccurs, continue to add additional data to the defect description. Eventually, over time, you will be able to detect a pattern, isolate the root cause and resolve the defect. Some defects just seem to be unreproducible, but overall we know that the robustness of our applications increases proportionally with the amount of time that the stress test will run uninterrupted.
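The command-window reduction used for system crashes (technique 1) can be sketched as below; reproduces is a placeholder callback that replays a command window against the DUT from a clean state and reports whether the crash occurred. The halving strategy is a simplification and is not guaranteed to find a truly minimal sequence.

    # Sketch: shrink the replayed command window while the failure still reproduces.
    def isolate(commands, reproduces):
        """commands: tail of the stress log (e.g. the last 500 commands).
        reproduces: callback that replays a window and returns True on crash."""
        window = list(commands)
        assert reproduces(window), "start from a tail that reproduces the defect"
        while True:
            shorter = window[len(window) // 2:]        # drop the older half
            if len(shorter) < len(window) and reproduces(shorter):
                window = shorter                       # still fails: keep shrinking
            else:
                return window                          # shortest reproducing tail found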
28 Test Case Coverage

28.1 Test Coverage

Test coverage is an important measure of quality for software systems. Test coverage analysis is the process of:
• Finding areas of a program not exercised by a set of test cases,
• Creating additional test cases to increase coverage, and
• Determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of test coverage analysis is:
• Identifying redundant test cases that do not increase coverage.

A test coverage analyzer automates this process. Test coverage analysis can be used to assure the quality of the set of tests, and not the quality of the actual product. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage".

Coverage analysis requires access to the test program's source code and often requires recompiling it with a special command. Code coverage analysis is a structural testing technique (white box testing). Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic, and compares the test program's behavior against the apparent intention of the source code. This contrasts with functional testing (black-box testing), which examines what the program accomplishes, without regard to how it works internally, and compares the test program's behavior against a requirements specification.

28.2 Test Coverage Measures

A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.

28.3 Procedure-Level Test Coverage

Probably the most basic form of test coverage is to measure which procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained that it is not very practical.

28.4 Line-Level Test Coverage

The basic measure of a dedicated test coverage tool is tracking which lines of code are executed and which are not. This result is often presented in a summary at the procedure, file or project level, giving a percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product. Typically, the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: by studying the unexecuted code, you can see exactly what functionality has not been tested.

28.5 Condition Coverage and Other Measures

It is easy to find cases where line coverage does not really tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you do not know whether you have tested the case when it is skipped; you need condition coverage to know. There are many other test coverage measures; however, most available code coverage tools do not provide much beyond basic line coverage. In theory, you should have more. But in practice, if you achieve 95+% line coverage and still have time and budget to commit to further testing improvements, it is an enviable commitment to quality!
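A tiny, invented example of the pitfall: the single test below executes every line of the function, yet the case where the if clause is skipped is never exercised, so line coverage reports 100% while condition coverage would not.

    # Invented example: 100% line coverage without condition coverage.
    def discount(price, is_member):
        total = price
        if is_member:                  # a test with is_member=True executes every line...
            total = price * 0.9
        return total

    assert discount(100, True) == 90   # ...but the is_member=False path is never tested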
28.6 How Test Coverage Tools Work

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation phase happens can vary greatly between different products. Adding probes to the program will make it bigger and slower; if the test suite is large and time-consuming, the performance factor may be significant.

28.6.1 Source-Level Instrumentation

Some products add probes at the source level. They analyze the source code as written and add additional code (such as calls to a code coverage runtime) that will record where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need. One drawback of this technique is the need to modify the build process: a separate version, namely a code coverage version, needs to be maintained in addition to the other versions, such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on the programming language: the provider of the tool must explicitly choose which languages to support. But it can be somewhat independent of the operating environment (processor, OS, or virtual machine).

28.6.2 Executable Instrumentation

Probes can also be added to a completed executable file. The tool will analyze the existing executable and then create a new, instrumented one. This type of instrumentation is independent of the programming language. However, it is dependent on the operating environment: the provider of the tool must explicitly choose which processors or virtual machines to support.

28.6.3 Runtime Instrumentation

Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified, so the same executable file used for product release testing can be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation); instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool may add a tiny bit of instrumentation to the executable. This new code wakes up and connects to a waiting coverage tool whenever the program executes; it does not affect the size or performance of the executable, and does nothing if the coverage tool is not waiting. Like executable instrumentation, runtime instrumentation is independent of the programming language but dependent on the operating environment.

28.7 Test Coverage Tools at a Glance

There are many tools available for measuring test coverage, for example:

Company               Product             OS            Languages
Bullseye              BullseyeCoverage    Win32, Unix   C/C++
CompuWare             DevPartner          Win32         C/C++, Java, VB
Rational (IBM)        PurifyPlus          Win32, Unix   C/C++, Java, VB
Software Research     TCAT                Win32, Unix   C/C++, Java
Testwell              CTC++               Win32, Unix   C/C++
Paterson Technology   LiveCoverage        Win32         C/C++, VB

Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite; it helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage for its release criteria, based on the available testing resources and the importance of preventing post-release failures. Clearly, safety-critical software should have a high goal. We must also set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.
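To make the probe idea from section 28.6 concrete, here is a toy line-coverage probe using Python's built-in trace hook; real coverage tools are far more efficient and work as described in the instrumentation sections above.

    # Toy line-coverage probe using Python's trace hook.
    import sys

    executed = set()

    def probe(frame, event, arg):
        if event == "line":                      # record every source line reached
            executed.add((frame.f_code.co_filename, frame.f_lineno))
        return probe                             # keep tracing inside this frame

    def area(w, h):
        if w <= 0 or h <= 0:
            return 0
        return w * h

    sys.settrace(probe)
    area(3, 4)                                   # run the "test suite"
    sys.settrace(None)
    print(f"{len(executed)} source lines executed")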
TCP counts rank the requirements, and the test cases to be written for those requirements, into simple, average and complex, and quantify them into a measure of complexity. In this courseware we give an overview of Test Case Points and do not elaborate on using TCP as an estimation technique.

29.2 Calculating the Test Case Points

Based on the Functional Requirement Document (FRD), the application is classified into various modules. For a web application, say, we can have 'Login and Authentication' as a module, and we rank that particular module as simple, average or complex based on the number and complexity of its requirements. A simple requirement is one which can be given a value on a scale of 1 to 3; an average requirement is ranked between 4 and 7; a complex requirement is ranked between 8 and 10.

Complexity of Requirements

Requirement Classification    Count
Simple (1-3)
Average (4-7)
Complex (8-10)
Total

The test cases for a particular requirement are classified into simple, average and complex based on the following four factors:
• Test case complexity for that requirement, OR
• Interface with other test cases, OR
• Number of verification points, OR
• Baseline test data

Refer to the test case classification table given below.
By multiplying the number of requirements with it s corresponding adjustment factor.3 Chapter Summary This chapter covered the basics on      What is Test Coverage Test Coverage measures How does Test coverage tools work List of Test Coverage tools What is TCP and how to calculate the Test Case Points for an application Performance Testing Process & Methodology 208 - Proprietary & Confidential - . we arrive at the count of Total Test Case Points.
29.3 Chapter Summary

This chapter covered the basics of:
• What test coverage is
• Test coverage measures
• How test coverage tools work
• A list of test coverage tools
• What TCP is and how to calculate the Test Case Points for an application