Friday, 20 December 2013

Test Management



"... Project management throughout the development and implementation process was inadequate and at times ambiguous. A major systems integration project such as CAD requires full-time, professional, experienced project management. This was lacking..."

"... The early decision to achieve full CAD implementation in one phase was misguided. In an implementation as far-reaching as CAD it would have been preferable to implement in a step-wise approach, proving each phase totally before moving on to the next..."

Extract from the main conclusions of the official report into the failure of the London Ambulance Service's computer systems on October 26th and 27th, 1992.



5.1 Overview

This module covers the overall management of the test effort for a particular project and attempts to answer several key questions such as:

How many testers do we need?

How shall the testers be organized?

What's the reporting structure and who is in charge?

How will we estimate the amount of testing effort required for this project?

 How do we keep versions of our test material in line with the development deliverables?

How do we ensure the test effort remains on track?

How do we know that we have finished testing?

What is the process for logging and tracking incidents?







5.2 Objectives

After completing this module you will:

Understand how testing might be organized.

Understand the different roles in a test team.

Understand the importance of test estimation, monitoring and control.

Appreciate the need for configuration management of the test assets.

Understand how and why incidents must be logged and tracked.


5.3 Organization

"We trained hard ... but it seemed that every time we were beginning to form up into teams we would be reorganized... I was to learn later in life that we meet any new situation by reorganizing, and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization."

A fundamental feature of our lives today is that nothing stays the same. Over time, internal and external pressures mean that the organizational structures we carefully put in place must change and adapt if our business is to remain competitive. As development and testing organizations grow and evolve, a different structure is required to cope with the changing demands placed upon them. The approach adopted over time may look something like this:

Testing may be each individual developer's responsibility.
Testing is the development team's collective responsibility (either through buddy testing or assigning one person on the team to be the tester).
There is a dedicated independent test team (who do no development).
Internal test consultants 'centers of excellence' provide advice to projects.
A separate company does the testing - this is known as outsourcing.

An excellent description of how the test function can be organized within a company can be found in Ed Kit's book, Software Testing in the Real World [KIT95].



5.5 Configuration management (CM)

We all appreciate the need for testing and assuring quality in our development systems. But how many of us appreciate that Configuration Management is a precursor to these goals?

Configuration Management provides us with the means to ensure that:

Systems are complete.
Systems are predictable in content.
Testing required is identified.
Change can be ring-fenced as complete.
An audit trail exists.

We've always practiced CM. Such activities as aggregating and releasing software to the production environment may, in the past, have been uncontrolled - but we did it.

Many famous organizations have found the need to develop a standard for CM that they have since taken to the marketplace.

Configuration management encompasses much more than simply keeping version control of your software and test assets, although that is a very good start. Configuration management is crucial to successful testing, especially regression testing, because, in order to make tests repeatable, you must be able to recreate exactly the software and hardware environment that was used in the first instance.

Typical symptoms of poor CM might include:

Inability to match source and object code.
Inability to identify which version of the compiler generated the object code.
Inability to identify the source code changes made in a particular version of the software.
Simultaneous changes being made to the same source code by multiple developers (and changes lost).
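The first two symptoms above are avoidable with even a minimal build record. As a sketch (the file layout and the use of gcc are illustrative assumptions, not a prescription), a build could emit a manifest tying object code back to the exact sources and compiler that produced it:

```python
# Sketch: record a build manifest so object code can always be traced back
# to the exact sources and compiler version that produced it.
import hashlib
import json
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash of one source file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(src_dir: str, compiler: str) -> dict:
    """Capture the compiler version and a hash of every source file."""
    version = subprocess.run([compiler, "--version"],
                             capture_output=True, text=True).stdout.splitlines()[0]
    return {
        "compiler": version,
        "sources": {str(p): sha256(p) for p in sorted(Path(src_dir).rglob("*.c"))},
    }

if __name__ == "__main__":
    # Assumes a 'src' directory of C sources and gcc on the PATH.
    manifest = build_manifest("src", "gcc")
    Path("build_manifest.json").write_text(json.dumps(manifest, indent=2))
```

Storing such a manifest alongside each release makes the "which compiler, which sources?" questions answerable after the fact.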










5.6 Definitions

The ISO (International Organization for Standardization) definition of CM:

Configuration management (CM) provides a method to identify, build, move, control and recover any baseline in any part of the life cycle, and ensures that it is secure and free from external corruption.


Configuration identification requires that all configuration items (CIs) and their versions in the test system are known.

Configuration control is the maintenance of CIs in a library and the maintenance of records on how CIs change over time.

Status accounting is the function of recording and tracking problem reports, change requests, etc.

Configuration auditing is the function to check on the contents of libraries, etc. for standards compliance, for instance.

CM can be very complicated in environments where mixed hardware and software platforms are being used, but sophisticated cross platform CM tools are increasingly available.



5.7 Simple CM life cycle process

CM contains a large number of components. Each component has its own process and contributes to the overall process.

Let's take a look at a simple process that raises a change, manages it through the life cycle and finally executes an implementation to the production environment. Here we can see how the CM life cycle operates by equating actions with aspects of CM.

As a precursor activity we 'Evaluate Change'. All changes are evaluated before they enter the CM Life Cycle:

1. Raise Change Packet.
Uses Change Management functions to identify and register a change. Uses Change Control functions to determine that the action is valid and authorized.

2. Add Configurable Item to Change Packet.
Select and assign configurable items to the Change Packet. Execute Impact Analysis to determine the items that also require some action as a result of the change, and the order in which the actions must take place.

3. Check-In.
Apply a version to the CI and check it back in under CM control.

4. Create Executable.
Build an executable for every contained CI in the order indicated.

5. Sign-Off.
Uses Change Control to verify that the actioner signaling that testing is complete, for the environment in which the change is contained, is authorized to do so, and that the action is valid.

6. Check OK to Propagate.
Uses Change Control and Co-Requisite checks to verify that a request to move a Change Packet through the life cycle is valid:

a) All precursor tasks have completed successfully.
b) The next life cycle environment is fit to receive the change.
c) Subsequent change in the current environment has not invalidated the Change Packet.

7. Propagation.
Effect the population of the next environment by releasing the Change Packet and then distributing it over a wider area.

Note: You might notice that the process is composed of a number of singular functions and a series executed as a cycle. Of particular note is that 'Create Executable' is a singular function. This is because we should only ever build once if at all possible; this saves time and computer resources. More importantly, re-building an element in a new environment may negate testing carried out in the preceding one and can lead to a lengthy problem investigation phase.
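A minimal sketch of the life cycle just described is given below. It models a Change Packet moving through a fixed sequence of environments with sign-off and change-control gates; the environment names and checks are assumptions made for illustration, not any particular CM tool's API.

```python
# Illustrative sketch of the change-packet life cycle described above.
class ChangePacket:
    LIFE_CYCLE = ["development", "system_test", "acceptance", "production"]

    def __init__(self, change_id: str):
        self.change_id = change_id
        self.items: list[str] = []          # configurable items (CIs)
        self.stage = 0                      # index into LIFE_CYCLE
        self.signed_off: set[str] = set()   # environments with sign-off

    def add_item(self, ci: str) -> None:
        """Step 2: assign a configurable item to the packet."""
        self.items.append(ci)

    def sign_off(self, environment: str) -> None:
        """Step 5: record that testing is complete in an environment."""
        self.signed_off.add(environment)

    def ok_to_propagate(self) -> bool:
        """Step 6: precursor tasks done and a next environment exists."""
        current = self.LIFE_CYCLE[self.stage]
        return current in self.signed_off and self.stage + 1 < len(self.LIFE_CYCLE)

    def propagate(self) -> str:
        """Step 7: release the packet into the next environment."""
        if not self.ok_to_propagate():
            raise RuntimeError("change control checks failed")
        self.stage += 1
        return self.LIFE_CYCLE[self.stage]

cp = ChangePacket("CHG-001")
cp.add_item("orders.c v1.4")
cp.sign_off("development")
print(cp.propagate())   # -> "system_test"
```

Note that nothing in the sketch rebuilds the executable on propagation, reflecting the 'build once' principle above.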

5.8 What does CM control?

CM should control every element that is a part of a system application.

Nominally, CM:

1. Configurable Items:
CM maintains the registration and position of all of our CIs. These may be grouped into logically complete Change Packets as part of a development or maintenance exercise.

2. Defines the Development Life Cycle:
The life cycle is composed of a series of transition points, each having its own entry/exit criteria and each mapping to a specific test execution stage.

3. Movement:
CM controls how a Change Packet moves and progresses through the life cycle.

4. Environment:
Testing takes place in a physical environment configured specifically for the test stage that the life cycle transition point reflects.

5.9 How is it controlled?

CM is like every other project in that it requires a plan. It is particularly important that CM has a plan of what it is to provide, as this forms a framework for life cycle management in which to work consistently and securely.

The CM plan covers what needs to be done, not by when, and defines three major areas:

The Processes will define or include:
Raising a change
Adding elements
Booking elements in and out
Exit/entry criteria
Life cycle definition
Life cycle progression
Impact analysis
Ring-fencing change and release aggregation
Change controls
Naming standards
Generic processes for build and other activities

The Roles & responsibilities, covering who can do what and when:
Configuration Manager & Librarian, Project Manager, Operations Personnel, Users, Developers and others as necessary.

Records, providing the necessary audit trail, will include:
What is managed, the status of the life cycle position and the change status.
Who did what, where, when and under what authority, as well as the success factor for activities.

Only once the CM plan and the processes that support it are defined can we consider automation.










5.10 What does CM look like?

CM has several hubs and functions that will make or break it. Hubs of the system are defined as areas where information and source code are stored; typically the major hubs are the central inventory and the central repository. Surrounding those are four major tool sets that allow us to work on the data:

Version Management
Allows us access to any version or revision of a stored element.

Configuration Control
Allows us to group elements into manageable sets.

Change Control & Management
This is the global name given to the processes that govern change through the application development life cycle and the stages it passes through, from an idea through to implementation. It may include:

A Change Control Panel or Board to assess and evaluate change;
Controls: governing who can do what, when and under what circumstances;
Management: carrying out an action or movement through the life cycle once the controls have been satisfied.

Build & Release
Controls how our elements are built and the manner in which our change is propagated through the life cycle.

This view is about as close to a generic, global view of CM as you can get. It won't match all tools 100%, as it covers all aspects of CM - and very few of the tools (although they might claim to) can do this.








Exercise

Configuration management - 1

Make a list of items that you think a Test Manager should insist are placed under configuration management control.



Exercise

Configuration management - 2

There are very many points to consider when implementing CM. We have summarized them into the following three categories:

CM Processes
The framework that specifies how our CM system is to operate and what it is to encompass.

Roles & Responsibilities
Who does what and at what time?

CM Records
The type of records we keep and the manner in which we keep and maintain them.

Quite a short list, you might say. Using the information we have learned so far, try to construct a minimal Configuration Management Plan. Do not try to expand the processes required, but give them suitable titles in an appropriate sequence.
Additionally, for every process you identify, try to match it to one or more segments of the CM Bubble diagram.








5.11 Test estimation, monitoring and control

Test estimation
The effort required to perform the activities specified in the high-level test plan must be calculated in advance. You must remember to allocate time for designing and writing the test scripts as well as estimating the test execution time. If you are going to use test automation, there will be a steep learning curve for new people and you must allow for this as well. If your tests are going to run on multiple test environments, add in extra time here too. Finally, you should never expect to complete all of the testing in one cycle; there will be faults to fix and tests will have to be re-run. Decide how many test cycles you will require and try to estimate the amount of re-work (fault fixing and re-testing time).
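A back-of-envelope sketch of that arithmetic is shown below. All figures in the example call are invented inputs chosen to show the shape of the calculation, not recommended values.

```python
def estimate_test_effort(num_scripts: int,
                         design_hours_per_script: float,
                         run_hours_per_script: float,
                         environments: int,
                         cycles: int,
                         rework_fraction: float,
                         automation_learning_hours: float = 0.0) -> float:
    """Total effort in hours: design, execution, re-work and ramp-up."""
    design = num_scripts * design_hours_per_script
    execution = num_scripts * run_hours_per_script * environments * cycles
    rework = execution * rework_fraction   # fault fixing and re-testing
    return design + execution + rework + automation_learning_hours

# e.g. 200 scripts, 2h to design each, 0.5h to run, 2 environments,
# 3 cycles, 30% rework allowance, 80h automation learning curve:
print(estimate_test_effort(200, 2, 0.5, 2, 3, 0.3, 80))   # 1260.0 hours
```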

Test monitoring
Many test efforts fail despite wonderful plans. One of the reasons might be that the test team was so engrossed in the detailed testing effort (working long hours, finding many faults) that they did not have time to monitor progress. This, however, is vitally important if the project is to remain on track (e.g. use a weekly status report).









Exercise

Try to list what you think might be useful measures for tracking test progress.

The Test Manager will have specified some exit (or completion) criteria in the master test plan and will use the monitoring mechanism to help judge when the test effort should be concluded. The test manager may have to report on deviations from the project/test plans, such as running out of time before the completion criteria have been achieved.

Test control - in order to achieve the necessary test completion criteria it may be necessary to re-allocate resources, change the test schedule, increase or reduce the number of test environments, employ more testers, etc.
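A minimal sketch of the kind of figures such weekly monitoring might produce is given below; the metric names and the 60% trigger are illustrative assumptions only.

```python
def weekly_status(planned: int, run: int, passed: int,
                  faults_open: int, faults_closed: int) -> dict:
    """A few simple progress measures for a weekly status report."""
    return {
        "executed_pct": 100.0 * run / planned,
        "pass_pct": 100.0 * passed / run if run else 0.0,
        "fault_close_rate": faults_closed / max(faults_open + faults_closed, 1),
    }

status = weekly_status(planned=500, run=320, passed=288,
                       faults_open=40, faults_closed=60)
print(status)
if status["executed_pct"] < 60:   # illustrative trigger for control action
    print("Behind schedule: consider re-allocating resources or re-planning.")
```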


5.12 Incident Management

An incident is any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. Incidents are raised when expected and actual test results differ.

5.12.1 What is an incident?

You may now be thinking that incidents are simply another name for faults, but this is not the case. We cannot determine at the time an incident occurs whether there is really a fault in the software, whether the environment was perhaps set up incorrectly or whether in fact the test script was incorrect. Therefore we log the incident and move on to the next test activity.

5.12.2 Incidents and the test process

An incident occurs whenever an error, query or problem arises during the test process. There must be procedures in place to ensure accurate capture of all incidents. Incident recording begins as soon as testing is introduced into the system's development life cycle. The first incidents raised will therefore be against documentation; as the project proceeds, incidents will be raised against database designs, and eventually against the program code of the system under test.





5.12.3 Incident logging

Incidents should be logged when someone other than the author of the product under test performs the testing. When describing an incident, diplomacy is required to avoid unnecessary conflict between the different teams involved in the testing process (e.g. developers and testers). Typically, the information logged on an incident will include:


Name of tester(s), date/time of incident, software under test ID
Expected and actual results
Any error messages
Test environment
Summary description
Detailed description, including anything deemed relevant to reproducing/fixing the potential fault (and continuing with work)
Scope
Test case reference
Severity (e.g. showstopper, unacceptable, survivable, trivial)
Priority (e.g. fix immediately, fix by release date, fix in next release)
Classification
Status (e.g. opened, fixed, inspected, retested, closed)
Resolution code (what was done to fix the fault)
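As a sketch of how such a log entry might be held in a central repository, the record below mirrors the fields listed above. The severity and priority scales anticipate the 1-to-4 classifications described next; the names and values in the example are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Severity(Enum):
    SHOWSTOPPER = 1
    UNACCEPTABLE = 2
    SURVIVABLE = 3
    COSMETIC = 4

class Priority(Enum):
    FIX_IMMEDIATELY = 1
    FIX_BEFORE_RELEASE = 2
    FIX_NEXT_RELEASE = 3
    NO_PLAN_TO_FIX = 4

@dataclass
class Incident:
    tester: str
    software_id: str
    test_case_ref: str
    summary: str
    expected: str
    actual: str
    severity: Severity
    priority: Priority
    environment: str = "system test"
    status: str = "opened"        # opened, fixed, inspected, retested, closed
    raised_at: datetime = field(default_factory=datetime.now)
    resolution_code: str = ""     # what was done to fix the fault

incident = Incident("A. Tester", "ORDERS v2.1", "TC-104",
                    "Total mis-calculated on discounted orders",
                    expected="99.00", actual="0.00",
                    severity=Severity.UNACCEPTABLE,
                    priority=Priority.FIX_BEFORE_RELEASE)
```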

Incidents must be graded to identify the severity of each incident and to improve the quality of reporting information. Many companies use a simple approach such as a numeric scale of 1 to 4, or high, medium and low. Beizer has devised a list and weighting for faults as follows:


1. Mild - Poor alignment, spelling etc.
2. Moderate - Misleading information, redundant information
3. Annoying - Bills for 0.00, truncation of name fields etc.
4. Disturbing - Legitimate actions refused; sometimes it works, sometimes not
5. Serious - Loss of important material; system loses track of data, records etc.
6. Very serious - Mis-posting of transactions
7. Extreme - Frequent and widespread mis-postings
8. Intolerable - Long-term errors from which it is difficult or impossible to recover
9. Catastrophic - Total system failure or out-of-control actions
10. Infectious - Other systems are being brought down



In practice, in the commercial world at least, this list is over the top, and many companies use a simpler approach, such as a numeric scale of 1 to 4, as outlined below:

1. Showstopper - Very serious fault; includes GPF, assertion failure or complete system hang
2. Unacceptable - Serious fault where the software does not meet business requirements and there is no workaround
3. Survivable - Fault that has an easy workaround; may involve partial manual operation
4. Cosmetic - Covers trivial faults like screen layouts, colors, alignments, etc.



Note that incident priority is not the same as severity. Priority relates to how soon the fault will be fixed and is often classified as follows:

1. Fix immediately.
2. Fix before the software is released.
3. Fix in time for the following release.
4. No plan to fix.

It is quite possible to have a severity 1 priority 4 incident and vice versa although the majority of severity 1 and 2 faults are likely to be assigned a priority of 1 or 2 using the above scheme.

5.12.4 Tracking and analysis

Incidents should be tracked from inception through various stages to eventual close-out and resolution. There should be a central repository holding the details of all incidents.

For management information purposes it is important to record the history of each incident. There must be incident history logs raised at each stage whilst the incident is tracked through to resolution, for traceability and audit purposes. This will also allow the formal documentation of the incidents (and the departments who own them) at a particular point in time.

Typically, entry and exit criteria take the form of the number of incidents outstanding by severity. For this reason it is imperative to have a corporate standard for the severity levels of incidents.

Incidents are often analyzed to monitor the test process and to aid in test process improvement. It is often useful to look at a sample of incidents and try to determine the root cause.

5.13     Standards for testing

There are now many standards for testing, classified as QA standards, industry-specific standards and testing standards. These are briefly explained in this section. QA standards simply specify that testing should be performed, industry-specific standards specify what level of testing to perform, and testing standards specify how to perform testing.

Ideally testing standards should be referenced from the other two.

The following table gives some illustrative examples of what we mean:

Type | Standard
QA standards | ISO 9000
Industry-specific standards | Railway signaling standard
Testing standards | BS 7925-1, BS 7925-2




























5.14 Summary

In module five you have learnt that the Test Manager faces an extremely difficult challenge in managing the test team and estimating and controlling a particular test effort for a project. In particular you can now:

Suggest five different ways in which a test team might be organized.

Describe at least five different roles that a test team might have.

Explain why the number of test cycles and re-work costs are important factors in estimating.

Describe at least three ways that a test effort can be monitored.

 List three methods of controlling the test effort to achieve the necessary completion criteria.

Prioritize incidents.

Understand the importance of logging all incidents.

 Understand the need for tracking and analysis of incidents.


 For More Testing Techniques:







Flipkart:
http://www.flipkart.com/advanced-test-strategy-istqb-foundation-questions-answers-included/p/itmdp9yzkgedxghz?pid=9781482812220

Amazon:
http://www.amazon.com/Advanced-Test-Strategy-Foundation--Questions-ebook/dp/B00FKS462K/

Tool Support for Testing







6.1 Overview

When people discuss testing tools they invariably think of automated testing tools, and in particular capture/replay tools. However, the market changes all the time and this module is intended to give you a flavor of the many different types of testing tool available. There is also a discussion about how to select and implement a testing tool for your organization. Remember the golden rule: if you automate a mess, you'll get automated chaos. Choose tools wisely!

6.2 Objectives

After completing this module you will be able to:

» Name up to thirteen different types of testing tools.

» Explain which tools are in common use today and why.

» Understand when test automation tools are appropriate and when they are not.

» Describe in outline a tool selection process.

6.3 Types of CAST tools

There are numerous types of computer-aided software testing (CAST) tool and these are briefly described below.

Requirements testing tools provide automated support for the verification and validation of requirements models, such as consistency checking and animation.

Static analysis tools provide information about the quality of the software by examining the code, rather than by running test cases through the code. Static analysis tools usually give objective measurements of various characteristics of the software, such as cyclomatic complexity and other quality metrics.

Test design tools generate test cases from a specification that must normally be held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.






Test data preparation tools enable data to be selected from existing databases or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.

Character-based test running tools provide test capture and replay facilities for dumb-terminal based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.

GUI test running tools provide test capture and replay facilities for WIMP interface based applications. The tools simulate mouse movement, button clicks and keyboard inputs and can recognize GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.

Test harnesses and drivers are used to execute software under test, which may not have a user interface, or to run groups of existing automated test scripts, which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software to cope with nuclear meltdowns).

Performance test tools have two main facilities: load generation and test transaction measurement. Load generation is done either by driving the application using its user interface or by test drivers, which simulate the load generated by the application on the architecture. Records of the numbers of transactions executed are logged. When driving the application through its user interface, response time measurements are taken for selected transactions and these are logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
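The sketch below illustrates both facilities in miniature: several concurrent 'virtual users' generate load while response times are logged and summarized. The transaction is a stub standing in for a real application call, and the user and transaction counts are arbitrary.

```python
import statistics
import threading
import time

def transaction() -> None:
    time.sleep(0.01)              # stand-in for a real request/response

def worker(n: int, log: list, lock: threading.Lock) -> None:
    for _ in range(n):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:                # log response time for this transaction
            log.append(elapsed)

log: list = []
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(50, log, lock))
           for _ in range(10)]    # 10 virtual users x 50 transactions each
for t in threads: t.start()
for t in threads: t.join()

print(f"{len(log)} transactions, "
      f"mean {statistics.mean(log)*1000:.1f} ms, "
      f"95th pct {statistics.quantiles(log, n=20)[18]*1000:.1f} ms")
```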

Dynamic analysis tools provide run-time information on the state of executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer arithmetic errors and other errors that are difficult to find 'statically'.

Debugging tools are mainly used by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement and to set and examine program variables.





Comparison tools are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
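As a sketch of the masking idea, the comparator below 'ignores' a chosen column range, such as a timestamp field that legitimately differs between runs; the record layout is an invented example.

```python
def compare(expected: list[str], actual: list[str],
            mask: slice) -> list[int]:
    """Return 1-based line numbers that differ outside the masked columns."""
    mismatches = []
    for i, (exp, act) in enumerate(zip(expected, actual), start=1):
        exp_masked = exp[:mask.start] + exp[mask.stop:]   # drop masked cols
        act_masked = act[:mask.start] + act[mask.stop:]
        if exp_masked != act_masked:
            mismatches.append(i)
    return mismatches

expected = ["2013-12-20 09:00:01 ORDER 42 TOTAL 99.00"]
actual   = ["2013-12-20 09:13:45 ORDER 42 TOTAL 99.00"]
print(compare(expected, actual, mask=slice(11, 19)))   # [] - time ignored
```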

Test management tools may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications and results. Some tools support the project management aspects of testing, for example the scheduling of tests, the logging of results and the management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and retesting of incidents. Most test management tools provide extensive reporting and analysis facilities.

Coverage measurement (or analysis) tools provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. Instrumentation code dynamically captures coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analysed and coverage statistics generated. Most tools provide statistics on the most common coverage measures, such as statement or branch coverage.


6.4 Tool selection and implementation

There are many test activities that can be automated, and test execution tools are not necessarily the first or only choice. Identify the test activities where tool support could be of benefit, and prioritize the areas of most importance.

Fit with your test process may be more important than choosing the tool with the most features when deciding whether you need a tool, and which one you choose. The benefits of tools usually depend on a systematic and disciplined test process; if testing is chaotic, tools may not be useful and may even hinder testing. You must have a good process now, or recognize that your process must improve in parallel with tool implementation. The ease with which CAST tools can be implemented might be called 'CAST readiness'.

Tools may have interesting features but may not necessarily be available on your platforms, e.g. 'works on 15 flavors of Unix, but not yours...'. Some tools, e.g. performance testing tools, require their own hardware, so the cost of procuring this hardware should be a consideration in your cost-benefit analysis. If you already have tools, you may need to consider the level and usefulness of integration with other tools, e.g. you may want a test execution tool to integrate with your existing test management tool (or vice versa). Some vendors offer integrated toolkits, e.g. test execution, test management and performance-testing bundles. Integration between some tools may bring major benefits; in other cases, the level of integration is cosmetic only.





Once automation requirements are agreed, the selection process has four stages:

1. Create a candidate tool shortlist.
2. Arrange demos.
3. Evaluate the selected tool(s).
4. Review and select the tool.

Before making a commitment to implementing the tool across all projects, a pilot project is usually undertaken to ensure the benefits of using the tool can actually be achieved. The objectives of the pilot are to gain some experience in the use of the tool, identify required changes in the test process, and assess the actual costs and benefits of implementation. Roll-out of the tool should be based on a successful result from the evaluation of the pilot. Roll-out normally requires strong commitment from tool users and new projects, as there is an initial overhead in using any tool in new projects.

Exercise

Incident management system

List some of your requirements for an incident management system.


6.5 Summary

In module six you have learnt about tool support for testing. In particular you can now:

Understand there are many different types of testing tool to support the test process.

Understand what CAST stands for.

Understand that you must have a mature test process before embarking on test automation.

Know why you must define requirements for a tool prior to purchasing one.





Thursday, 19 December 2013

Static testing



4.1 Overview

Static testing techniques are used to find errors before the software is actually executed; they therefore contrast with dynamic testing techniques, which are applied to a working system. The earlier we catch an error, the cheaper it usually is to correct. This module looks at a variety of different static testing techniques. Some are applied to documentation (e.g. walkthroughs, reviews and Inspections) and some are used to analyze the physical code (e.g. compilers, data flow analyzers). This is a huge subject and we can only hope to give an introduction in this module. You will be expected to appreciate the difference between the various review techniques and you will need to be aware of how and when static analysis tools are used.

4.2 Objectives

After completing this module you will:

§  Understand the various review techniques that you can apply to documentation and code.
§  Appreciate the difference between walkthroughs, formal reviews and inspections.
§  Understand how static analysis techniques can detect errors in code.
§  Understand two complexity metrics (lines of code and McCabe's metric).












4.3 REVIEWS AND THE TEST PROCESS

4.3.1 What is a review?

A review is a fundamental technique that must be used throughout the development lifecycle. Basically a review is any of a variety of activities involving evaluation of technical matter by a group of people working together. The objective of any review is to obtain reliable information, usually as to status and/or work quality.

4.3.2 Some history of reviews

During any project, management requires a means of assessing and measuring progress. The so-called progress review evolved as a means of achieving this. However, the results of those early reviews proved to be bitter experiences for many project managers. Just how long can a project remain at 90% complete? They found that they could not measure 'true' progress until they had a means of gauging the quality of the work performed. Thus the concept of the technical review emerged, to examine the quality of the work and provide input to the progress reviews.

4.3.3 What can be reviewed?

There are many different types of review throughout the development life cycle. Virtually any work product produced during development can be (and is) reviewed. This includes requirements documents, designs, database specifications, data models, code, test plans, test scripts, test documentation and so on.

4.3.4 What has this got to do with testing?

The old-fashioned view that reviews and testing are totally different things stems from the fact that testing used to be tacked onto the end of the development lifecycle. However, as we all now view testing as a continuous activity that must be started as early as possible, you can begin to appreciate the benefits of reviews. Reviews are the only testing technique available to us in the early stages of testing. At early stages in the development lifecycle we obviously cannot use dynamic testing techniques, since the software is simply not ready for testing.

Reviews are similar to test activities in that they must be planned (what are we testing?), there must be criteria for success (expected results), and it must be clear who will do the work (responsibilities). The next section examines the different types of review techniques in more detail.






4.4 TYPES OF REVIEW

Walkthroughs, informal reviews, technical reviews and Inspections are fundamental techniques that must be used throughout the development process. All have their strengths and weaknesses and their place in the project development cycle. All four techniques have some ground rules in common, as follows:

A structured approach must be adopted for the review process.

Be sure to know what is being reviewed - each component must have a unique identifier.

·         Changes must be configuration controlled.
·         Reviewers must prepare.
·         Reviewers must concentrate on their own specialization.
·         Be sure to review the product, not the person.

There must be:

Ø  Total group responsibility.
Ø  Correct size of review group.
Ø  Correct allocation of time.
Ø  Correct application of standards.

Checklists must be used.
Reports must be produced.
Quality must be specified in terms of:

Ø  Adherence to standards.
Ø  Reliability required.
Ø  Robustness required.
Ø  Modularity.
Ø  Accuracy.
Ø  Ability to handle errors.







4.5 REVIEW TEAM SIZE AND COMPOSITION

Problems with small teams must be avoided; bring in extra people, (perhaps use the Testing Team) to bring extra minds to bear on the issues.

Opinion is often divided as to whether or not the author should participate in a review. There are advantages to both scenarios. Since specifications and designs must be capable of being understood without the author present, an Inspection without them tests the document. Another reason for excluding the author from the review is that there should be team ownership of the product and team responsibility for the quality of all the deliverables, maintaining ownership via the author runs against this.

Alternatively, including the author can be a valuable aid to team building. Equally, an author may well be able to clear up misunderstandings in a document more quickly than another team member, thus saving the reviewer valuable time. From a practical perspective however, it is worth remembering that an author is the least likely person to identify faults in the document.

The one person who should not attend reviews is the manager. If, as in some cases, the manager is also a contributor to the deliverables, he should be included but treated the same as other group members. It is important that reviews remain a peer group process.

4.6 TYPE 1, 2 AND 3 REVIEW PROCESSES

The review process is both the most effective and the most universal test method, and management needs to make sure that the review process is working as effectively as possible. A useful model for the manager is the 1, 2, 3 model.

The 1, 2, 3 model is derived from the work of the first working party of the British Computer Society Specialist Interest Group in Software Testing, and the book that came from this work, Testing in Software Development.

Type 1 testing is the process of making sure that the product (document, program, screen design, clerical procedure or Functional Specification) is built to standards and contains those features that we would expect from the name of that product. It is a test to make sure that the product conforms to standards and is internally consistent, accurate and unambiguous.

Type 2 testing is the process of testing to see if the product conforms to the requirements as specified by the output of the preceding project stage, and ultimately the Specification of Requirements for the whole project. Type 2 testing is backward looking and checks that the product is consistent with the preceding documentation (including information on change).




Type 3 testing is forward looking and is about the specification of the certification process and the tests that are to be done on the delivered product. It asks the question: can we build the deliverables (test material, training material, next-stage analysis documentation)?


[Diagram: the 1, 2, 3 model. Type 1 checks the product against standards; Type 2 looks backward to the preceding stage's output; Type 3 looks forward to the deliverables to be built from the product.]

4.6.1 Make reviews incremental

Whilst use of the 1, 2, 3 model will improve review technique, the reviewing task can be made easier by having incremental reviews throughout product construction.

This will enable reviewers to have a more in-depth understanding of the product that they are reviewing, and to start constructing Type 3 material.

4.6.2 General review procedure for documents

The test team will need a general procedure for reviewing documents, as this will probably form a large part of the team's work.

1. Establish standards and format for the document.
2. Check the contents list.
3. Check the appendix list.
4. Follow up references outside the document.
5. Cross-check references inside the document (steps 4 and 5 lend themselves to simple automation; see the sketch after this list).
6. Check for correct and required outputs.
7. Review each screen layout against the appropriate standard, data dictionary, processing rules and files/database access.
8. Review each report layout against the appropriate standard, data dictionary, processing rules and files/database access.
9. Review comments and reports on work and reviews done prior to this review.
10. Documents reviewed will range from whole reports, such as the Specification of Requirements, to pages from output that the system is to produce. All documents will need careful scrutiny.
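Parts of this procedure lend themselves to crude automation. As a sketch of a reference cross-check, assuming an invented [REF-n] citation convention rather than any real documentation standard:

```python
import re

# Invented convention: references look like [REF-123].
REF = re.compile(r"\[REF-\d+\]")

def check_references(document: str, defined: set[str]) -> list[str]:
    """Return cited references that are never defined anywhere."""
    cited = set(REF.findall(document))
    return sorted(cited - defined)

doc = "See [REF-1] and [REF-2] for the screen layouts."
print(check_references(doc, defined={"[REF-1]"}))   # ['[REF-2]']
```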

4.6.3 Report on review
The report should categorize the products reviewed:

Defective | All agreed | Total rework
Defective but soluble | All agreed solution acceptable | Rework and review all material
Possible defect, needs explanation | Some but not all agree | Seek explanation; possibly review some of the work
Quality issue | Prefer an alternative | Costs compared to standards
Acceptable | All accept | Proceed
Over-developed for the budget | Most agree | Proceed but review budgets










Be sensitive to the voice of the concerned but not necessarily assertive tester in the team; this person may well have observed a fault that all the others have missed. The person with the loud voice or strong personality should not be allowed to dominate.

4.7 INSPECTIONS

This section on Inspections is based on an edited version of selected extracts from Tom Gilb and Dorothy Graham's book [Gilb, Graham 93].

4.7.1 Introduction

Michael E. Fagan at IBM Kingston NY Laboratories developed the Inspection technique. Fagan, a certified quality control engineer, was a student of the methods of the quality gurus W. Edwards Deming and J. M. Juran.

Fagan decided, on his own initiative, to use industrial hardware statistical quality methods on a software project he was managing in 1972-74. The project consisted of the translation of IBM's software from Assembler Language to PL/S, a high-level programming language. Fagan's achievement was to make statistical quality and process control methods work on 'ideas on paper'. In 1976 he reported his results outside IBM in a now famous paper [Fagan, 1976].

The Inspection technique was developed further by Caroline L. Jones and Robert Mays at IBM [Jones, 1985], who created a number of useful enhancements:

§  The kickoff meeting, for training, goal setting, and setting a strategy for the current Inspection cycle;
§  The causal analysis meeting;
§  The action database;
§  The action team.

4.7.2 Reviews and walk-through

Reviews and walkthroughs are typically peer group discussion activities - without much focus on fault identification and correction, as we have seen. They are usually without the statistical quality improvement, which is an essential part of Inspection. Walkthroughs are generally a training process, and focus on learning about a single document. Reviews focus more on consensus and buy-in to a particular document.








It may be wasteful to do walkthroughs or consensus reviews unless a document has successfully exited from Inspection. Otherwise you may be wasting people's time by giving them documents of unknown quality, which probably contain far too many opportunities for misunderstanding, learning the wrong thing and agreement about the wrong things.

Inspection is not an alternative to walkthroughs for training, or to reviews for consensus. In some cases it is a pre-requisite. The different processes have different purposes. You cannot expect to remove faults effectively with walkthroughs, reviews or distribution of documents for comment. However, in other cases it may be wasteful to Inspect documents which have not yet 'settled down' technically. Spending time searching for and removing faults in large chunks, which are later discarded, is not a good idea. In this case it may be better to aim for approximate consensus documents. The educational walkthrough could occur either before or after Inspection.


4.7.3 Statistical quality improvement

The fundamental difference between Inspection and other review methods is that Inspection provides a tool to help improve the entire development process, through a well-known quality engineering method: statistical process control [Godfrey, 1986].

This means that the data which is gathered and analyzed as part of the Inspection process - data on faults, and the hours spent correcting them - is used to analyze the entire software engineering process. Widespread weaknesses in a work process can be found and corrected. Experimental improvements to work processes can be confirmed by the Inspection metrics - and confidently spread to other software engineers.

4.7.4 Comparison of Inspection and testing

Inspection and testing both aim at evaluating and improving the quality of the software engineering product before it reaches the customers. The purpose of both is to find and then fix errors, faults and other potential problems.

Inspection and testing can be applied early in software development, although Inspection can be applied earlier than test. Both Inspection and test, applied early, can identify faults, which can then be fixed when it is still much cheaper to do so.

Inspection and testing can be done well or badly. If they are done badly, they will not be effective at finding faults, and this causes problems at later stages, test execution, and operational use.









We need to learn from both Inspection and test experiences. Inspection and testing should both ideally (but all too rarely in practice) produce product-fault metrics and process-improvement metrics, which can be used to evaluate the software development process. Data should be kept on faults found in Inspection, faults found in testing, and faults that escaped both Inspection and test and were only discovered in the field. This data would reflect frequency, document location, severity, cost of finding, and cost of fixing.

There is a trade-off between fixing and preventing. The metrics should be used to fine-tune the balance between the investment in the fault detection and fault prevention techniques used. The cost of Inspection, test design, and test running should be compared with the cost of fixing the faults at the time they were found, in order to arrive at the most cost-effective software development process.

4.7.5 Differences between Inspection and testing

Inspection can be used long before executable code is available to run tests. Inspection can be applied much earlier than dynamic testing, and can also be applied earlier than test design activities. Tests can only be defined when a requirements or design specification has been written, since that specification is the source for knowing the expected result of a test execution.

The one key thing that testing does and Inspection does not, is to evaluate the software while it is actually performing its function in its intended (or simulated) environment. Inspection can only examine static documents and models; testing can evaluate the product working.

Inspection, particularly the process improvement aspect, is concerned with preventing software engineers from inserting any form of fault into what they write. The information gained from faults found in running tests could be used in the same way, but this is rare in practice.

4.7.6 Benefits of Inspection

Opinion is divided over whether Inspection is a worthwhile element of any product 'development' process. Critics argue that it is costly; it demands too much 'upfront' time and is unnecessarily bureaucratic. Supporters claim that the eventual savings and benefits outweigh the costs and the short-term investment is crucial for long-term savings.









In IT-Start's Developers Guide (1989), it is estimated that 'The cost of non-quality software typically accounts for 30% of development costs and 50% to 60% of the lifecycle costs'. Faults, then, are costly, and this cost increases the later they are discovered. Inspection applied to all software is arguably the prime technique to reduce defect levels (in some cases to virtually zero defects) and to provide an increased maturity level through the use of Inspection metrics. The savings can be substantial.

Direct savings


Development productivity is improved.
Fagan, in his original article, reported a 23% increase in 'coding productivity alone' using Inspection [Fagan, 1976, IBM Systems Journal, p 187]. He later reported further gains with the introduction of moderator training, design and code change control, and test fault tracking.

Development timescale is reduced.
Considering only the development timescales, typical net savings for project development are 35% to 50%.

Cost and time taken for testing is reduced.
Inspection reduces the number of faults still in place when testing starts because they have been removed at an earlier stage. Testing therefore runs more smoothly, there is less debugging and rework and the testing phase is shorter. At most sites Inspection eliminates 50% to 90% of the faults in the development process before test execution starts.

Lifetime costs are reduced and software reliability increased.
Inspection can be expected to reduce total system maintenance costs due to failure reduction and improvement in document intelligibility, therefore providing a more competitive product.

Indirect savings


Management benefits.
Through Inspection, managers can expect access to relevant facts and figures about their software engineering environment, meaning they will be able to identify problems earlier and understand the payoff for dealing with these problems.

Deadline benefits.
Although it cannot guarantee that an unreasonable deadline will be met, through quality and cost metrics Inspection can give early warning of impending problems, helping avoid the temptation of inadequate correction nearer the deadline.





Organizational and people benefits.
For software professionals Inspection means their work is of better quality and more maintainable. Furthermore, they can expect to live under less intense deadline pressure. Their work should be more appreciated by management, and their company's products will gain a competitive edge.

4.7.7 Costs of Inspection

The cost of running an Inspection is approximately 10% - 15% of the development budget. This percentage is about the same as other walkthrough and review methods. However, Inspection finds far more faults for the time spent and the upstream costs can be justified by the benefits of early detection and the lower maintenance costs that result.


As mentioned earlier, the costs of Inspection include additional 'up front' time in the development process and increased time spent by authors writing documents they know will be Inspected. Implementing and running Inspections will involve long-term costs in new areas. An organization will find that time and money go on:

Ø  Inspection leader training.
Ø  Management training.
Ø  Management of the Inspection leaders.
Ø  Metrics analysis.
Ø  Experimentation with new techniques to try to improve Inspection results.
Ø  Planning, checking and meeting activity: the entire Inspection process itself.
Ø  Quality improvement: the work of the process improvement teams.

The company may also find it effective to consider computerized tools for documentation and consistency checking. Another good investment might be improved meeting rooms or sound insulation so members of the Inspection team can concentrate during checking.






4.7.8 Product Inspection steps
§  The Inspection process is initiated with a request for Inspection by the author or owner of a task product document.
§  The Inspection leader checks the document against entry criteria, reducing the probability of wasting resources on a product destined to fail.
§  The Inspection objectives and tactics are planned. Practical details are decided upon and the leader develops a master plan for the team.
§   A kickoff meeting is held to ensure that the checkers are aware of their individual roles and the ultimate targets of the Inspection process.
§  Checkers work independently on the product, document using source documents, rules, procedures and checklists. Potential faults are identified and recorded.
§  A logging meeting is convened during which potential faults and issues requiring explanation, identified by the individual checkers, are logged. The checkers now work as a team, aiming to discover further faults. Finally, suggestions for ways of improving the process itself are logged.
§  An editor (usually the author) is given the log of issues to resolve. Faults are now classified as such and a request for permission to make the correction and improvements to the product is made to the document's owner. Footnotes might be added to avoid misinterpretation. The editor may also make further process improvement suggestions.
§  The leader ensures that the editor has taken action to correct all known faults, although the leader need not check the actual corrections.
§  The exit process is performed by the Inspection leader who uses application generic and specific exit criteria.
§  The Inspection process is closed and the product made available with an estimate of the remaining faults in a 'warning label'.








Exercise

Comparison between Various Techniques

Take a few moments to complete the following table.

TECHNIQUE | PRIMARILY USED FOR | INVOLVES | LED BY | FORMALITY
Walkthroughs | Education; dry runs | Peer group | Author | Fairly informal
Informal reviews | Fault detection | Anyone | Individual | Undocumented; cheap; useful
Technical reviews | Fault detection | Peer group; technical experts; no managers | Individual | Formal; documented; no metrics kept
Inspections | Fault detection; process improvement | Defined roles | Trained moderator (not the author) | Very formal; rules, checklists, metrics kept


4.9.2 McCabe’s complexity metric

McCabe's complexity metric is a measure of the complexity of a module's decision structure. It is the number of linearly independent paths and therefore the minimum number of paths that should be tested. The metric can be calculated in three different ways: the number of decisions plus one; the number of 'holes' or connected regions in the control-flow graph (bearing in mind that there is a fictitious link between the entry and exit of the program); or the equation:

M = L - N + 2P

where:
L = the number of links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph

Despite its simplicity, the McCabe metric is based on deep properties of program structure. Its greatest advantage is that it is almost as easy to calculate as the 'lines of code' metric, yet it results in a considerably better correlation of complexity with faults and with the difficulty of testing.
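As a worked example, the sketch below applies the formula to the control-flow graph of a single IF-THEN-ELSE statement (one connected part, so P = 1); the node and link counts are spelled out in the comments.

```python
def cyclomatic(links: int, nodes: int, parts: int = 1) -> int:
    """McCabe's M = L - N + 2P for a control-flow graph."""
    return links - nodes + 2 * parts

# Graph for one IF-THEN-ELSE:
#   Nodes: entry, decision, then, else, join, exit            (N = 6)
#   Links: entry->decision, decision->then, decision->else,
#          then->join, else->join, join->exit                 (L = 6)
print(cyclomatic(links=6, nodes=6))   # 2, i.e. one decision plus one
```

The result agrees with the 'decisions plus one' rule: one binary decision gives a complexity of 2, and two test paths cover both branches.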

McCabe advises partitioning programs where the complexity is greater than 10, and this has been supported by studies such as Walsh's, which found that the 23% of routines with an M value greater than 10 contained 53% of the faults. There does appear to be a discontinuous jump in the fault rate around M = 10. As an alternative to partitioning, others have suggested that the resources for development and testing should be allocated in relation to the McCabe measure of complexity, giving greater attention to modules that exceed this value.

The weakness of the McCabe metric is found in the assumption that faults are proportional to decision complexity; in other words, that processing complexity and database structure, amongst other things, are irrelevant. Equally, it does not distinguish between different kinds of decisions. A simple IF-THEN-ELSE statement is treated the same as a relatively complicated loop, yet we intuitively know that the loop is likely to have more faults. Also, CASE statements are treated the same as nested IF statements, which is again counter-intuitive.




